Re: Event handling
Bert Peers wrote:
> > That idea is from the Java 1.1 event model. It is a generalized version
> > of what you said. Instead of processing everything in the main loop
> > (with something like a giant switch statement), sinks subscribe to
> > emitters, which will send events directly to the interested sinks.
>
> Interesting. Where's the heartbeat then ? Not the timeflow,
> but the "check for waiting events"... Is it the emitter's reponsibility
> to track the subscribers and push the events out to them,
> or are sink and emitter both subscribing to a generic queue
> thing which accepts emitter's stuff and waits for a
> "Heartbeat" call to flush the stuff to the sink ? In the latter
> case both are the same, you just replace one queue with
> several. I guess.
In Java 1.1, it's the emitter's responsibility. Like I said, added
bookkeeping.
The idea with this is to remove the extraneous checking for events we
don't care about. By having somebody (in my "new" model, the queue) send
the events directly to those objects interested in them, there's no case
of events cascading up and down an object tree.
> > The downside is that it adds bookkeeping, emitters keeping tabs on all the
> > sinks they must send events to, sinks remembering to unsubscribe
> > themselves in due time (usually in the destructor)...
>
> This would probably mean making every emitter derived from a base
> emitter class, which is probably a hacked up (so to speak) queue.
> Which means option 1 and option 2 above are really the same..
Yes, to provide a "subscribe" method. Not for a queue or anything, though.
In my "new" model, you'd simply give the queue the event, no base
emitter class. The queue would have a method to subscribe interested
sinks, and sinks would need a base sink class (just like in Java 1.1).
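To make that a bit more concrete, here is a very rough sketch of what I
have in mind; all the names are invented for illustration, this is not
code from Java, ClanLib or any other real library:

#include <list>
#include <map>

enum EventType { EV_KEY, EV_MOUSE, EV_TIMER };

struct Event {
    EventType type;
    int a, b;               // payload: key code, mouse deltas, tick count...
};

// Base sink class, like the listener interfaces in Java 1.1.
class Sink {
public:
    virtual ~Sink() {}
    virtual void handle_event(const Event &ev) = 0;
};

class EventQueue {
public:
    // Sinks subscribe only to the event types they care about.
    void subscribe(EventType type, Sink *sink)   { sinks[type].push_back(sink); }
    // Usually called from the sink's destructor.
    void unsubscribe(EventType type, Sink *sink) { sinks[type].remove(sink); }

    // Emitters just hand their event to the queue; no base emitter class.
    void post(const Event &ev) { pending.push_back(ev); }

    // The "heartbeat": flush pending events straight to the interested
    // sinks, so nothing cascades up and down an object tree.
    void dispatch() {
        while (!pending.empty()) {
            Event ev = pending.front();
            pending.pop_front();
            std::list<Sink *> &l = sinks[ev.type];
            for (std::list<Sink *>::iterator i = l.begin(); i != l.end(); ++i)
                (*i)->handle_event(ev);
        }
    }

private:
    std::map<EventType, std::list<Sink *> > sinks;
    std::list<Event> pending;
};

The main loop just posts whatever the system gives it and calls
dispatch() once per frame.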
> > Interestingly, I am reversing from my idea and no longer think it is
> > good for games. That emitter/sink model is excellent, but it lacks
> > serialization of events, since they don't go thru a single event queue.
>
> Yes, but would you want that ? Out Of Order is the hot thing
> everywhere, why not in the event loop :)
Because it is not useful. :-) Here's the meat from Carmack's rant (very
interesting):
> Since DOOM, our games have been defined with portability in mind.
> Porting to a new platform involves having a way to display output,
> and having the platform tell you about the various relevant inputs.
> There are four principal inputs to a game: keystrokes, mouse moves,
> network packets, and time. (If you don't consider time an input
> value, think about it until you do -- it is an important concept)
>
> These inputs were taken in separate places, as seemed logical at the
> time. A function named Sys_SendKeyEvents() was called once a
> frame that would rummage through whatever it needed to on a
> system level, and call back into game functions like Key_Event( key,
> down ) and IN_MouseMoved( dx, dy ). The network system
> dropped into system specific code to check for the arrival of packets.
> Calls to Sys_Milliseconds() were littered all over the code for
> various reasons.
>
> I felt that I had slipped a bit on the portability front with Q2 because
> I had been developing natively on windows NT instead of cross
> developing from NEXTSTEP, so I was reevaluating all of the system
> interfaces for Q3.
>
> I settled on combining all forms of input into a single system event
> queue, similar to the windows message queue. My original intention
> was to just rigorously define where certain functions were called and
> cut down the number of required system entry points, but it turned
> out to have much stronger benefits.
>
> With all events coming through one point (The return values from
> system calls, including the filesystem contents, are "hidden" inputs
> that I make no attempt at capturing), it was easy to set up a
> journalling system that recorded everything the game received. This
> is very different than demo recording, which just simulates a network
> level connection and lets time move at its own rate. Realtime
> applications have a number of unique development difficulties
> because of the interaction of time with inputs and outputs.
>
> Transient flaw debugging. If a bug can be reproduced, it can be
> fixed. The nasty bugs are the ones that only happen every once in a
> while after playing randomly, like occasionally getting stuck on a
> corner. Often when you break in and investigate it, you find that
> something important happened the frame before the event, and you
> have no way of backing up. Even worse are realtime smoothness
> issues -- was that jerk of his arm a bad animation frame, a network
> interpolation error, or my imagination?
>
> Accurate profiling. Using an intrusive profiler on Q2 doesn't give
> accurate results because of the realtime nature of the simulation. If
> the program is running half as fast as normal due to the
> instrumentation, it has to do twice as much server simulation as it
> would if it wasn't instrumented, which also goes slower, which
> compounds the problem. Aggressive instrumentation can slow it
> down to the point of being completely unplayable.
>
> Realistic bounds checker runs. Bounds checker is a great tool, but
> you just can't interact with a game built for final checking, it's just
> waaaaay too slow. You can let a demo loop play back overnight, but
> that doesn't exercise any of the server or networking code.
>
> The key point: Journaling of time along with other inputs turns a
> realtime application into a batch process, with all the attendant
> benefits for quality control and debugging. These problems, and
> many more, just go away. With a full input trace, you can accurately
> restart the session and play back to any point (conditional
> breakpoint on a frame number), or let a session play back at an
> arbitrarily degraded speed, but cover exactly the same code paths.
>
> I'm sure lots of people realize that immediately, but it only truly sunk
> in for me recently. In thinking back over the years, I can see myself
> feeling around the problem, implementing partial journaling of
> network packets, and including the "fixedtime" cvar to eliminate most
> timing reproducibility issues, but I never hit on the proper global
> solution. I had always associated journaling with turning an
> interactive application into a batch application, but I never
> considered the small modification necessary to make it applicable to
> a realtime application.
>
> In fact, I was probably blinded to the obvious because of one of my
> very first successes: one of the important technical achievements
> of Commander Keen 1 was that, unlike most games of the day, it
> adapted its play rate based on the frame speed (remember all those
> old games that got unplayable when you got a faster computer?). I
> had just resigned myself to the non-deterministic timing of frames
> that resulted from adaptive simulation rates, and that probably
> influenced my perspective on it all the way until this project.
>
> It's nice to see a problem clearly in its entirety for the first time, and
> know exactly how to address it.
So, what do you think?
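To give an idea of what "time as an input" looks like in code, here is a
rough sketch; Key_Event() and IN_MouseMoved() are the names mentioned in
the rant, everything else here is invented for illustration:

// Sketch only: the whole game driven from one event stream, with time
// delivered as an event instead of read from a clock. Invented names.
#include <deque>

enum SysEventType { SE_KEY, SE_MOUSE, SE_PACKET, SE_TIME };

struct SysEvent {
    SysEventType type;
    int value, value2;      // key/down, dx/dy, packet length, or milliseconds
};

void Key_Event(int key, int down) { /* ... */ }
void IN_MouseMoved(int dx, int dy) { /* ... */ }
void Net_Packet(int len) { /* ... */ }
void Com_Frame(int msec) { /* run one frame of simulation at this time */ }

void run(std::deque<SysEvent> &queue)
{
    while (!queue.empty()) {
        SysEvent ev = queue.front();
        queue.pop_front();
        switch (ev.type) {
        case SE_KEY:    Key_Event(ev.value, ev.value2);     break;
        case SE_MOUSE:  IN_MouseMoved(ev.value, ev.value2); break;
        case SE_PACKET: Net_Packet(ev.value);               break;
        case SE_TIME:   Com_Frame(ev.value);                break;
        }
    }
}

Whether the queue is filled from the OS or read back from a journal file
makes no difference whatsoever to the game code.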
> > But the most innovative idea in the Carmack rant was to make
> > *everything* into an event, including time, so that just recording the
> > events occurring would enable perfect demo recording, unlike Doom (which
> > could get "out of sync"). You could even automate testing using this,
> > and you're not even forced to follow real-time, you can make testing a
> > batch job!
>
> We're using this technique, but it doesn't really have anything to do
> with emitter/sink versus single-queue. Basically any scheme where
> the client is a "dumb terminal" thing (as in Quake) is ready for this :
> just replace the communication with the server with a stub that
> replays what came back from the server when actual play was
> going on. If you squeeze *everything* through the server,
> including mousemoves, you also get the right look-at in FPS
> or mouse moves in select-and-click 2D games.
Quake 1 was designed with that ideal of a "terminal", but QuakeWorld and
Quake 2 proved that for sluggish networks like the Internet, you needed
to make the client much more of a lightweight thing.
Doom, as well as Quake 1 and 2, did "replays" (demos) through
journaling of network packets, server-provided time ticks and event
timestamps, similar to what you propose. It works, but not as reliably
as the single system queue solution. Look at how weird timings on a
different computer could sometimes cause a demo to crash (well, do the
replay incorrectly). Once you accept that you simply *CANNOT* pass
everything through the server, you understand why...
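That's the whole point of journaling at the single queue rather than at
the network layer: you record *everything* the game sees, time included,
so a replay cannot get out of sync. A rough sketch of what I mean
(invented names again, nothing from an actual engine):

// Sketch: a journal wrapped around the single system queue. In "record"
// mode every event is written out as it is dispatched; in "replay" mode
// the file *is* the event source, so the run is reproduced exactly.
#include <cstdio>

enum SysEventType { SE_KEY, SE_MOUSE, SE_PACKET, SE_TIME };
struct SysEvent { SysEventType type; int value, value2; };

class Journal {
public:
    enum Mode { OFF, RECORD, REPLAY };

    Journal(Mode m, const char *path) : mode(m), file(0) {
        if (mode != OFF)
            file = std::fopen(path, mode == RECORD ? "wb" : "rb");
    }
    ~Journal() { if (file) std::fclose(file); }

    // Called for every event the queue is about to hand out.
    void record(const SysEvent &ev) {
        if (mode == RECORD && file)
            std::fwrite(&ev, sizeof(ev), 1, file);
    }

    // In replay mode the journal replaces the real inputs entirely.
    bool next(SysEvent &ev) {
        return mode == REPLAY && file &&
               std::fread(&ev, sizeof(ev), 1, file) == 1;
    }

private:
    Mode mode;
    std::FILE *file;
};

Since time arrives as SE_TIME events, a replay can push them through as
fast as fread() allows and the game still covers exactly the same code
paths, which is the batch-testing idea below.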
> > For example, say you emit a "time tick" event every 10 ms. In
> > a real game, the tick would happen every 10 ms, as it is supposed to, but you
> > could batch-test games by simply sending the ticks as fast as possible, with
> > the necessary input in between them. The game would run very fast,
> > but would behave identically to when it was recorded, just faster!
>
> Another advantage we discovered is that if you're careful what you
> log, you can start kicking out components and still have the playback
> going correctly. As a trivial example, you should be able to kick
> out the sound playing module and still see the right visuals, you just
> replace the sound lib with a stub that ignores those replayed server
> events which concern sound. The same is possible for graphics
> output or even some AI (client-side server prediction etc., it can
> get complicated). Which is very, very handy if the game dumps
> core in frame 174635. Kick out modules until it doesn't crash
> anymore !
Yes, exactly. In fact, your idea of passing everything through the
server is the same as mine. Just make the "server" the "system queue",
and you've got it. It's close, it's fast, and it works. It can send to a
real server whatever events need to get there, just as it sends events
to local objects. In fact, the "real" server connection could very well
be implemented as a local object encapsulating the sockets and low-level
grunt work of sending over events (your proxies).
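In other words, the connection to the real server is just one more
subscriber. Building on the queue/sink sketch earlier (still hypothetical
names, not a real API), it could look like:

// Sketch: the "real" server connection as a local sink that forwards the
// events it cares about over a socket. Assumes the Event, Sink and
// EventQueue classes from the earlier sketch.
#include <sys/socket.h>

class ServerProxy : public Sink {
public:
    ServerProxy(EventQueue &q, int socket_fd) : queue(q), fd(socket_fd) {
        // Only subscribe to what the server needs to see.
        queue.subscribe(EV_KEY, this);
        queue.subscribe(EV_MOUSE, this);
    }
    virtual ~ServerProxy() {
        queue.unsubscribe(EV_KEY, this);
        queue.unsubscribe(EV_MOUSE, this);
    }
    virtual void handle_event(const Event &ev) {
        // Serialize and send; the rest of the game never touches the socket.
        send(fd, &ev, sizeof(ev), 0);
    }
private:
    EventQueue &queue;
    int fd;
};

The rest of the game doesn't know or care whether a subscriber is a local
object or a proxy for something on the other end of a socket.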
It seems that we agree. :-)
It also seems to me like the best way I know of; would anybody care to
point me to a better one? :-)
--
Pierre Phaneuf
Ludus Design, http://ludusdesign.com/