Re: gettimeofday() and clock
Steve Baker wrote:
> Mads Bondo Dydensborg wrote:
>
>> On Sun, 25 Aug 2002, Steve Baker wrote:
>>
>>> usleep is given an argument in MICROSECONDS - but in practice it can't
>>> wake your program up any faster than the kernel timeslice - which is
>>> 1/50th second.
>>
>> Eh? Depends on platform, I believe, last I checked was 100Hz on Intel,
>> 1000Hz on alpha.
>
> Really? I could have sworn it was 50Hz on Intel - because when my program
> does a short usleep (say 1 millisecond), it generally sleeps for ~20ms -
> which is 1/50th second. Hence, I deduce that the kernel only wakes up and
> reschedules my program 50 times a second - not at 100Hz as you suggest.
>
My game engine indirectly proves that time slices are 10ms on kernel
2.4.0. I should perhaps mention that it is multithreaded. The rendering
thread measures the time it needs to render the scene. Currently there
is only one small test scene with about 10 polys (and that on a GF3 -
I don't know what I bought it for ;), which renders in about 1ms. But
from time to time it measures a time of 10 or 11ms. I'm not giving up
parts of my time slices, though.
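To illustrate the kind of measurement I mean, here is a minimal sketch
(not my engine code - render_scene() is just a hypothetical stand-in
that burns about 1ms, and gettimeofday() is assumed as the clock):

#include <stdio.h>
#include <sys/time.h>

/* Hypothetical stand-in for the real rendering call: burns roughly 1ms. */
static void render_scene(void)
{
    struct timeval start, now;
    gettimeofday(&start, NULL);
    do {
        gettimeofday(&now, NULL);
    } while ((now.tv_sec - start.tv_sec) * 1000000L +
             (now.tv_usec - start.tv_usec) < 1000L);
}

int main(void)
{
    struct timeval before, after;
    int i;

    for (i = 0; i < 100; i++) {
        gettimeofday(&before, NULL);
        render_scene();
        gettimeofday(&after, NULL);

        /* Normally prints ~1000us; an occasional 10000-11000us reading
           means the scheduler took the CPU away for a full time slice. */
        printf("frame took %ldus\n",
               (after.tv_sec - before.tv_sec) * 1000000L +
               (after.tv_usec - before.tv_usec));
    }
    return 0;
}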
Printed docs on the Linux kernel say the same (I'm referring mainly to a
German book titled "Linux-Kernel-Programmierung", which describes the
internals of kernel version 2.0).
The reason why time slices seem to be around 20ms is probably that
processor time is deliberately given up. If I remember the scheduling
algorithm correctly, this leads to a lower priority for the process,
because it already had the last time slice and gave up part of it (i.e.
it doesn't seem to have much to do at the moment, which would be a
correct assumption for a program waiting for data to arrive).
> I also believe the 50Hz figure because the *original* UNIX on PDP-11's used
> 50Hz - and I presumed that Linus picked the same number for compatibility.
>
This statement about compatibility doesn't make any sense to me.
> Try running this:
>
(program cut out here)
This program really gives up two time slices here whenever usleep() is
called. See above for possible reasons.
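A minimal sketch of such a test (not the original program, which was
cut above): it asks usleep() for 1ms and measures with gettimeofday()
how long the sleep really took:

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval before, after;
    int i;

    for (i = 0; i < 10; i++) {
        gettimeofday(&before, NULL);
        usleep(1000);               /* ask for 1ms */
        gettimeofday(&after, NULL);

        /* On a 100Hz kernel this typically prints ~20000us: the process
           sleeps past the end of its current slice and is only
           rescheduled a full slice later. */
        printf("usleep(1000) took %ldus\n",
               (after.tv_sec - before.tv_sec) * 1000000L +
               (after.tv_usec - before.tv_usec));
    }
    return 0;
}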
> I'd *really* like to see a shorter timeslice on Intel. With
> 2GHz CPU's, even 100Hz is like an eternity.
>
But it should be sufficient for target frame rates of 20-30fps. Any
framerate higher than 30fps does not lead to perceivably better
rendering of *motion*. The human eye just isn't fast enough for that;
anything else is a red herring. During the development of the IMAX
cinema format, research was conducted into this area, with the result
that framerates higher than the one IMAX uses for its films are a plain
waste of celluloid.
But why can we see a difference between monitor refresh rates of 60Hz
and 100Hz? The answer is that this is flickering, i.e. changes in light
intensity, not colour changes (and it is the latter that carry most of
the motion information). You cannot perceive the monitor flickering if
you let a small white rectangle move on an otherwise black screen. If
the rectangle stood still, the flickering would become visible.
Cinema projectors use a little trick to prevent flickering: they usually
advance the celluloid at a rate of 24 frames/sec, but the shutter does
not open and close just once per frame, as you would probably expect. It
actually opens twice or three times per frame, simulating a screen
refresh rate of 48Hz or 72Hz depending on the projector.
> A 1000Hz kernel timeslice - and (in consequence) a usleep that was
> accurate to ~1ms instead of ~20ms - would solve a *TON* of problems
> for graphics programs that want to run at 60Hz and yet still use
> 'usleep' to avoid blocking the CPU.
>
Why do you want to run at 60Hz at all costs? Doing internal simulations
at a higher rate makes sense, so the way to go would be to step the
simulation at a much higher rate than the rendering. Depending on the
algorithms used in the simulation, this can even cause a tremendous
increase in simulation accuracy. The only thing to avoid is too small a
step size for the simulation.
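A minimal sketch of that decoupling; step_simulation() and
render_scene() are hypothetical stand-ins for the real engine code, and
the 1ms fixed step is just an assumed example value:

#include <sys/time.h>

/* Hypothetical stand-ins for the real engine code. */
static void step_simulation(double dt) { /* advance physics by dt seconds */ }
static void render_scene(void)         { /* draw the current state */ }

static double now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

int main(void)
{
    const double dt = 0.001;          /* assumed fixed 1ms simulation step */
    double accumulator = 0.0;
    double last = now_seconds();

    for (;;) {
        double t = now_seconds();
        accumulator += t - last;
        last = t;

        /* Run as many fixed steps as the elapsed wall time demands,
           so the simulation rate is independent of the frame rate. */
        while (accumulator >= dt) {
            step_simulation(dt);
            accumulator -= dt;
        }
        render_scene();               /* render once per loop, e.g. 20-30fps */
    }
}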
Oh, and did I mention that increasing the scheduling frequency from
100Hz to 1kHz causes the scheduler to use ten times as much processor power?
Gregor