Re: [AD] X vsync emulation (timing)



On Thu, 18 Nov 2004 15:09:45 +1100, Peter Wang <tjaden@xxxxxxxxxx> wrote:
> Here's a simple patch that implements the vsync-emu idea I described,
> i.e. incrementing a counter from the bg_man thread.  With this, calling
> vsync() delays the program as is expected by many programs, including
> some of Allegro examples (e.g. exstars and exspline are back to
> normal).  It's not the same delay that you usually expect (100Hz instead
> of 70Hz) but that should be fine.
> 
> As a side effect, programs that use vsync() no longer take up 100% CPU
> on X.  It's like calling rest().

I can't say I like this. For one, it forces the main thread to sleep
for the duration of the timeslice, unconditionally, whenever vsync is
called. Also, the current way of vsync'ing does not always run at
70Hz, but is adjusted if the screen refresh rate is known (or can be
reasonably inferred). With this patch, it's basically timing against the
scheduler at a static 100Hz, as opposed to timing against something
reasonably assumed to be near the actual refresh rate.

What I interpreted, when the idea was initially proposed, was to use
the background thread with a new procedure, not the input procedure,
to increment retrace_count, and for vsync to busy-loop until
retrace_count increments. The only vsync implementation I've ever
heard of that reduces CPU usage is nVidia's OpenGL drivers. And, AFAICT,
that only works because the driver gets a hardware interrupt, and can
make (read: strongly suggest) the scheduler jump to the waiting
thread/process ASAP. Yes, it'd be nice if vsync did rest the CPU, but
the only methods available to do that are too imprecise, and that'd
cause it to fall into the same trap yield_timeslice did when it
was changed to forfeit the timeslice instead of yielding it (changing
behavior to cover for a programmer's bad design).



