Re: [AD] X vsync emulation (timing)
- To: alleg-developers@xxxxxxxxxx
- Subject: Re: [AD] X vsync emulation (timing)
- From: Chris <chris.kcat@xxxxxxxxxx>
- Date: Thu, 18 Nov 2004 04:24:03 -0800
On Thu, 18 Nov 2004 15:09:45 +1100, Peter Wang <tjaden@xxxxxxxxxx> wrote:
> Here's a simple patch that implements the vsync-emu idea I described,
> i.e. incrementing a counter from the bg_man thread. With this, calling
> vsync() delays the program as is expected by many programs, including
> some of the Allegro examples (e.g. exstars and exspline are back to
> normal). It's not the same delay that you usually expect (100Hz instead
> of 70Hz) but that should be fine.
>
> As a side effect, programs that use vsync() no longer take up 100% CPU
> on X. It's like calling rest().
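
For reference, as I read the description, the patched vsync() ends up doing
something roughly like this (a paraphrased sketch, not the actual diff;
retrace_count and rest() are Allegro's existing counter and sleep call, and
emulated_vsync is just a made-up name):

#include <allegro.h>

/* Paraphrase of the sleep-based idea, not the actual patch.  The counter
 * is assumed to be bumped from the bg_man thread, so the wait granularity
 * here is the scheduler tick (~100Hz), not the real refresh rate. */
static void emulated_vsync(void)
{
   int old = retrace_count;

   while (retrace_count == old)
      rest(1);   /* give up (at least) one timeslice per iteration */
}
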
I can't say I like this. For one, it forces the main thread to sleep
for the duration of the timeslice, unconditionally, whenever vsync() is
called. Also, the current way of vsync'ing does not always run at 70Hz;
it is adjusted when the screen refresh rate is known (or can be
reasonably inferred). With this patch, we'd basically be timing against
the scheduler at a fixed 100Hz, rather than against something reasonably
assumed to be close to the actual refresh rate.

What I interpreted, when the idea was initially proposed, was to use
the background thread with a new procedure (not the input procedure)
to increment retrace_count, and to have vsync() busy-loop until
retrace_count increments.
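
In other words, something closer to this rough standalone sketch
(illustration only, not Allegro source; in the real thing the increment
would live in the existing bg_man callback, and names like
emu_retrace_thread and emu_refresh_rate are made up):

#include <pthread.h>
#include <unistd.h>

volatile int retrace_count = 0;     /* stand-in for Allegro's global counter */
static int emu_refresh_rate = 70;   /* or whatever rate was detected */

/* Background thread bumping the counter at roughly the assumed refresh
 * rate; in Allegro this role would be played by the bg_man thread. */
static void *emu_retrace_thread(void *unused)
{
   (void)unused;
   for (;;) {
      usleep(1000000 / emu_refresh_rate);
      retrace_count++;
   }
   return NULL;
}

static void busy_wait_vsync(void)
{
   int old = retrace_count;

   /* Busy loop: burns CPU, but returns as soon as the counter ticks
    * instead of waiting out a whole scheduler timeslice. */
   while (retrace_count == old)
      ;
}

static void install_retrace_emu(void)
{
   pthread_t t;
   pthread_create(&t, NULL, emu_retrace_thread, NULL);
}

The point being that the waiting thread wakes up when the counter actually
changes, at whatever rate the refresh is believed to be, instead of
sleeping for whole scheduler ticks.
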
The only vsync implementation I've ever heard of that reduces CPU usage
is nVidia's OpenGL drivers. And, AFAICT, that only works because the
driver gets a hardware interrupt and can make (read: strongly suggest)
the scheduler jump to the waiting thread/process ASAP. Yes, it'd be nice
if vsync() rested the CPU, but the only methods available to do that are
too imprecise... and that would cause it to fall into the same trap
yield_timeslice did when it was changed to forfeit the timeslice instead
of yielding it (changing behavior to cover for a programmer's bad design).