Re: [AD] Progress report on Linux console Allegro



On Thu, Jun 24, 1999 at 02:10:46AM +0000, Jason Wilkins wrote:
> > Yes, but the user shouldn't call the system driver directly I
> > think.  If the entry is NULL, assume switching is unnecessary or
> > uncontrollable.  This is closely linked with lost bitmap
> > callbacks...
> 
> Just to add to the idea: we should be able to disable switching
> altogether with this function.  I don't think I can do this in BeOS,
> but you can do it with DX and probably with Linux as well.

Yes, it's possible in Linux.  In fact it's fundamental to how the
Linux port ensures you don't switch consoles in the middle of a
drawing operation that will carry on running in the background
afterwards -- you call a function `block_vt_switch'
before critical sections, then `unblock_vt_switch' afterwards.
You can nest them.  They're a lot like the Windows acquire and
release calls, and ought to be hooked up to those when this code
is finished.  There is also a function `release_display' that
you call when it's OK to switch; any queued switches are dealt
with at that stage.  Or maybe `unblock_vt_switch' does this for
you, I can't remember.

> Switching is just a fact of life in BeOS, the only way to disable it would
> be to change the ALT key to map to nothing at all so the OS won't get the
> key.  The key would still work in Allegro, but it feels kludgy to do it
> that way.

In the Linux version the switch only happens because our
keyboard driver detects the key combination and invites the
kernel to switch; without that code, there's no switching at
all.

> There is a callback to create video sub-bitmaps right?  Then maybe
> a linked-list of them could be kept.  In the switch callback you
> would create an identical shadow list.  Are there any hidden traps
> to this?  I am assuming that speed is not critical because screen
> switches take a long time anyway (especially if you include the time
> it takes my monitor to resync).

Good point.  You don't mean `video' though. :)  I think the
`created_sub_bitmap' callbacks in the vtable and system driver
are ideal.

I can't see any hidden traps, other than Shawn's comment that
sub-bitmaps aren't the only problem.  I suggest you make a set of
routines in a file in `misc' that the callbacks call to register
screen sub-bitmaps; your routines remember all the bitmaps, and
also track the screen's base address (line[0]).  You also need a
routine to call with a new base address for the screen, which
updates all the bitmaps (presumably by adding new_base-old_base
to each line pointer).  Any holes in this?  You'll also need to
remove bitmaps from the list when they're destroyed; I don't
remember whether there's a suitable callback for that at the
moment -- I didn't check earlier.

The other question is whether this should be called from the
vtable callback or the system driver callback, but I think
that's really up to the author of whatever OS it is.  Should the
vtable callback actually be in the graphics driver though?

> We would have to tell people to ask for exactly how much virtual
> screen they need.  And have the driver only save the whole vscreen
> if scrolling is enabled.

Hmm, yes... but that's not sufficient: they could be using the
virtual screen just for storing sprites.  Video bitmaps could
complicate this too; the fact that on DOS they're sub-bitmaps of
the screen while on Windows they're separate screens could be a
problem yet again.

-- 
george.foot@xxxxxxxxxx

ko cilre fi la lojban  --  http://xiron.pc.helsinki.fi/lojban/


