Re: [AD] Allegro unable to deal with non ascii filenames
- To: Coordination of admins/developers of the game programming library Allegro <alleg-developers@xxxxxxxxxx>
- Subject: Re: [AD] Allegro unable to deal with non ascii filenames
- From: Chris <chris.kcat@xxxxxxxxxx>
- Date: Tue, 30 May 2006 02:20:13 -0700
On Tuesday 30 May 2006 01:42, Elias Pschernig wrote:
> Yes, I'm not sure about that. In Grzegorz's case, I assume U_CURRENT will
> work better for him - his files use ISO-8859-1, so U_ASCII will truncate
> them to 7-bit. This was the original problem. So U_CURRENT seems the
> best default - it will always work, unless the libc encoding is
> different from Allegro's current encoding.
Which it is, when not using UTF-8. By default, Allegro uses UTF-8, which won't
work on his system (for files with extended characters, anyway). I think it'd
be better to provide a specific default rather than relying on whatever the
program sets... otherwise you could end up with some programs working for some
people, and others not.
> And as I understand the docs, U_ASCII_CP is only useful if the user
> provides their own mapping tables - so we can't use it for auto
> detection. What we might do is, include some common 8-bit tables with
> Allegro (e.g. ISO-8859-*).
That was the thought I had. I don't know any mapping tables, though. And can
you specify more than one mapping table? If the program sets U_ASCII_CP for
whatever reason, and the autodetection sets its own as well, what would
happen? We could always deprecate the use of non-UTF-8/16 encodings...
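For illustration only, an 8-bit mapping table of the kind being discussed is
essentially a 128-entry array giving the Unicode code point for each byte in
0x80-0xFF. This is a plain-C sketch of the idea, not Allegro's actual
U_ASCII_CP table format; ISO-8859-1 is the simplest case, since its codes
coincide with the first 256 Unicode code points:

```c
#include <assert.h>

/* Hypothetical 8-bit codepage table: Unicode values for bytes 0x80-0xFF.
 * For ISO-8859-1 this is the identity mapping. */
static unsigned short iso8859_1_table[128];

static void init_iso8859_1(void)
{
    int i;
    for (i = 0; i < 128; i++)
        iso8859_1_table[i] = 0x80 + i;   /* identity mapping */
}

/* Map one byte from the codepage to a Unicode code point. */
static unsigned int cp_to_unicode(unsigned char c)
{
    return (c < 0x80) ? c : iso8859_1_table[c - 0x80];
}
```

Shipping a handful of such tables (ISO-8859-*) with Allegro would then just be
a matter of selecting the right array for autodetection.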
> Also, maybe we should have an U_ASCII16_CP, so users could also use 16-bit
> encodings?
There are 16-bit codepages? o.O I'd assume that since most characters fit into
UCS-2 (the 16-bit fixed-width form of Unicode), there'd not be much need. We
could even support the variable-width form, UTF-16, if needed.
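To sketch what supporting the variable-width form would involve: UTF-16 stores
code points above U+FFFF as a surrogate pair, a high unit in 0xD800-0xDBFF
followed by a low unit in 0xDC00-0xDFFF. A minimal decoder (an illustration,
not Allegro code) would look roughly like this:

```c
#include <assert.h>

/* Decode one code point from a UTF-16 unit sequence, advancing *pos.
 * A high surrogate (0xD800-0xDBFF) carries the top 10 bits and must be
 * followed by a low surrogate (0xDC00-0xDFFF) with the bottom 10 bits. */
static unsigned int utf16_next(const unsigned short *s, int *pos)
{
    unsigned int c = s[(*pos)++];
    if (c >= 0xD800 && c <= 0xDBFF) {
        unsigned int lo = s[(*pos)++];
        c = 0x10000 + ((c - 0xD800) << 10) + (lo - 0xDC00);
    }
    return c;
}
```

Everything in the Basic Multilingual Plane still decodes as a single unit, so
the fixed-width assumption only breaks for the rarer supplementary characters.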
> A completely different route for 4.3.0 might be to use libc's wide string
> support throughout the library
libc has standard wide-char/16-bit functions? I didn't know they were
standard. I'd be more keen on using UTF-8 internally, and just converting as
necessary (to/from the user or system). Most of libc's standard functions can
deal with UTF-8 data, and the ones that can't, we could supply alternatives
for (like the current u* functions, but UTF-8 only).
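Converting at the boundary is cheap for the 8-bit case. As a hedged sketch of
the "UTF-8 internally, convert as necessary" approach (plain C, not using
Allegro's uconvert), here is an ISO-8859-1 to UTF-8 conversion; every Latin-1
code point encodes as at most two UTF-8 bytes:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Encode one Latin-1 character as UTF-8; returns the bytes written.
 * Code points below 0x80 are themselves; 0x80-0xFF need two bytes. */
static size_t latin1_char_to_utf8(unsigned char c, char *buf)
{
    if (c < 0x80) {
        buf[0] = (char)c;
        return 1;
    }
    buf[0] = (char)(0xC0 | (c >> 6));
    buf[1] = (char)(0x80 | (c & 0x3F));
    return 2;
}

/* Convert a NUL-terminated ISO-8859-1 string to UTF-8. out must have
 * room for 2*strlen(in)+1 bytes in the worst case. */
static void latin1_to_utf8(const char *in, char *out)
{
    while (*in)
        out += latin1_char_to_utf8((unsigned char)*in++, out);
    *out = '\0';
}
```

The reverse direction (UTF-8 back to the system encoding for filenames) would
be the mirror image, failing or substituting for code points the 8-bit
encoding can't represent.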