Re: [hatari-devel] Less dark doubled TV-monitor mode for 32-bit output



On 11/4/19 11:51 AM, Thomas Huth wrote:
Am Sat, 2 Nov 2019 01:13:11 +0200
schrieb Eero Tamminen <oak@xxxxxxxxxxxxxx>:
On 10/31/19 12:59 AM, Eero Tamminen wrote:
What do you think of the attached change for making
doubled TV-monitor mode less dark on 32-bit output?

Attached are two patches doing it the way I actually wanted to do it.

The first patch simplifies the screen conversion Y-doubling code by
removing a lot of repeated code from the inner loops and adding a
function that does the same work in the outer conversion loop.

After that, halving the doubled line's intensity for TV-mode needs
to be done in only a single place (second patch).


This code is likely not endianness safe:

+			/* duplicate line pixels get half the intensity of
+			 * above line's pixels R/G/B channels (A is skipped)
+			 */
+			*next++ = *line++ >> 1;
+			*next++ = *line++ >> 1;
+			*next++ = *line++ >> 1;
+			next++; line++;

You have to use the values from SDL_PixelFormat if you want to make it
run properly everywhere.

Since all the 8-bit channels get the same treatment, it doesn't
matter in what order they come.  See the attached updated patch.

As to the 16-bit screen, I remember that there have been differences
in how the bits are arranged into bytes.  If I needed to use
SDL_PixelFormat for that, I think it might be a bit too much
overhead, as the 16-bit mode is used only for performance reasons.

Are there still 16-bit screens (on devices where Hatari could
conceivably run) that use something other than the 5:6:5 format?

And I think you could do it with 32-bit
arithmetic to speed things up, instead of doing it byte by byte.

But that would mean masking in addition to shifting for each channel,
like I've done in the new patch for the 16-bit mode.

My assumptions are that:
- with today's computers it doesn't matter which approach is used
  (8-bit access without masking, 32-bit access with masking, etc.)
- line duplication is memory read/write bandwidth bound
- GCC & the CPU logic would arrange the screen buffer reads (previous
  line from CPU cache) & writes (to system memory) into suitable bursts.

I'm interested to hear otherwise, though.  Computers are sometimes
surprisingly stupid. :-)

	- Eero
