Re: [hatari-devel] Basic cpu testsuite
Hi,

On 04/29/2018 07:23 PM, Thorsten Otto wrote:
> On Sunday, 29 April 2018 17:33:56 CEST Eero Tamminen wrote:
>> I was just saying that it doesn't necessarily need to be a cross-compiled
>> one, or integrated into CMake.
> But isn't that the purpose, of being able to say "make test" during the
> automatic builds?
You compile them while you're testing them, but that doesn't necessarily
happen in the Hatari build tree; you could be doing that testing &
development also on a real Atari device, or e.g. in Aranym (I think its FPU
emulation still has some advantages over Hatari's). Once they've been
verified to work correctly and to be "complete" (*), there shouldn't be
much reason to change them.

(*) The tricky word here is "complete", at least when it comes to good
coverage of the CPU instruction set, so I understand where you're coming
from. What kind of tests should be done for instructions? Is there some
way to auto-generate them, and to capture correct results for them from
real HW?
>> I write the code on the host and have a small script that runs AHCC with
>> Hatari. Build speed-wise it's "as fast" as using a GCC cross-compiler.
> Huh? I can't believe that. Starting up Hatari alone takes at least several
> seconds (not Hatari's fault).
On my nearly decade-old i3:
------------------------------
$ time hatari --natfeats yes --timer-d yes --fast-forward yes \
    --fastfdc yes --tos etos512k.img nf_quit.tos
....
Emulator name: Hatari v2.1.0

real	0m0.963s
user	0m0.596s
sys	0m0.040s
------------------------------
I.e. Hatari startup, TOS bootup, small program startup and Hatari quitting
take *less than a second*. If I add SDL_VIDEODRIVER=dummy and redirect the
compiler output to the terminal with Hatari's "--conout 2" option, it's
only *half a second*. With AHCC compilation it naturally takes a couple of
seconds more to build a small program.
> In the same time, I can easily cross-compile several hundreds of source
> files.
Sure, but for developing small Atari test programs that doesn't really buy
anything; a couple of seconds of compile time is fine for me. That time is
anyway spent reading the compiler output, and sometimes comprehending it
takes even longer. :-)

(Not that I would trust AHCC to find any non-obvious errors; that's what
other compilers are for.)
>> Are the AHCC bugs you're referring to about code generation, or about
>> the includes & libraries?
> Both. When I last looked, it took me only a few minutes to spot ~20 bugs
> in the library. Not really surprising, given that it is based on dlibs
> from the early 80s. As for code generation, to be honest I don't trust a
> developer who thinks he can use the same library for both 16- and 32-bit
> ints just by changing all header files to use short. Besides that, the ST
> version of the compiler isn't able to handle double, not even as softfloat.
> I didn't consider vbcc, because the assembler interface is really strange
> there,
Strange how? Vasm is supposed to support "standard" ELF, a.out and "TOS"
formats: http://sun.hasenbraten.de/vasm/
(If there are bugs, Frank usually fixes them fairly quickly, and based on
the changelog he seems to be still active.)

> and you would have to write every test as a separate function.

Is that a problem? In instruction tests, I would hope that most of the asm
& C side code could be auto-generated.
>> If a test file is mostly assembly, I think one should consider writing
>> it completely in assembly, to avoid the issue of how to interface it to
>> C source code.
> But then you also have to write the whole library in asm, otherwise you
> again have that problem of the interface. The idea was to have something
> where you can just use printf for the expected/actual values, run that on
> real hardware, and adjust the tests if necessary.
I was thinking that one could use C code for that. (One definitely
wouldn't want to use asm for formatting floats.)

	- Eero