As for the demos suffering from low framerates: many demos use the DSP
to play MOD files, and some of them skip the "handshake" method
the DSP provides for transferring data between the CPU and DSP; in such cases,
I can guarantee that those demos will run even worse than before.
Modifying the CPU/DSP speed can be useful for programs that do everything
through the OS, but for games/demos, the results can be unpredictable.
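To illustrate why skipping the handshake breaks when clock ratios change, here is a minimal sketch (all names like `DspHostPort` are invented for illustration, not Hatari's or the hardware's API): it models the CPU-to-DSP host port as a one-word mailbox with a "transmit empty" flag, and contrasts a blind write loop with one that polls the flag first.

```python
class DspHostPort:
    """One-word mailbox: the CPU writes, the DSP reads; 'empty' is the handshake flag."""
    def __init__(self):
        self.word = None
        self.empty = True              # TXDE-style flag: safe for the CPU to write

    def cpu_write(self, value):
        self.word = value              # overwrites whatever was there
        self.empty = False

    def dsp_read(self):
        value, self.word, self.empty = self.word, None, True
        return value

def send_blind(port, data, dsp_reads_every=2):
    """CPU writes every step; the DSP only drains every N steps, so words get clobbered."""
    received = []
    for i, value in enumerate(data):
        port.cpu_write(value)          # no flag check -> may overwrite an unread word
        if i % dsp_reads_every == dsp_reads_every - 1:
            received.append(port.dsp_read())
    return received

def send_handshaked(port, data, dsp_reads_every=2):
    """CPU waits for the empty flag before each write: nothing is lost at any speed ratio."""
    received, pending, step = [], list(data), 0
    while pending or not port.empty:
        if port.empty and pending:
            port.cpu_write(pending.pop(0))   # write only when the flag says it's safe
        if step % dsp_reads_every == dsp_reads_every - 1 and not port.empty:
            received.append(port.dsp_read())
        step += 1
    return received

data = list(range(8))
lost = send_blind(DspHostPort(), data)        # drops every other word at this ratio
safe = send_handshaked(DspHostPort(), data)   # receives all eight words
```

The blind loop only works when the CPU/DSP speed ratio happens to match what the demo coder timed it against; change either clock and words are silently overwritten, which is exactly why such demos degrade rather than speed up.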
Why do you think so? The frequencies I chose in my example weren't random -- the Nemesis/Phantom accelerators work on the same principle: they speed up the data bus, Videl, CPU and DSP in the same ratio, so *nothing* breaks. The SDMA would still be clocked at 25.175 MHz and the ACIA clock would also stay the same, so I'm pretty confident that the end result could be achieved while keeping all the cool Hatari features.
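The scheme above amounts to scaling one group of coupled clocks by a common ratio while leaving the fixed ones alone. A rough sketch, where the stock Falcon clock figures and the ACIA rate are assumptions for illustration (only the 25.175 MHz SDMA clock comes from the text):

```python
# Stock clock figures below are assumptions for illustration, not verified values.
STOCK = {
    "bus":   16.0,   # MHz, system bus (assumed)
    "cpu":   16.0,   # 68030 (assumed)
    "dsp":   32.0,   # 56001, assumed 2x the bus clock
    "videl": 16.0,   # Videl bus-side clock (assumed)
}
FIXED = {
    "sdma": 25.175,  # stays put, per the discussion above
    "acia": 0.5,     # MHz, assumed unchanged
}

def overclock(ratio):
    """Scale the coupled clocks by one common ratio; leave SDMA/ACIA untouched."""
    clocks = {name: mhz * ratio for name, mhz in STOCK.items()}
    clocks.update(FIXED)
    return clocks

faster = overclock(1.25)   # e.g. 16 -> 20 MHz on the coupled group
```

Because everything on the coupled list moves together, the relative timing the software depends on is preserved; only the devices with independent clocks need special handling.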
Btw, Rich replied to my message about what is synced: everything operates around an event scheduler, and the scheduler dictates how long the time slices are for each CPU. The scheduler is influenced by things like interrupts, semaphore detection (a tricky subject on its own), certain register/device writes that could involve a dependency, etc.

There is also an actual setting in the emulator to enable lockstep mode, ensuring the processors really do run in lockstep, for those rare cases where I expect my various attempts to auto-synchronize as efficiently as possible will still fall over.

The other part of the equation is just optimization: making sure every CPU/device supports re-entry with an absolute bare minimum of native computational overhead, so that when the scheduler does dictate extremely small time slices (even down to a single master clock), performance isn't destroyed.
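The scheduler Rich describes can be sketched as follows (every name here is invented; this is a toy model of the idea, not his implementation): an event queue determines the next deadline, each CPU runs a slice up to that deadline, and lockstep mode caps every slice at one master clock.

```python
import heapq

class Cpu:
    """Stand-in core: just counts the cycles it has been granted."""
    def __init__(self):
        self.cycles = 0
    def run(self, n):
        self.cycles += n

class Scheduler:
    def __init__(self, lockstep=False):
        self.now = 0
        self.events = []           # min-heap of (deadline, name)
        self.lockstep = lockstep

    def post(self, delay, name):
        heapq.heappush(self.events, (self.now + delay, name))

    def next_slice(self):
        """How many cycles a CPU may run before something else needs attention."""
        if self.lockstep:
            return 1               # force cycle-by-cycle interleaving
        if not self.events:
            return 64              # arbitrary idle slice (assumption)
        return max(1, self.events[0][0] - self.now)

    def run_slice(self, cpus):
        """Advance every core by the same slice, then fire any due events."""
        slice_len = self.next_slice()
        for cpu in cpus:
            cpu.run(slice_len)
        self.now += slice_len
        fired = []
        while self.events and self.events[0][0] <= self.now:
            fired.append(heapq.heappop(self.events)[1])
        return fired
```

An interrupt or a dependency-carrying device write would simply post an event with a short deadline, shrinking the next slice; lockstep mode is the degenerate case where the slice is always one clock, which is why cheap re-entry per core matters so much.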