Hi,
I'm a bachelor student at ETH Zurich, currently working on my bachelor
thesis, which involves helping to implement a new clock synchronisation
approach called G-SINC.
G-SINC (https://github.com/marcfrei/scion-time, paper linked in the
README) focuses especially on Byzantine fault tolerance and builds on a
new Internet architecture called SCION.
Right now I am focusing on implementing the sample offset interpolation
algorithm. We have already implemented a PLL-based algorithm taken from
Ntimed by Poul-Henning Kamp, and we are exploring an approach based on
the Theil-Sen estimator, but we are also looking at re-implementing the
existing algorithms from chrony and ntpd.
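To be concrete, the Theil-Sen part we are prototyping boils down to
taking the median of all pairwise slopes between (time, offset)
samples. A minimal Go sketch of that idea (type names and sample values
are purely illustrative, not taken from the scion-time code base):

package main

import (
	"fmt"
	"sort"
)

// Sample is one (time, offset) measurement; the type is made up for
// this sketch and does not correspond to anything in scion-time.
type Sample struct {
	T      float64 // sample time in seconds
	Offset float64 // measured clock offset in seconds
}

// theilSenSlope returns the median of all pairwise slopes, which is
// the core of the Theil-Sen estimator: a few outlier samples cannot
// drag the frequency estimate arbitrarily far off.
func theilSenSlope(samples []Sample) float64 {
	var slopes []float64
	for i := 0; i < len(samples); i++ {
		for j := i + 1; j < len(samples); j++ {
			if dt := samples[j].T - samples[i].T; dt != 0 {
				slopes = append(slopes, (samples[j].Offset-samples[i].Offset)/dt)
			}
		}
	}
	if len(slopes) == 0 {
		return 0 // not enough samples to estimate a slope
	}
	sort.Float64s(slopes)
	n := len(slopes)
	if n%2 == 1 {
		return slopes[n/2]
	}
	return (slopes[n/2-1] + slopes[n/2]) / 2
}

func main() {
	samples := []Sample{
		{0, 0.0010}, {16, 0.0012}, {32, 0.0013},
		{48, 0.0500}, // deliberate outlier
		{64, 0.0017},
	}
	fmt.Printf("estimated frequency error: %g\n", theilSenSlope(samples))
}

The attraction for us is the robustness: the median of pairwise slopes
tolerates a sizeable fraction of outlier samples, which seems to fit
the fault-tolerance goals of G-SINC.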
In all of this, chrony is basically our "gold standard" when it comes to
accuracy and design approaches.
However, I haven't found much reasoning for why, for example, weighted
linear regression is used in chrony for NTP samples, whilst a robust
linear regression (Least Absolute Deviations) is used for RTC samples
and manual input (I hope I'm not mistaken in my reading of the code).
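Just to make sure I'm comparing the right things: by "weighted linear
regression" I mean the textbook weighted least-squares fit of offset
against time, roughly like the following sketch (only the closed-form
formula I have in mind, not chrony's actual regress.c code, and the
weights and values are made up):

package main

import "fmt"

// weightedLinearFit fits offset ≈ a + b*t by weighted least squares
// using the standard closed-form solution; the weights would typically
// be something like the inverse variance of each sample.
func weightedLinearFit(t, y, w []float64) (a, b float64) {
	var sw, swt, swy, swtt, swty float64
	for i := range t {
		sw += w[i]
		swt += w[i] * t[i]
		swy += w[i] * y[i]
		swtt += w[i] * t[i] * t[i]
		swty += w[i] * t[i] * y[i]
	}
	det := sw*swtt - swt*swt
	b = (sw*swty - swt*swy) / det // estimated frequency error
	a = (swy - b*swt) / sw        // estimated offset at t = 0
	return a, b
}

func main() {
	t := []float64{0, 16, 32, 48, 64}
	y := []float64{0.0010, 0.0012, 0.0013, 0.0015, 0.0017}
	w := []float64{1, 1, 0.25, 1, 1} // down-weight a noisier sample
	a, b := weightedLinearFit(t, y, w)
	fmt.Printf("offset %g s, frequency %g\n", a, b)
}

My (possibly naive) understanding is that such a fit works best when
the measurement noise is roughly Gaussian, whereas an LAD fit is much
less sensitive to occasional large outliers, so I would like to
understand why the two sample types are treated differently.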
Is there some place where these design decisions are justified? For
ntpd, there is of course the Computer Network Time Synchronization
book, but David L. Mills seems to reach different conclusions than
chrony does...
I hope I haven't overlooked some obvious place to search; if I have,
I'm sorry for bothering the list.
Best regards,
Julian