On Tue, Jun 02, 2015 at 07:00:12AM -0700, Bill Unruh wrote:
> I am very uncomfortable with these kinds of leap smoothing procedures. In
> particular, since there is no standard for the smoothing, this risks
> instability as a whole variety of "smoothed" servers all fight each other
> in trying to control the time. I think before heading down this path, we
> need an RFC both to standardize the smoothing and to add a flag warning
> clients that the server is smoothing. chrony and ntpd claim accuracies
> down to microseconds, and here we are degrading that by 6 orders of
> magnitude to half-second scales. Of course, if your system is an endpoint
> and not serving anyone else, it does not matter, but this will be applied
> to servers.
Yes, this feature must be used carefully, and I hope the documentation
is clear on that. But if you compare it to the ntpd -x option, where
steps are disabled and the server can serve false time for a very long
time (1-2 days after a leap second), I think chrony's time smoothing
feature is not much worse.
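For reference, the two setups being compared look roughly like this
(the smoothtime values are only illustrative limits taken from the
documentation example, not recommendations):

    # chrony.conf: smooth the time served to clients, limiting the
    # smoothing to a 400 ppm frequency offset and 0.001 ppm/s wander,
    # and activate it only to smear a leap second
    smoothtime 400 0.001 leaponly

    # ntpd: start with -x to disable stepping; the 1 s offset left
    # after a leap second is then slewed out by the loop discipline
    ntpd -x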
I agree it would be good to signal in NTP packets that the server is
smoothing. I'm not sure there is a very good way to do that, however.
One approach would be to increase the stratum. Another could be to
increase the root dispersion, but that could be problematic with
clients that ignore replies whose root distance is above a certain
threshold (e.g. 1.5 seconds by default in ntpd).
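To make that concrete, here is a minimal sketch (not actual ntpd or
chrony code) of the check such clients effectively apply. Per RFC 5905
the root distance is roughly root_delay / 2 + root_dispersion (local
terms omitted here), and since root dispersion accumulates down the
stratum chain, each smearing hop that inflates it by up to 1 second
brings clients closer to the limit:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAXDIST 1.5 /* ntpd's default threshold in seconds */

    /* Simplified root distance check; RFC 5905 also adds the local
     * delay, dispersion and jitter terms omitted here. */
    static bool acceptable(double root_delay, double root_dispersion)
    {
        return root_delay / 2.0 + root_dispersion <= MAXDIST;
    }

    int main(void)
    {
        printf("%d\n", acceptable(0.02, 0.05));       /* 1: normal server */
        printf("%d\n", acceptable(0.02, 0.05 + 1.0)); /* 1: one smearing hop */
        printf("%d\n", acceptable(0.02, 0.05 + 2.0)); /* 0: two smearing strata */
        return 0;
    }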
BTW, I made a post comparing all the methods that can be used with
ntpd and chrony to handle a leap second, including the leap smear on
an NTP server.
http://developerblog.redhat.com/2015/06/01/five-different-ways-handle-leap-seconds-ntp/