RE: [chrony-users] Isolated time domains
On Tue, 3 Dec 2013, Bill Unruh wrote:
> On Tue, 3 Dec 2013, Miroslav Lichvar wrote:
> > On Tue, Dec 03, 2013 at 12:24:43AM +0000, Chris Dore wrote:
> >> This client was previously synchronized with the external NTP servers,
> >> which is why it is reporting that it is in sync with NTP time. Since the master
> >> server is about 3 hours behind NTP time, I need this client to say it is about 3
> >> hours ahead of the master server.
> >> While this example is a bit contrived, I think it illustrates that the chronyd
> >> running on the master server is telling the clients the external NTP time,
> >> when I really want it to tell the clients the local system time.
> >> Is it possible to configure chronyd to always serve its local clock,
> >> regardless of the state of the external/remote servers?
> > This issue was discussed here some time ago. Currently, there is no
> > option in chronyd which would allow serving the raw system time.
> > It wouldn't be very difficult to implement, but I'm not sure if it
> > would meet your requirements on the maximum error between the
> > clients.
> > As Bill wrote in his reply, the clients will not notice the change in
> > the frequency at the same time and their responses may differ, so
> > their clocks won't be close until they have all caught up with the server
> > slew, and then the situation would repeat when the server stops the
> > correction.
> > I think a better solution to this problem would be an approach similar
> > to Google's NTP leap-second smearing. The jump in time is smeared
> > with a cosine function over a very long interval, so that the frequency
> > observed in the NTP time changes slowly and the clients can stay in
> > sync during the whole correction.
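The cosine smear described above can be sketched numerically. This is a standard smooth-ramp formula, not chrony's actual implementation; the 3-hour step and the 500 PPM figure are taken from elsewhere in this thread.

```python
import math

def smear_offset(t, total_offset, duration):
    """Offset (seconds) applied at time t in [0, duration] when a step of
    total_offset seconds is smeared with a cosine ramp."""
    return total_offset / 2.0 * (1.0 - math.cos(math.pi * t / duration))

def smear_frequency(t, total_offset, duration):
    """Instantaneous frequency offset (dimensionless): d(offset)/dt.
    It is zero at both ends and peaks at pi*total_offset/(2*duration)."""
    return total_offset * math.pi / (2.0 * duration) * math.sin(math.pi * t / duration)

# To smear a 3-hour (10800 s) step while keeping the peak frequency
# offset under 500 ppm, the smear must last at least:
step = 10800.0         # seconds of error
fmax = 500e-6          # 500 ppm
duration = math.pi * step / (2.0 * fmax)   # ~3.4e7 s, roughly 13 months
```

The point of the cosine shape is that the frequency seen by clients starts and ends at zero, so there is no sudden frequency jump for them to chase at either end of the correction.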
> But if one, say, put in a 10 PPM limit on how fast the error was corrected
> (instead of chrony's 100,000 PPM limit, but more likely about 1000 PPM), then
> it would take about 300,000 hours, i.e. decades, to fix a 3-hour error. That
> would be absurd. I.e., I do not think limiting chrony to slew slowly enough that
> the clients can keep up (with some error) is the answer.
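Bill's figure is easy to check. A quick sanity-check sketch of the arithmetic, with ntpd's 500 PPM cap (mentioned elsewhere in this thread) included for comparison:

```python
# Sanity check of the slew-time arithmetic: a slew rate of 10 ppm
# corrects 1e-5 seconds of offset per second of real time.
error_s = 3 * 3600                # 3-hour offset, in seconds
rate_10ppm = 10e-6
rate_500ppm = 500e-6              # ntpd's slew cap, for comparison

hours_at_10ppm = error_s / rate_10ppm / 3600     # 300,000 hours (~34 years)
days_at_500ppm = error_s / rate_500ppm / 86400   # 250 days
```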
> For me, having chrony report its best estimate of UTC rather than its local
> time really is the only way to go. To slew the master slowly enough so that
> the clients can keep up (presumably they would not have the slew limit
> implemented) does not seem like a good option. ntpd of course gets around
> this by jumping the clock (infinite slew), but that has its problems as well.
> I think that Dore really needs to think his requirements through.
That answered another one of my questions: what chrony's maximum slew rate is.
I should add that I'm quite OK with a very slow rate; it is perfectly acceptable if a system never syncs with UTC in my lifetime. ntpd's maximum of 500 PPM seemed to work fine in my system testing. I should also add that we will eventually be stepping these systems to match UTC; it just can't be done at an arbitrary time and instead needs to be coordinated with activities at each individual remote site. In the meantime I'm trying to get the time services up and running in a way that will not interfere with operations, while also starting to work on the problem in the background. The remote sites that have a small offset from UTC would self-correct in a relatively short period of time, while we manually deal with the ones that have much greater offsets.
My worry with allowing each individual system at a site to self-adjust to UTC is that if they choose different rates they will drift apart from each other, which will break operations. For reference, the time tolerance between nodes isn't too strict: I have about a second to play with.
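As a purely illustrative sketch of the kind of setup being discussed (the directive names and their availability in any given chrony release are assumptions on my part, not something confirmed in this thread):

```
# chrony.conf sketch -- directive names/availability are assumptions.

# Master side: serve chrony's own time at stratum 8 when no external
# source is selectable. Note this still serves chrony's estimate of
# true time, not the raw system clock -- the limitation discussed above.
local stratum 8
allow 192.168.0.0/16

# Client side (a separate chrony.conf): cap the slew rate at 500 ppm so
# a large correction is applied gradually rather than stepped.
maxslewrate 500
```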
Thanks for the continued discussion, I'm in no way an expert on this stuff so I appreciate your feedback.
To unsubscribe email chrony-users-request@xxxxxxxxxxxxxxxxxxxx
with "unsubscribe" in the subject.
For help email chrony-users-request@xxxxxxxxxxxxxxxxxxxx
with "help" in the subject.
Trouble? Email listmaster@xxxxxxxxxxxxxxxxxxxx.