[chrony-users] RE: Chrony is making NTP Server offline on the system boot



I'm still digging into this issue, but I wanted to share what I've found so far from the journal log with chrony debug logging enabled:

With Chrony debug enabled
In the journal log you can see systemd starting the NTP client/server; it is after this point that the NetworkManager dispatcher would run the 20-chrony script:
Oct 28 05:09:46 gsvlfl4plb02va systemd[1]: Starting NTP client/server...
....
Oct 28 05:09:46 gsvlfl4plb02va systemd[1]: Started NTP client/server.
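
For reference, the debug messages quoted in this mail were obtained by lowering chronyd's minimum log severity. On EL-style systems this can be done via the sysconfig file (a sketch; the path is distro-specific, the -L option needs chrony 3.2 or later, and -1 only produces output when the package was built with debugging support):

```
# /etc/sysconfig/chronyd  (EL-style path; adjust for your distro)
# -L -1 lowers the minimum log severity to debug level
OPTIONS="-L -1"
```

followed by a chronyd restart. Running "chronyd -d -d" in the foreground is an alternative way to get the same messages.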

I can see from the dmesg log that the eth1 link is up before the NTP client/server is started:
dmesg
[Fri Oct 28 05:09:45 2022] vmxnet3 0000:05:00.0 eth1: NIC Link is Up 10000 Mbps
[Fri Oct 28 05:09:45 2022] vmxnet3 0000:0b:00.0: # of Tx queues : 8, # of Rx queues : 8

So at the point 20-chrony would have run, the eth1 link was already up, so 20-chrony should not have taken the NTP sources offline.

At this point chronyd keeps retrying to connect to both NTP sources, which is expected, but for some reason it then marks the NTP sources as offline:

Oct 28 05:09:50 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:50Z ntp_io.c:316:(connect_socket) Could not connect NTP socket to 10.37.9.162:123 : Network is unreachable
Oct 28 05:09:50 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:50Z ntp_io.c:801:(NIO_SendPacket) No socket to send to 10.37.9.162:123
Oct 28 05:09:51 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:51Z ntp_core.c:1176:(transmit_timeout) Transmit timeout for [172.27.1.96:123]
Oct 28 05:09:51 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:51Z ntp_io.c:316:(connect_socket) Could not connect NTP socket to 172.27.1.96:123 : Network is unreachable
Oct 28 05:09:51 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:51Z ntp_io.c:801:(NIO_SendPacket) No socket to send to 172.27.1.96:123
Oct 28 05:09:52 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:52Z cmdmon.c:1330:(read_from_cmd_socket) Received 28 bytes from /var/run/chrony/chronyc.2173.sock fd 6
Oct 28 05:09:52 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:52Z ntp_io.c:316:(connect_socket) Could not connect NTP socket to 10.37.9.162:123 : Network is unreachable
Oct 28 05:09:52 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:52Z ntp_core.c:2357:(NCR_SetConnectivity) Source 10.37.9.162 offline
Oct 28 05:09:52 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:52Z ntp_io.c:316:(connect_socket) Could not connect NTP socket to 172.27.1.96:123 : Network is unreachable
Oct 28 05:09:52 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:52Z ntp_core.c:2357:(NCR_SetConnectivity) Source 172.27.1.96 offline

At normally working sites chrony keeps retrying to connect to the sources and doesn't mark them offline. In fact I can see in the logs that even after the NTP sources are marked offline chrony still tries to connect, and eventually a source responds:

Oct 28 05:09:59 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:59Z ntp_io.c:906:(NIO_SendPacket) Sent 48 bytes to 172.27.1.96:123 from [UNSPEC] fd 7
Oct 28 05:09:59 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:59Z ntp_io.c:704:(process_message) Received 48 bytes from 172.27.1.96:123 to 107.76.43.34 fd=7 if=3 tss=1 delay=0.000046627
Oct 28 05:09:59 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:59Z ntp_core.c:1803:(receive_packet) NTP packet lvm=44 stratum=1 poll=6 prec=-25 root_delay=0.000000 root_disp=0.001617 refid=414c4e31 
Oct 28 05:09:59 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:59Z ntp_core.c:1808:(receive_packet) reference=1666933783.133786344 origin=2732877249.633040371 receive=1666933799.173438300 transmit=1666933799.173469351
Oct 28 05:09:59 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:59Z ntp_core.c:1811:(receive_packet) offset=-0.046388364 delay=0.030132085 dispersion=0.000060 root_delay=0.030132 root_dispersion=0.001678
Oct 28 05:09:59 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:59Z ntp_core.c:1814:(receive_packet) remote_interval=0.000031051 local_interval=0.030163136 response_time=0.000031051 txs=D rxs=K
Oct 28 05:09:59 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:59Z ntp_core.c:1819:(receive_packet) test123=111 test567=111 testABCD=1111 kod_rate=0 interleaved=0 presend=0 valid=1 good=1 updated=2
Oct 28 05:09:59 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:59Z sources.c:342:(SRC_AccumulateSample) ip=[172.27.1.96] t=1666933799.219842189 ofs=0.046388 del=0.030132 disp=0.001678 str=1
Oct 28 05:09:59 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:59Z sourcestats.c:699:(SST_GetSelectionData) n=1 off=0.046388 dist=0.016774 sd=4..000000 first_ago=0.015128 last_ago=0.015128 selok=0
Oct 28 05:09:59 gsvlfl4plb02va chronyd[925]: 2022-10-28T05:09:59Z sources.c:775:(SRC_SelectSource) badstat=2 sel=0 badstat_reach=1 sel_reach=0 size=0 max_reach_ago=0.000000

This may be why the sources were shown as online when "chronyc activity" was run, but why chronyd still continued to poll the sources after they were marked offline I'm not sure.
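
One clue in the log above is the cmdmon line at 05:09:52: a command arrived on /var/run/chrony/chronyc.2173.sock immediately before the sources were marked offline. As far as I can tell chronyc names its command socket after its own PID, so that PID can identify which boot-time process (for example a dispatcher script) talked to chronyd. A minimal sketch, assuming the chronyc.<pid>.sock naming:

```shell
# Extract the client PID embedded in the chronyc command-socket path
# seen in the journal (chronyc.<pid>.sock naming assumed)
sock="/var/run/chrony/chronyc.2173.sock"
pid=$(basename "$sock" | sed -E 's/^chronyc\.([0-9]+)\.sock$/\1/')
echo "$pid"

# That PID can then be looked up in the journal from the same boot:
#   journalctl -b _PID=2173
```

If the journal retains entries for that PID, it should show whether it was chronyc run by the 20-chrony dispatcher script or something else.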

Since the log gives the source file and function where the source is taken offline, I went to the chrony code to see what it was doing. Below are snippets from ntp_core.c (3.5-stable branch of mlichvar/chrony on GitHub):
  int auto_offline;             /* If 1, automatically go offline when
                                   requests cannot be sent */

/* ================================================== */
/* Timeout handler for transmitting to a source. */

static void
transmit_timeout(void *arg)
{
  NCR_Instance inst = (NCR_Instance)arg;

  ...

  /* With auto_offline take the source offline if sending failed */
  if (!sent && inst->auto_offline)     /* <-- auto_offline must have been set,
                                          since NCR_SetConnectivity() is entered */
    NCR_SetConnectivity(inst, SRC_OFFLINE);

  ...
}

void
NCR_SetConnectivity(NCR_Instance inst, SRC_Connectivity connectivity)
{
  ...

  switch (connectivity) {
    ...
    case SRC_OFFLINE:
      switch (inst->opmode) {
        case MD_ONLINE:
          LOG(LOGS_INFO, "Source %s offline", s);
          take_offline(inst);
          break;
        case MD_OFFLINE:
          break;
        case MD_BURST_WAS_ONLINE:
          inst->opmode = MD_BURST_WAS_OFFLINE;
          LOG(LOGS_INFO, "Source %s offline", s);   /* <-- this is line 2357
                                                       seen in the journal
                                                       log above */
          break;
        case MD_BURST_WAS_OFFLINE:
          break;
        default:
          assert(0);
      }
      break;
    default:
      assert(0);
  }
}

It appears auto_offline is somehow getting set to 1 even though it is not configured in chrony.conf. My guess is there's a bug somewhere in the chrony code that sets auto_offline to 1. I'm still digging through the code to see what I can find.
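
Before blaming the code it may be worth double-checking the configuration side, since auto_offline is a per-source option on the server lines in chrony.conf. A quick grep confirms it is absent; a sketch against a hypothetical config excerpt (server addresses taken from the logs above, real chrony.conf path may differ):

```shell
# Hypothetical chrony.conf excerpt, written to a temp file for illustration
cat > /tmp/chrony.conf.sample <<'EOF'
server 10.37.9.162 iburst
server 172.27.1.96 iburst
EOF

# auto_offline would appear as an option on the server lines if configured
if grep -q 'auto_offline' /tmp/chrony.conf.sample; then
  echo "auto_offline is configured"
else
  echo "auto_offline not configured"
fi
```

Worth noting also that a source can be taken offline by an explicit chronyc command (such as the onoffline used by dispatcher scripts), which would show up as cmdmon socket activity in the journal, and that "chronyc online" brings offline sources back.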


-- 
Thanks & Regards,
Iqbal Singh

-----Original Message-----
From: Iqbal Singh Aulakh -X (iaulakh - HCL AMERICA INC at Cisco) <iaulakh@xxxxxxxxx> 
Sent: January 8, 2023 12:34 AM
To: chrony-users@xxxxxxxxxxxxxxxxxxxx; chrony-dev-request@xxxxxxxxxxxxxxxxxxxx
Cc: Kiran Kumar Pamula -X (kpamula - HCL TECHNOLOGIES LIMITED at Cisco) <kpamula@xxxxxxxxx>
Subject: [chrony-users] Chrony is making NTP Server offline on the system boot

Hello

We are having an issue with chrony: it marks the NTP servers offline, but only on system boot.
We would appreciate hands-on support in this regard.


Issue: chrony marks the NTP servers as offline upon reboot of the nodes.
We do not see anything in the message log or the puppet logs.

Multiple customer data centers/sites were affected by this issue.
Because the NTP sources were marked offline, the clocks went out of sync, resulting in a loss of traffic and causing a nationwide outage.
Output from some of the affected nodes is included below.
This is not limited to CentOS 7, CentOS 8, or AlmaLinux; it has happened in multiple environments.

The last reported occurrence was on CentOS 7, and the data below was captured on the affected CentOS nodes in the customer's setup.

Note: in our implementation we are not managing the network interfaces with NetworkManager. We looked into that as well, and it made no difference.

Another note: in several data centers some nodes are affected while the rest are fine. On the affected nodes the issue is reproducible on every reboot; on other nodes in the same environment it does not happen at all.

We tried the following solution from Red Hat, but it was not applicable either.

https://access.redhat.com/solutions/3968471

We made changes to our puppet configuration so that chrony is brought up only after the network interfaces are up, but it didn't make any difference.
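
Independently of puppet, chronyd can also be ordered after the network is fully up with a systemd drop-in (a sketch; unit and target names assume an EL7-style layout, and network-online.target must itself be meaningful on the host, e.g. via NetworkManager-wait-online.service):

```
# /etc/systemd/system/chronyd.service.d/wait-online.conf
[Unit]
Wants=network-online.target
After=network-online.target
```

followed by "systemctl daemon-reload" and a reboot to test. This only helps if the root cause really is the socket-connect failure during early boot.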

Here is the output 

[qns@pcc01 ~]$ uname -a
Linux pcc01 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[qns@pcc01 ~]$ cat /etc/*-release
CentOS Linux release 7.6.1810 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL=https://www.centos.org/
BUG_REPORT_URL=https://bugs.centos.org/
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
CentOS Linux release 7.6.1810 (Core)
CentOS Linux release 7.6.1810 (Core)

[qns@pcc01 ~]$ rpm -qa | sort -i | grep -iE '(chrony|ntp)'
chrony-3.2-2.el7.x86_64
fontpackages-filesystem-1.44-8.el7.noarch
ntp-4.2.6p5-28.el7.centos.x86_64
ntpdate-4.2.6p5-28.el7.centos.x86_64

[qns@pcc01 ~]$ ps auxwwwf | grep -iE '(chrony|ntp)'
ntp      30958  0.0  0.0  25720  1896 ?      Ss    2021  0:50 /usr/sbin/ntpd -u ntp:ntp -g
qns       3827  0.0  0.0 112708  1012 pts/0  S+   19:30  0:00  \_ grep --color=auto -iE (chrony|ntp)
[qns@pcc01 ~]$ date
Thu Sep 1 19:30:04 UTC 2022

[qns@pps02 ~]$ chronyc sources
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample
^? lb01                          2   10     0    44d +3581ms[+3581ms] +/-  16.0s
^? lb02                          3   10     0    47d   +46us[  +46us] +/-  15.9s

[qns@pps02 ~]$ chronyc tracking
Reference ID    : AC1AF20C (lb02)
Stratum         : 4
Ref time (UTC)  : Thu Jun 09 10:20:05 2022
System time     : 0.000000153 seconds slow of NTP time
Last offset     : -0.000002489 seconds
RMS offset      : 0.187102452 seconds
Frequency       : 17.639 ppm slow
Residual freq   : -0.000 ppm
Skew            : 0.002 ppm
Root delay      : 0.038741432 seconds
Root dispersion : 7.168707848 seconds
Update interval : 1031.8 seconds
Leap status     : Normal

[qns@pps02 ~]$ chronyc sourcestats
210 Number of sources = 2
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset   Std Dev
lb01                       18  11   92m     +6.918      0.022  +30.4s     42us
lb02                       64  33   18h     +0.000      0.000  +885us     18us
[qns@lb01 log]$ chronyc sources
210 Number of sources = 2
MS Name/IP address Stratum Poll Reach LastRx Last sample
^* NTP-server1.> 1 6 0 50d -25us[ -161us] +/- 17ms
^- NTP-server2.> 2 6 0 50d +685us[ +685us] +/- 34ms

[qns@lb01 log]$ chronyc tracking
Reference ID    : 9BAED650 (NTP-server1.com)
Stratum         : 2
Ref time (UTC)  : Wed Jun 08 06:58:03 2022
System time     : 0.000000028 seconds slow of NTP time
Last offset     : -0.000135603 seconds
RMS offset      : 0.000135603 seconds
Frequency       : 15.377 ppm slow
Residual freq   : -31.511 ppm
Skew            : 0.026 ppm
Root delay      : 0.030828953 seconds
Root dispersion : 142.074813843 seconds
Update interval : 2.1 seconds
Leap status     : Normal

[qns@lb01 log]$ chronyc sourcestats
210 Number of sources = 2
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset   Std Dev
NTP-server1.>               4   4     6    -31.511   1011.791  -137.6s   149us
NTP-server2.>               4   4     6   -179.900   1376.575  -785.5s   152us

Thanks
Iqbal Singh 



--
To unsubscribe email chrony-users-request@xxxxxxxxxxxxxxxxxxxx
with "unsubscribe" in the subject.
For help email chrony-users-request@xxxxxxxxxxxxxxxxxxxx
with "help" in the subject.
Trouble?  Email listmaster@xxxxxxxxxxxxxxxxxxxx.

