I don't know why (though I suspect it is related to storage speed), but the virtual machines tend to show a clock skew ranging from a few days to more than a century forward (to 2177).
In the engine's journal I see:
Mar 28 13:19:40 ovirt-engine.ovirt NetworkManager[1158]: <info> [1680009580.2045]
dhcp4 (eth0): state changed new lease, address=192.168.123.20
Mar 28 13:24:40 ovirt-engine.ovirt NetworkManager[1158]: <info> [1680009880.2042]
dhcp4 (eth0): state changed new lease, address=192.168.123.20
Mar 28 13:29:40 ovirt-engine.ovirt NetworkManager[1158]: <info> [1680010180.2039]
dhcp4 (eth0): state changed new lease, address=192.168.123.20
Apr 01 08:15:42 ovirt-engine.ovirt chronyd[1072]: Forward time jump detected!
Apr 01 08:15:42 ovirt-engine.ovirt NetworkManager[1158]: <info> [1680336942.4396]
dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Apr 01 08:15:42 ovirt-engine.ovirt chronyd[1072]: Can't synchronise: no selectable
sources
When this happens on the hosted engine, typically:
1. the DWH becomes inconsistent, as I described here:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/KPW5FFKG3AI6...
or
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/WUNZUSZ2ARRL...
2. the skew causes the engine to kick off the nodes that appear "down" or stuck in
the "connecting" state
This compromises all tasks in a pending state and triggers countermeasures against the
ovirt-engine manager and also the vdsm daemon.
As a workaround, I have currently added an "hwclock --hctosys" entry to the engine's
crontab, running every 5 minutes, since the hardware clock does not seem to skew.
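For reference, the crontab entry I mean looks roughly like this (the exact path to hwclock is an assumption; it may differ per distro, so check it with "which hwclock" first):

```shell
# root's crontab on the engine VM: copy the hardware clock to the
# system clock every 5 minutes, to undo any forward jump of the
# system clock (assumes hwclock lives in /usr/sbin)
*/5 * * * * /usr/sbin/hwclock --hctosys
```

This only masks the symptom, of course; the underlying cause of the forward jump is still unknown.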