problems testing 4.3.10 to 4.4.8 upgrade SHE
by Gianluca Cecchi
Hello,
I'm testing the scenario in the subject in a test env with novirt1 and novirt2 as hosts.
The first host to be reinstalled is novirt2.
For this I downloaded the 4.4.8 iso of the node:
https://resources.ovirt.org/pub/ovirt-4.4/iso/ovirt-node-ng-installer/4.4...
Before running the restore command for the first scratched node, I
pre-installed the appliance rpm on it and got:
ovirt-engine-appliance-4.4-20210818155544.1.el8.x86_64
I selected the option to pause, and I arrived here with the local engine VM
completing its setup:
INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add host]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Include after_add_host
tasks files]
[ INFO ] You can now connect to
https://novirt2.localdomain.local:6900/ovirt-engine/ and check the status
of this host and eventually remediate it, please continue only when the
host is listed as 'up'
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock
file]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until
/tmp/ansible.4_o6a2wo_he_setup_lock is removed, delete it once ready to
proceed]
But connecting to the provided
https://novirt2.localdomain.local:6900/ovirt-engine/ URL,
I see that only the host still on 4.3.10 is up, while novirt2 is not
responsive.
vm situation:
https://drive.google.com/file/d/1OwHHzK0owU2HWZqvHFaLLbHVvjnBhRRX/view?us...
storage situation:
https://drive.google.com/file/d/1D-rmlpGsKfRRmYx2avBk_EYCG7XWMXNq/view?us...
hosts situation:
https://drive.google.com/file/d/1yrmfYF6hJFzKaG54Xk0Rhe2kY-TIcUvA/view?us...
In engine.log I see
2021-08-25 09:14:38,548+02 ERROR
[org.ovirt.engine.core.vdsbroker.HostDevListByCapsVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-4) [5f4541ee] Command
'HostDevListByCapsVDSCommand(HostName = novirt2.localdomain.local,
VdsIdAndVdsVDSCommandParametersBase:{hostId='ca9ff6f7-5a7c-4168-9632-998c52f76cfa',
vds='Host[novirt2.localdomain.local,ca9ff6f7-5a7c-4168-9632-998c52f76cfa]'})'
execution failed: java.net.ConnectException: Connection refused
and this message repeats continuously...
I also tried to restart vdsmd on novirt2 but nothing changed.
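For reference, the restart was roughly this (assuming the standard systemd unit names on the node):

# restart vdsm and check that it actually came back
systemctl restart vdsmd
systemctl status vdsmd supervdsmd

# the hosted-engine HA services, which I have not touched yet
systemctl status ovirt-ha-broker ovirt-ha-agent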
Do I have to restart the HA daemons on novirt2?
Any insight?
Thanks
Gianluca
3 years, 3 months
Error when trying to change master storage domain
by Matthew Benstead
Hello,
I'm trying to decommission the old master storage domain in ovirt, and
replace it with a new one. All of the VMs have been migrated off of the
old master, and everything has been running on the new storage domain
for a couple months. But when I try to put the old domain into
maintenance mode I get an error.
Old Master: vm-storage-ssd
New Domain: vm-storage-ssd2
The error is:
Failed to Reconstruct Master Domain for Data Center EDC2
As well as:
Sync Error on Master Domain between Host daccs01 and oVirt Engine.
Domain: vm-storage-ssd is marked as Master in oVirt Engine database but
not on the Storage side. Please consult with Support on how to fix this
issue.
2021-07-28 11:41:34,870-07 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
(EE-ManagedThreadFactory-engine-Thread-23) [] Master domain version is
not in sync between DB and VDSM. Domain vm-storage-ssd
marked as master, but the version in DB: 283 and in VDSM: 280
And:
Not stopping SPM on vds daccs01, pool id
f72ec125-69a1-4c1b-a5e1-313fcb70b6ff as there are uncleared tasks Task
'5fa9edf0-56c3-40e4-9327-47bf7764d28d', status 'finished'
After a couple minutes all the domains are marked as active again and
things continue, but vm-storage-ssd is still listed as the master
domain. Any thoughts?
This is on 4.3.10.4-1.el7 on CentOS 7.
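If it is relevant, my plan for checking the uncleared task is to look at it directly on the SPM host, along these lines (a sketch; I'm assuming the vdsm-client task verbs available on 4.3):

# on the SPM host (daccs01)
vdsm-client Host getAllTasksInfo
vdsm-client Host getAllTasksStatuses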
engine=# SELECT storage_name, storage_pool_id, storage, status FROM storage_pool_with_storage_domain ORDER BY storage_name;
     storage_name      |           storage_pool_id            |                storage                 | status
-----------------------+--------------------------------------+----------------------------------------+--------
 compute1-iscsi-ssd    | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | yvUESE-yWUv-VIWL-qX90-aAq7-gK0I-EqppRL |      1
 compute7-iscsi-ssd    | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | 8ekHdv-u0RJ-B0FO-LUUK-wDWs-iaxb-sh3W3J |      1
 export-domain-storage | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | d3932528-6844-481a-bfed-542872ace9e5   |      1
 iso-storage           | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | f800b7a6-6a0c-4560-8476-2f294412d87d   |      1
 vm-storage-7200rpm    | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | a0bff472-1348-4302-a5c7-f1177efa45a9   |      1
 vm-storage-ssd        | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | 95acd9a4-a6fb-4208-80dd-1c53d6aacad0   |      1
 vm-storage-ssd2       | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | 829d0600-c3f7-4dae-a749-d7f05c6a6ca4   |      1
(7 rows)
Thanks,
-Matthew
--
3 years, 3 months
Poor gluster performances over 10Gbps network
by Mathieu Valois
Sorry for double post but I don't know if this mail has been received.
Hello everyone,
I know this issue has already been discussed on this mailing list; however,
none of the proposed solutions satisfies me.
Here is my situation: I've got 3 hyperconverged Gluster oVirt nodes, with 6
network interfaces, bonded in pairs (management, VMs and gluster). The
gluster network is on a dedicated bond whose 2 interfaces are directly
connected to the 2 other oVirt nodes. Gluster is apparently using it:
# gluster volume status vmstore
Status of volume: vmstore
Gluster process                                        TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster-ov1:/gluster_bricks/vmstore/vmstore      49152     0          Y       3019
Brick gluster-ov2:/gluster_bricks/vmstore/vmstore      49152     0          Y       3009
Brick gluster-ov3:/gluster_bricks/vmstore/vmstore
where 'gluster-ov{1,2,3}' are domain names referencing the nodes on the
gluster network. This network has 10Gbps capability:
# iperf3 -c gluster-ov3
Connecting to host gluster-ov3, port 5201
[ 5] local 10.20.0.50 port 46220 connected to 10.20.0.51 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.16 GBytes 9.92 Gbits/sec 17 900 KBytes
[ 5] 1.00-2.00 sec 1.15 GBytes 9.90 Gbits/sec 0 900 KBytes
[ 5] 2.00-3.00 sec 1.15 GBytes 9.90 Gbits/sec 4 996 KBytes
[ 5] 3.00-4.00 sec 1.15 GBytes 9.90 Gbits/sec 1 996 KBytes
[ 5] 4.00-5.00 sec 1.15 GBytes 9.89 Gbits/sec 0 996 KBytes
[ 5] 5.00-6.00 sec 1.15 GBytes 9.90 Gbits/sec 0 996 KBytes
[ 5] 6.00-7.00 sec 1.15 GBytes 9.90 Gbits/sec 0 996 KBytes
[ 5] 7.00-8.00 sec 1.15 GBytes 9.91 Gbits/sec 0 996 KBytes
[ 5] 8.00-9.00 sec 1.15 GBytes 9.90 Gbits/sec 0 996 KBytes
[ 5] 9.00-10.00 sec 1.15 GBytes 9.90 Gbits/sec 0 996 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 11.5 GBytes 9.90 Gbits/sec 22             sender
[ 5] 0.00-10.04 sec 11.5 GBytes 9.86 Gbits/sec                receiver
iperf Done.
However, VMs stored on the vmstore gluster volume have poor write
performance, oscillating between 100KBps and 30MBps. I almost always
observe a write spike (180Mbps) at the beginning, until around 500MB is
written, then it drastically falls to 10MBps, sometimes even less
(100KBps). Hypervisors have 32 threads (2 sockets, 8 cores per socket, 2
threads per core).
Here are the volume settings:
Volume Name: vmstore
Type: Replicate
Volume ID: XXX
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster-ov1:/gluster_bricks/vmstore/vmstore
Brick2: gluster-ov2:/gluster_bricks/vmstore/vmstore
Brick3: gluster-ov3:/gluster_bricks/vmstore/vmstore
Options Reconfigured:
performance.io-thread-count: 32 # was 16 by default.
cluster.granular-entry-heal: enable
storage.owner-gid: 36
storage.owner-uid: 36
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
network.ping-timeout: 30
server.event-threads: 4
client.event-threads: 8 # was 4 by default
cluster.choose-local: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
When I naively write directly to the logical volume, which sits on a
hardware RAID5 array of 3 disks, I get decent performance:
# dd if=/dev/zero of=a bs=4M count=2048
2048+0 records in
2048+0 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 17.2485 s, 498 MB/s
#urandom gives around 200MBps
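For what it's worth, I know that dd from /dev/zero without direct I/O mostly measures the page cache; the variant I intend to compare against, using standard dd flags, is:

# bypass the page cache and flush at the end for a more realistic number
dd if=/dev/zero of=a bs=4M count=2048 oflag=direct conv=fsync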
Moreover, the hypervisors have SSDs which have been configured as lvcache,
but I'm unsure how to test that efficiently.
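What I had in mind for testing the lvcache layer is an fio run against the cached filesystem, something like the sketch below (the target file is only an example path on the cached LV; it should be a scratch file, not live brick data):

fio --name=lvcache-test --filename=/gluster_bricks/vmstore/fio-scratch.tmp \
    --size=4G --rw=randwrite --bs=64k --direct=1 --ioengine=libaio \
    --iodepth=16 --runtime=60 --time_based --group_reporting
rm /gluster_bricks/vmstore/fio-scratch.tmp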
I can't find where the problem is, as every piece of the chain is
apparently doing well...
Thanks anyone for helping me :)
--
téïcée - *Mathieu Valois*
Bureau Caen: Quartier Kœnig - 153, rue Géraldine MOCK - 14760 Bretteville-sur-Odon
Bureau Vitré: Zone de la baratière - 12, route de Domalain - 35500 Vitré
02 72 34 13 20 | www.teicee.com
3 years, 3 months
about the certification
by CuiTao
I want to understand how certificates work in OLVM, including the following:
First, what certificates exist in the OLVM system?
Second, what are the expiration dates of these certificates?
Third, how can I view the remaining time of a certificate and regenerate it
before expiration?
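To make the question concrete: on plain oVirt I believe the expiry dates can be read with openssl against the usual PKI paths, roughly as below (I'm assuming OLVM uses the same layout; please correct me if not):

# engine-side certificates
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/ca.pem
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/certs/engine.cer
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/certs/apache.cer

# host-side certificate used by vdsm
openssl x509 -noout -enddate -in /etc/pki/vdsm/certs/vdsmcert.pem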
Thanks!
3 years, 3 months
Redeploying hosted engine from backup
by Artem Tambovskiy
Hello,
Just had an issue with my cluster with a self-hosted engine (hosted-engine
is not coming up) and decided to redeploy it as I have a backup.
Just tried a hosted-engine --deploy --restore-from-file=engine.backup
But the script is asking questions like the DC name and cluster name, which I
can't recall correctly.
What will be the consequences of a wrong answer? Is there any chance to get
this info from the hosts or from the backup file?
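In case it helps: my fallback idea is to restore the backup database into a scratch PostgreSQL instance and read the names back with a couple of queries (a sketch; table names assumed from the 4.3/4.4 schema):

# data center names
psql -U engine -d engine -c "SELECT name FROM storage_pool;"
# cluster names
psql -U engine -d engine -c "SELECT name FROM cluster;"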
--
Regards,
Artem
3 years, 3 months
how to remove a failed backup operation
by Gianluca Cecchi
Hello,
I was trying incremental backup with the provided
/usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py and began
using the "full" option.
But I specified an incorrect dir, and during the backup I got an error due to
the filesystem filling up:
[ 156.7 ] Creating image transfer for disk
'33b0f6fb-a855-465d-a628-5fce9b64496a'
[ 157.8 ] Image transfer 'ccc386d3-9f9d-4727-832a-56d355d60a95' is ready
--- Logging error ---, 105.02 seconds, 147.48 MiB/s
Traceback (most recent call last):
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py",
line 242, in _run
handler.copy(req)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/io.py",
line 286, in copy
self._src.write_to(self._dst, req.length, self._buf)
File
"/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/http.py",
line 216, in write_to
writer.write(view[:n])
File
"/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py",
line 118, in write
self._client.write(self._position, buf)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py",
line 445, in write
self._recv_reply(cmd)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py",
line 980, in _recv_reply
if self._recv_reply_chunk(cmd):
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py",
line 1031, in _recv_reply_chunk
self._handle_error_chunk(length, flags)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py",
line 1144, in _handle_error_chunk
raise ReplyError(code, message)
ovirt_imageio._internal.nbd.ReplyError: Writing to file failed: [Error 28]
No space left on device
Now if I try the same backup command (so with the "full" option) I get:
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is
"[Cannot backup VM. The VM is during a backup operation.]". HTTP response
code is 409.
How can I clean up the situation?
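To frame the question, what I was considering is asking the engine to finalize the stuck backup through the REST API, along these lines (a sketch; URL, credentials and IDs are placeholders, and I'm assuming the finalize action is exposed for VM backups in 4.4):

# list the backups the engine still tracks for the VM
curl -s -k -u admin@internal:PASSWORD -H "Accept: application/xml" \
    "https://engine.example.com/ovirt-engine/api/vms/VM_ID/backups"

# ask the engine to finalize the stuck backup
curl -s -k -u admin@internal:PASSWORD -X POST \
    -H "Content-Type: application/xml" -d "<action/>" \
    "https://engine.example.com/ovirt-engine/api/vms/VM_ID/backups/BACKUP_ID/finalize"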
BTW: is the parameter to put into ovirt.conf "backup-dir" or "backup_dir", or
something else?
Thanks,
Gianluca
3 years, 3 months
Re: Poor gluster performances over 10Gbps network
by Mathieu Valois
The default shard size was 64MB. I've tried creating another volume with a
512MB shard size and the settings you provided.
I observe a performance improvement with `dd`, which does sequential
writes (from 1MBps to 50MBps). However, I find the VMs still slow: as an
example, `apt upgrade` is very slow to unpack packages, more than 1 hour
for around 400MB (90 packages).
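To mimic what apt does (many small writes with an fsync after each), I plan to run a small-block fsync-heavy fio job inside the VM, roughly:

fio --name=pkg-unpack-sim --directory=/var/tmp --ioengine=psync \
    --rw=randwrite --bs=4k --size=256M --fsync=1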
On 08/09/2021 at 11:34, Alex K wrote:
>
> On Wed, Sep 8, 2021 at 12:33 PM Alex K <rightkicktech(a)gmail.com> wrote:
>
> On Wed, Sep 8, 2021 at 12:15 PM Mathieu Valois <mvalois(a)teicee.com> wrote:
>
> [Mathieu's original message and the vmstore volume options quoted in full;
> see the earlier post in this thread.]
>
> I see you have not enabled sharding for the gluster volume that is
> hosting VMs. I usually set a shard size of 512MB. Note though that
> changing shard size in a live system will corrupt your data.
> You need to wipe and set again.
>
> I mean you have not changed the default shard size, which is I think 4MB.
>
>
> I've seen also good perf with the following tweaks:
>
> gluster volume set vms remote-dio enable
> gluster volume set vms performance.write-behind-window-size 8MB
> gluster volume set vms performance.cache-size 1GB
> gluster volume set vms performance.io-thread-count 16
> gluster volume set vms performance.readdir-ahead on
> gluster volume set vms client.event-threads 8
> gluster volume set vms server.event-threads 8
> gluster volume set engine features.shard-block-size 512MB
>
> [rest of the quoted original message, signature, and mailing list footer
> trimmed]
>
--
téïcée - *Mathieu Valois*
Bureau Caen: Quartier Kœnig - 153, rue Géraldine MOCK - 14760 Bretteville-sur-Odon
Bureau Vitré: Zone de la baratière - 12, route de Domalain - 35500 Vitré
02 72 34 13 20 | www.teicee.com
3 years, 3 months
data storage domain iso upload problem
by csabany@freemail.hu
Hi,
I manage an oVirt 4.4.7 setup for production systems.
Last week I removed the master storage domain (moved templates and VMs off it as well, unattached it, etc.), but I forgot to move the ISOs.
Now, when I upload a new ISO to a data storage domain, the system shows it, but it's unbootable:
"could not read from cdrom code 0005"
thanks
csabany
3 years, 3 months
Re: Time Drift Issues
by Marcos Sungaila
Hi Nur,
Yes, oVirt Node comes with an NTP client installed and started. You need to configure the servers it will use as external sources.
Yes, it will be preserved in a host upgrade.
Regards,
Marcos
From: Nur Imam Febrianto <nur_imam(a)outlook.com>
Sent: Wednesday, 8 September 2021 08:09
To: Marcos Sungaila <marcos.sungaila(a)oracle.com>; oVirt Users <users(a)ovirt.org>
Subject: [External] : RE: Time Drift Issues
Hi Marcos,
Want to clarify one thing. If I'm using an oVirt Node based host (not an EL-based one), is it already configured with an NTP client, or can I just configure any NTP client on the host? If I just configure an NTP client on the host, either chronyd or ntpd, will it persist after upgrading the node?
Thanks before.
Regards,
Nur Imam Febrianto
From: Marcos Sungaila <marcos.sungaila@oracle.com>
Sent: Tuesday, 07 September 2021 20:54
To: Nur Imam Febrianto <nur_imam@outlook.com>; oVirt Users <users@ovirt.org>
Subject: RE: Time Drift Issues
Hi Nur,
ntpd has a 300-second limit for synchronization. If you want a more flexible NTP client, use chronyd.
If you prefer to use ntpd, you should have ntpdate as a boot client and ntpd as a runtime client.
When the server boots, the ntpdate client will sync the time no matter the difference. As a boot client, it runs only once and exits. Then the ntpd service starts and keeps your server clock in sync with an external source.
On distributions like CentOS and similar, ntpdate reads the external time source configuration from ntpd.conf. You need both packages and services installed and enabled.
I prefer chronyd since it does not need any extra service or procedure to keep the clock synchronized.
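As an example, a minimal chrony setup on a host looks roughly like this (the server names are placeholders for your own sources):

# add your NTP sources
echo "server ntp1.example.com iburst" >> /etc/chrony.conf
echo "server ntp2.example.com iburst" >> /etc/chrony.conf

# enable the service, step the clock once if the offset is large, then verify
systemctl enable --now chronyd
chronyc makestep
chronyc tracking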
Regards,
Marcos
From: Nur Imam Febrianto <nur_imam(a)outlook.com>
Sent: Tuesday, 7 September 2021 01:08
To: oVirt Users <users(a)ovirt.org>
Subject: [External] : [ovirt-users] Time Drift Issues
Hi All,
Recently I got a warning in our cluster about time drift:
Host xxxxx has time-drift of 672 seconds while maximum configured value is 300 seconds.
What should I do to address this issue? Should I reconfigure / configure the NTP client on all hosts?
Thanks before.
Regards,
Nur Imam Febrianto
3 years, 3 months