Gluster and a few iSCSI Datastores in one Data Center

I decided to add another cluster to the existing data center (Enable Virt Service + Enable Gluster Service). Three nodes. But after the nodes are installed (without errors), oVirt Engine cyclically loses them. From the logs you can see that this happens when it tries to query the attached block devices. These block devices are LVs backing VMs. I use several Datastores presented via iSCSI: about 2000 virtual machines and more than 2300 LVs, all in the neighboring cluster (same data center). Logs: https://drive.google.com/open?id=1ja7Usxx5YCFDgjD2X51z9tzn_ycPoC2g Why is this happening and what can be done in this case?
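
A quick way to gauge how many block devices vdsm has to enumerate on a host at this scale is something like the following (a rough sketch using standard iscsiadm/multipath/LVM commands; adjust to your setup):

    iscsiadm -m session              # active iSCSI sessions
    multipath -ll | grep -c 'dm-'    # multipath maps backing the storage domains
    pvs --noheadings | wc -l         # physical volumes visible to the host
    lvs --noheadings | wc -l         # logical volumes (roughly one per VM disk)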

There seem to be communication issues between the vdsmd and supervdsmd services. Can you check the status of both on the nodes? Perhaps try restarting these services.
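
On a node, that check and restart would look something like this (standard systemd unit names on oVirt hosts):

    systemctl status vdsmd supervdsmd
    journalctl -u supervdsmd --since "1 hour ago"    # recent supervdsmd messages
    systemctl restart supervdsmd vdsmd               # vdsmd depends on supervdsmd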

Hello Sahina. Of course, both services are running. Restarting the services changes nothing; the logs confirm this. If you run tailf /var/log/vdsm/supervdsm.log, you can watch lines scroll by at roughly 1000 lines/sec; within an hour supervdsm.log grows past 100 MB. If the node is moved to Maintenance, the anomalous growth of supervdsm.log stops, and it resumes after the node is activated.
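
To put a number on that growth and see which call is flooding the log, something like this should work (a rough sketch; the awk field index assumes the usual "::"-separated vdsm log line layout and may need adjusting):

    # average lines per second over a 10-second window
    before=$(wc -l < /var/log/vdsm/supervdsm.log)
    sleep 10
    after=$(wc -l < /var/log/vdsm/supervdsm.log)
    echo $(( (after - before) / 10 ))

    # most frequent source modules in the last 10000 lines
    tail -n 10000 /var/log/vdsm/supervdsm.log | awk -F'::' '{print $4}' | sort | uniq -c | sort -rn | head
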
participants (2)

- Sahina Bose
- toslavik@yandex.ru