Hi,
I think it may be a problem with vdsm and gluster 10; I've reported a similar issue in
another thread. Vdsm throws an exception when parsing the XML output of gluster volume
info with the latest gluster version, 10. This is particularly bad once the gluster
server updates have been completed by raising the op-version, as that is basically
irreversible and it's not even possible to easily downgrade gluster. In fact, if you
downgrade to or stay on gluster 8, it works.
I'm digging into the vdsm code to see if I can find the root cause.
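In the meantime, a rough way to check whether the XML itself is the problem on an
affected node is to run the same gluster volume info --xml call and feed it to a plain
XML parser. This is only a sketch approximating what vdsm does, not its real code path:

import subprocess
import xml.etree.ElementTree as ET

# Run the same CLI query vdsm relies on (script mode, XML output).
out = subprocess.run(
    ["gluster", "--mode=script", "volume", "info", "--xml"],
    stdout=subprocess.PIPE, check=True,
).stdout

try:
    root = ET.fromstring(out)
except ET.ParseError as e:
    print("XML itself is malformed:", e)
else:
    # If this succeeds, the XML is well formed and the failure is in how
    # vdsm interprets the gluster 10 output, not in the XML syntax.
    for vol in root.iter("volume"):
        print(vol.findtext("name"), vol.findtext("statusStr"))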
Cheers,
Alessandro
On 25 Apr 2022, at 09:24, diego.ercolani(a)ssis.sm wrote:
Hello, I have an issue that is probably related to my particular setup, but I think some
checks are missing.
Here is the story.
I have a cluster of two 4.4.10.3 nodes with an upgraded kernel, as the CPU (Ryzen 5)
suffers from an incompatibility issue with the kernel provided by the 4.4.10.x series.
On each node there are three glusterfs "partitions" in replica mode: one for
the hosted_engine, the other two for user data.
The third node was an old i3 workstation only used to provide the arbiter partition to
the glusterfs cluster.
I installed a new server (Ryzen processor) with 4.5.0, successfully installed
glusterfs 10.1, and, after removing the bricks provided by the old i3, added new arbiter
bricks on glusterfs 10.1, while the replica bricks are still on 8.6.
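Given the mix of 10.1 and 8.6 bricks, it is probably worth recording the op-version
state of the cluster as well. A minimal sketch using only the standard gluster CLI
queries (assuming the gluster binary is in PATH):

import subprocess

# Negotiated cluster op-version vs. the maximum this node supports.
for key in ("cluster.op-version", "cluster.max-op-version"):
    out = subprocess.run(
        ["gluster", "volume", "get", "all", key],
        stdout=subprocess.PIPE, check=True,
    ).stdout.decode()
    print(out.strip())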
I successfully imported the new node into the oVirt engine (after updating the engine to
4.5).
The problem is that ovirt-ha-broker doesn't start, complaining that it is not
possible to connect to the storage (I suppose the hosted_engine storage), so I did some
digging, which I'm going to show here:
####
1. The node seems to be correctly configured:
[root@ovirt-node3 devices]# vdsm-tool validate-config
SUCCESS: ssl configured to true. No conflicts
[root@ovirt-node3 devices]# vdsm-tool configure
Checking configuration status...
libvirt is already configured for vdsm
SUCCESS: ssl configured to true. No conflicts
sanlock is configured for vdsm
Managed volume database is already configured
lvm is configured for vdsm
Current revision of multipath.conf detected, preserving
Running configure...
Done configuring modules to VDSM.
[root@ovirt-node3 devices]# vdsm-tool validate-config
SUCCESS: ssl configured to true. No conflicts
####
2. The node refuses to mount the storage via hosted-engine (same error in broker.log):
[root@ovirt-node3 devices]# hosted-engine --connect-storage
Traceback (most recent call last):
File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/connect_storage_server.py",
line 30, in <module>
timeout=ohostedcons.Const.STORAGE_SERVER_TIMEOUT,
File
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line
312, in connect_storage_server
sserver.connect_storage_server(timeout=timeout)
File
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_server.py",
line 451, in connect_storage_server
'Connection to storage server failed'
RuntimeError: Connection to storage server failed
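The RuntimeError above is generic; the underlying cause is normally reported a few lines
earlier in broker.log or vdsm.log. A small sketch to pull the most recent error lines
from both (the paths are the standard ones on an oVirt node):

# Show the last ERROR/Traceback entries from the broker and vdsm logs,
# where the actual storage-connection failure is reported.
logs = (
    "/var/log/ovirt-hosted-engine-ha/broker.log",
    "/var/log/vdsm/vdsm.log",
)
for path in logs:
    try:
        with open(path, errors="replace") as f:
            lines = f.readlines()
    except OSError as e:
        print(path, e)
        continue
    hits = [l.rstrip() for l in lines if "ERROR" in l or "Traceback" in l]
    print("==", path, "- last error lines ==")
    print("\n".join(hits[-10:]))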
####
3. Manual mount of glusterfs works correctly:
[root@ovirt-node3 devices]# grep storage /etc/ovirt-hosted-engine/hosted-engine.conf
storage=ovirt-node2.ovirt:/gveng
# The following are used only for iSCSI storage
[root@ovirt-node3 devices]#
[root@ovirt-node3 devices]# mount -t glusterfs ovirt-node2.ovirt:/gveng /mnt/tmp/
[root@ovirt-node3 devices]# ls -l /mnt/tmp
total 0
drwxr-xr-x. 6 vdsm kvm 64 Dec 15 19:04 7b8f1cc9-e3de-401f-b97f-8c281ca30482
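For completeness, the listing already shows vdsm kvm ownership, which is what the broker
needs. A quick sketch to double-check the numeric uid/gid against the 36:36 that vdsm
uses on a standard oVirt node (path taken from the listing above):

import grp
import os
import pwd

# Storage domain directory from the manual mount above.
path = "/mnt/tmp/7b8f1cc9-e3de-401f-b97f-8c281ca30482"
st = os.stat(path)
owner = pwd.getpwuid(st.st_uid).pw_name
group = grp.getgrgid(st.st_gid).gr_name
print(path, "->", "%s:%s (%d:%d)" % (owner, group, st.st_uid, st.st_gid))
if (owner, group) != ("vdsm", "kvm"):
    print("ownership differs from the vdsm:kvm that vdsm expects")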
What else should I check? Thank you, and sorry for the long message.
Diego