
What I would like is a working config consisting of 2 storage servers, 2 hosts and a management server. Storage is connected to a pair of 10G switches, and the public side of the servers and VMs is connected to a pair of access switches. For that I need:

1. bonding
2. separate networks for storage and ovirtmgmt
3. storage using gluster

Ideally, all configuration would be done from the webui.
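For reference, outside the webui a bond is normally defined with ifcfg files. This is only a minimal sketch under assumptions: the device names (bond0, em1, em2) and the LACP mode are placeholders, not taken from my actual setup:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical example)
DEVICE=bond0
# mode=4 is 802.3ad/LACP, assuming the switch pair supports it
BONDING_OPTS="mode=4 miimon=100"
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1  (one slave NIC; repeat for em2)
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```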
Items 1 and 2 need a DC/Cluster version of 3.2, but then I'm stopped from going any further because the version of vdsmd on the storage server isn't compatible with the DC/Cluster version.
Can you please explain what the compatibility issue between the two is? The message in the webui console is:

Host st01 is compatible with versions (3.0,3.1) and cannot join Cluster Default which is set to version 3.2
(vdsm nightly should give 3.2 compatibility)? Where does that nightly come from? On the management server I have:

[root@mgmt01 /]# rpm -aq | grep vdsm
vdsm-bootstrap-4.10.1-0.129.git2c2c228.fc17.noarch

which is newer than the vdsm on the hosts:

[root@st01 ~]# rpm -aq | grep vdsm
vdsm-python-4.10.0-10.fc17.x86_64
vdsm-xmlrpc-4.10.0-10.fc17.noarch
vdsm-gluster-4.10.0-10.fc17.noarch
vdsm-cli-4.10.0-10.fc17.noarch
vdsm-4.10.0-10.fc17.x86_64
vdsm-rest-4.10.0-10.fc17.noarch
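The version gap can be double-checked by comparing the two version strings directly with sort -V; a minimal sketch, with the version numbers copied from the rpm output above:

```shell
#!/bin/sh
# Compare the vdsm version on the host against the one the engine ships.
host_vdsm="4.10.0-10"   # from the rpm query on st01
engine_vdsm="4.10.1"    # from the vdsm-bootstrap package on mgmt01

# sort -V orders version strings numerically; head -n1 picks the oldest.
oldest=$(printf '%s\n%s\n' "$host_vdsm" "$engine_vdsm" | sort -V | head -n1)
if [ "$oldest" = "$host_vdsm" ] && [ "$host_vdsm" != "$engine_vdsm" ]; then
    echo "host vdsm $host_vdsm is older than engine's $engine_vdsm"
fi
```

So the host would need a vdsm at least as new as the engine's nightly before it can report 3.2 cluster compatibility.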
But I just put st01 into maintenance and did a reinstall, yet vdsm was not updated to a newer version. I still have the vdsm-bootstrap logs in /tmp if needed.

Thanks,
Joop