
I continue my quest to get a working version of the latest oVirt and have started with a clean setup. What I would like is a working config consisting of 2 storage servers, 2 hosts and a management server, with storage connected to a pair of 10G switches and the public side of the servers and VMs connected to a pair of access switches. For that I need:

1. bonding
2. separate networks for storage and ovirtmgmt
3. storage using gluster

Ideally all configuration would be done from the webui. Items 1 and 2 need a DC/Cluster version of 3.2, but then I'm stopped from going any further because the version of vdsmd on the storage server isn't compatible with the DC/Cluster version.

How can I proceed with my 3.2 testing, or does someone have a better plan for getting this setup working?

Thanks in advance,

Joop
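
For reference, the gluster side of such a setup can also be prepared from the shell before adding the storage in the webui. A minimal sketch, assuming the two storage servers are named st01 and st02 and that the bricks live under /data (st02 and the brick paths are made up for illustration; only st01 actually appears in this thread):

# On st01: add the second storage server to the trusted pool.
gluster peer probe st02

# Create a 2-way replicated volume with one brick per server.
gluster volume create vmstore replica 2 st01:/data/vmstore st02:/data/vmstore

# vdsm runs qemu as uid/gid 36 (vdsm:kvm), so hand it ownership of the volume.
gluster volume set vmstore storage.owner-uid 36
gluster volume set vmstore storage.owner-gid 36

gluster volume start vmstore
gluster volume info vmstore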

On 11/15/2012 03:54 PM, Joop wrote:
> I continue my quest to get a working version of the latest oVirt and have started with a clean setup.
> What I would like is a working config consisting of 2 storage servers, 2 hosts and a management server, with storage connected to a pair of 10G switches and the public side of the servers and VMs connected to a pair of access switches. For that I need:
> 1. bonding
> 2. separate networks for storage and ovirtmgmt
> 3. storage using gluster
> Ideally all configuration would be done from the webui.
> Items 1 and 2 need a DC/Cluster version of 3.2, but then I'm stopped from going any further because the version of vdsmd on the storage server isn't compatible with the DC/Cluster version.

can you please explain what's the issue of compatibility between the two (vdsm nightly should give a 3.2 compatibility)?

> How can I proceed with my 3.2 testing, or does someone have a better plan for getting this setup working?
>
> Thanks in advance,
>
> Joop

>> What I would like is a working config consisting of 2 storage servers, 2 hosts and a management server, with storage connected to a pair of 10G switches and the public side of the servers and VMs connected to a pair of access switches. For that I need:
>> 1. bonding
>> 2. separate networks for storage and ovirtmgmt
>> 3. storage using gluster
>> Ideally all configuration would be done from the webui.
>> Items 1 and 2 need a DC/Cluster version of 3.2, but then I'm stopped from going any further because the version of vdsmd on the storage server isn't compatible with the DC/Cluster version.
>
> can you please explain what's the issue of compatibility between the two

The message in the webui console is: "Host st01 is compatible with versions (3.0,3.1) and cannot join Cluster Default which is set to version 3.2".

> (vdsm nightly should give a 3.2 compatibility)?

Where does that nightly come from? On the management server I have:

[root@mgmt01 /]# rpm -aq | grep vdsm
vdsm-bootstrap-4.10.1-0.129.git2c2c228.fc17.noarch

which is newer than the vdsm on the hosts:

[root@st01 ~]# rpm -aq | grep vdsm
vdsm-python-4.10.0-10.fc17.x86_64
vdsm-xmlrpc-4.10.0-10.fc17.noarch
vdsm-gluster-4.10.0-10.fc17.noarch
vdsm-cli-4.10.0-10.fc17.noarch
vdsm-4.10.0-10.fc17.x86_64
vdsm-rest-4.10.0-10.fc17.noarch

I just put st01 into maintenance and did a reinstall, but there was no version update of vdsm. I still have the vdsm-bootstrap logs in /tmp if needed.

Thanks,
Joop
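
As a side note, what the engine is reacting to can be verified on the host itself: vdsm advertises the cluster levels it supports, and the error above shows that vdsm-4.10.0 only advertises 3.0 and 3.1. A minimal check using the vdsm-cli package already installed on st01 (the key name and output shown are from memory and may differ between versions):

# Ask vdsm for the capabilities it reports to the engine; the supported
# cluster levels are what the "compatible with versions (...)" check uses.
[root@st01 ~]# vdsClient -s 0 getVdsCaps | grep -i clusterLevels
	clusterLevels = ['3.0', '3.1']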

On 11/15/2012 04:53 PM, Joop wrote:
>>> What I would like is a working config consisting of 2 storage servers, 2 hosts and a management server, with storage connected to a pair of 10G switches and the public side of the servers and VMs connected to a pair of access switches. For that I need:
>>> 1. bonding
>>> 2. separate networks for storage and ovirtmgmt
>>> 3. storage using gluster
>>> Ideally all configuration would be done from the webui.
>>> Items 1 and 2 need a DC/Cluster version of 3.2, but then I'm stopped from going any further because the version of vdsmd on the storage server isn't compatible with the DC/Cluster version.
>>
>> can you please explain what's the issue of compatibility between the two
>
> The message in the webui console is: "Host st01 is compatible with versions (3.0,3.1) and cannot join Cluster Default which is set to version 3.2".
>
>> (vdsm nightly should give a 3.2 compatibility)?
>
> Where does that nightly come from? On the management server I have:
>
> [root@mgmt01 /]# rpm -aq | grep vdsm
> vdsm-bootstrap-4.10.1-0.129.git2c2c228.fc17.noarch
>
> which is newer than the vdsm on the hosts:
>
> [root@st01 ~]# rpm -aq | grep vdsm
> vdsm-python-4.10.0-10.fc17.x86_64
> vdsm-xmlrpc-4.10.0-10.fc17.noarch
> vdsm-gluster-4.10.0-10.fc17.noarch
> vdsm-cli-4.10.0-10.fc17.noarch
> vdsm-4.10.0-10.fc17.x86_64
> vdsm-rest-4.10.0-10.fc17.noarch
>
> I just put st01 into maintenance and did a reinstall, but there was no version update of vdsm. I still have the vdsm-bootstrap logs in /tmp if needed.
>
> Thanks,
> Joop

danken, which version of vdsm upstream is 3.2 compatible? where can users get it?

thanks,
Itamar

On Thu, Nov 15, 2012 at 08:03:51PM +0200, Itamar Heim wrote:
> On 11/15/2012 04:53 PM, Joop wrote:
>>>> What I would like is a working config consisting of 2 storage servers, 2 hosts and a management server, with storage connected to a pair of 10G switches and the public side of the servers and VMs connected to a pair of access switches. For that I need:
>>>> 1. bonding
>>>> 2. separate networks for storage and ovirtmgmt
>>>> 3. storage using gluster
>>>> Ideally all configuration would be done from the webui.
>>>> Items 1 and 2 need a DC/Cluster version of 3.2, but then I'm stopped from going any further because the version of vdsmd on the storage server isn't compatible with the DC/Cluster version.
>>>
>>> can you please explain what's the issue of compatibility between the two
>>
>> The message in the webui console is: "Host st01 is compatible with versions (3.0,3.1) and cannot join Cluster Default which is set to version 3.2".
>>
>>> (vdsm nightly should give a 3.2 compatibility)?
>>
>> Where does that nightly come from? On the management server I have:
>>
>> [root@mgmt01 /]# rpm -aq | grep vdsm
>> vdsm-bootstrap-4.10.1-0.129.git2c2c228.fc17.noarch
>>
>> which is newer than the vdsm on the hosts:
>>
>> [root@st01 ~]# rpm -aq | grep vdsm
>> vdsm-python-4.10.0-10.fc17.x86_64
>> vdsm-xmlrpc-4.10.0-10.fc17.noarch
>> vdsm-gluster-4.10.0-10.fc17.noarch
>> vdsm-cli-4.10.0-10.fc17.noarch
>> vdsm-4.10.0-10.fc17.x86_64
>> vdsm-rest-4.10.0-10.fc17.noarch
>>
>> I just put st01 into maintenance and did a reinstall, but there was no version update of vdsm. I still have the vdsm-bootstrap logs in /tmp if needed.
>>
>> Thanks,
>> Joop
>
> danken, which version of vdsm upstream is 3.2 compatible? where can users get it?

We haven't had an ovirt-3.2 release yet. Users can build it themselves from the master branch's vdsm-4.10.2-something.

Federico, could you build a test-only package for Fedora 18, based on master?
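
For anyone who wants to try that, building vdsm RPMs from the master branch roughly comes down to the sketch below. It is an outline rather than the official procedure: the gerrit URL is the project's source repository, but the autogen/make invocations are assumed from the project tooling of the time, and build dependencies (autoconf, automake, python-devel, and friends) must be installed first.

# Fetch the vdsm source and build packages from master.
git clone http://gerrit.ovirt.org/vdsm
cd vdsm
./autogen.sh --system
./configure
make rpm

# The packages should end up under the rpmbuild tree, e.g.:
ls ~/rpmbuild/RPMS/x86_64/vdsm-4.10.2*.rpm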

On 11/15/2012 11:52 PM, Dan Kenigsberg wrote:
> On Thu, Nov 15, 2012 at 08:03:51PM +0200, Itamar Heim wrote:
>> On 11/15/2012 04:53 PM, Joop wrote:
>>>>> What I would like is a working config consisting of 2 storage servers, 2 hosts and a management server, with storage connected to a pair of 10G switches and the public side of the servers and VMs connected to a pair of access switches. For that I need:
>>>>> 1. bonding
>>>>> 2. separate networks for storage and ovirtmgmt
>>>>> 3. storage using gluster
>>>>> Ideally all configuration would be done from the webui.
>>>>> Items 1 and 2 need a DC/Cluster version of 3.2, but then I'm stopped from going any further because the version of vdsmd on the storage server isn't compatible with the DC/Cluster version.
>>>>
>>>> can you please explain what's the issue of compatibility between the two
>>>
>>> The message in the webui console is: "Host st01 is compatible with versions (3.0,3.1) and cannot join Cluster Default which is set to version 3.2".
>>>
>>>> (vdsm nightly should give a 3.2 compatibility)?
>>>
>>> Where does that nightly come from? On the management server I have:
>>>
>>> [root@mgmt01 /]# rpm -aq | grep vdsm
>>> vdsm-bootstrap-4.10.1-0.129.git2c2c228.fc17.noarch
>>>
>>> which is newer than the vdsm on the hosts:
>>>
>>> [root@st01 ~]# rpm -aq | grep vdsm
>>> vdsm-python-4.10.0-10.fc17.x86_64
>>> vdsm-xmlrpc-4.10.0-10.fc17.noarch
>>> vdsm-gluster-4.10.0-10.fc17.noarch
>>> vdsm-cli-4.10.0-10.fc17.noarch
>>> vdsm-4.10.0-10.fc17.x86_64
>>> vdsm-rest-4.10.0-10.fc17.noarch
>>>
>>> I just put st01 into maintenance and did a reinstall, but there was no version update of vdsm. I still have the vdsm-bootstrap logs in /tmp if needed.
>>>
>>> Thanks,
>>> Joop
>>
>> danken, which version of vdsm upstream is 3.2 compatible? where can users get it?
>
> We haven't had an ovirt-3.2 release yet. Users can build it themselves from the master branch's vdsm-4.10.2-something.
>
> Federico, could you build a test-only package for Fedora 18, based on master?

true, but the nightly of ovirt-engine includes 3.2 (actually, it is just waiting for a long overdue patch to change the version to 3.2). isn't there a nightly vdsm build as well which reports the 3.2 version?
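
If such a nightly vdsm build exists, consuming it on the hosts would look roughly like the sketch below. The repo stanza is a placeholder: the baseurl shown is not verified and would have to be taken from the nightly-build instructions on ovirt.org.

# /etc/yum.repos.d/ovirt-nightly.repo (illustrative only; substitute the real URL)
[ovirt-nightly]
name=oVirt nightly builds
baseurl=http://www.ovirt.org/releases/nightly/rpm/Fedora/17/
enabled=1
gpgcheck=0

# Then, on each host, pull in the newer vdsm and restart the daemon:
yum upgrade 'vdsm*'
systemctl restart vdsmd.service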

----- Original Message -----
From: "Dan Kenigsberg" <danken@redhat.com> To: "Itamar Heim" <iheim@redhat.com>, "Federico Simoncelli" <fsimonce@redhat.com> Cc: "Joop" <jvdwege@xs4all.nl>, users@ovirt.org, "Moran Goldboim" <mgoldboi@redhat.com>, "Ofer Schreiber" <oschreib@redhat.com> Sent: Thursday, November 15, 2012 10:52:39 PM Subject: Re: [Users] Testing oVirt nightly
> On Thu, Nov 15, 2012 at 08:03:51PM +0200, Itamar Heim wrote:
>> On 11/15/2012 04:53 PM, Joop wrote:
>>>>> What I would like is a working config consisting of 2 storage servers, 2 hosts and a management server, with storage connected to a pair of 10G switches and the public side of the servers and VMs connected to a pair of access switches. For that I need:
>>>>> 1. bonding
>>>>> 2. separate networks for storage and ovirtmgmt
>>>>> 3. storage using gluster
>>>>> Ideally all configuration would be done from the webui.
>>>>> Items 1 and 2 need a DC/Cluster version of 3.2, but then I'm stopped from going any further because the version of vdsmd on the storage server isn't compatible with the DC/Cluster version.
>>>>
>>>> can you please explain what's the issue of compatibility between the two
>>>
>>> The message in the webui console is: "Host st01 is compatible with versions (3.0,3.1) and cannot join Cluster Default which is set to version 3.2".
>>>
>>>> (vdsm nightly should give a 3.2 compatibility)?
>>>
>>> Where does that nightly come from? On the management server I have:
>>>
>>> [root@mgmt01 /]# rpm -aq | grep vdsm
>>> vdsm-bootstrap-4.10.1-0.129.git2c2c228.fc17.noarch
>>>
>>> which is newer than the vdsm on the hosts:
>>>
>>> [root@st01 ~]# rpm -aq | grep vdsm
>>> vdsm-python-4.10.0-10.fc17.x86_64
>>> vdsm-xmlrpc-4.10.0-10.fc17.noarch
>>> vdsm-gluster-4.10.0-10.fc17.noarch
>>> vdsm-cli-4.10.0-10.fc17.noarch
>>> vdsm-4.10.0-10.fc17.x86_64
>>> vdsm-rest-4.10.0-10.fc17.noarch
>>>
>>> I just put st01 into maintenance and did a reinstall, but there was no version update of vdsm. I still have the vdsm-bootstrap logs in /tmp if needed.
>>
>> danken, which version of vdsm upstream is 3.2 compatible? where can users get it?
>
> We haven't had an ovirt-3.2 release yet. Users can build it themselves from the master branch's vdsm-4.10.2-something.
>
> Federico, could you build a test-only package for Fedora 18, based on master?
It looks like Joop is running vdsm on fc17 (from the rpm versions). I just did a test-only build of vdsm-4.10.1 for fc17 that can be found here:

http://fsimonce.fedorapeople.org/vdsm-3.2-f17/x86_64/

For fc18 there's already a build here (vdsm-4.10.1-1.gitgf2f6683.fc18):

http://koji.fedoraproject.org/koji/packageinfo?packageID=12944

I intend to do a newer build for fc18 in the next few days (if that's what you were asking).

--
Federico
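
Picking up such a test build on an fc17 host could then be as simple as the sketch below; the URL is the one Federico posted above, and the wildcard assumes all vdsm subpackages end up in the current directory.

# Fetch all the test-only fc17 packages into the current directory.
wget -r -np -nd -A '*.rpm' http://fsimonce.fedorapeople.org/vdsm-3.2-f17/x86_64/

# Upgrade the installed vdsm packages in one transaction so the
# subpackage versions stay consistent, then restart the daemon.
yum localupdate --nogpgcheck vdsm-*.rpm
systemctl restart vdsmd.service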

Federico Simoncelli wrote:
> ----- Original Message -----
> It looks like Joop is running vdsm on fc17 (from the rpm versions). I just did a test-only build of vdsm-4.10.1 for fc17 that can be found here:
>
> http://fsimonce.fedorapeople.org/vdsm-3.2-f17/x86_64/
>
> For fc18 there's already a build here (vdsm-4.10.1-1.gitgf2f6683.fc18):
>
> http://koji.fedoraproject.org/koji/packageinfo?packageID=12944
>
> I intend to do a newer build for fc18 in the next few days (if that's what you were asking).

I'll be switching to FC18 to further test the 3.2 functionality. First impressions are that things look good, and if I encounter problems I'll let you know. Thanks for your patience with me.

Joop