[Users] EL 6.3+ nodes and 3.1 compatibility?

With ovirt 3.1, EL 6.3 nodes will only work with a data center set to 3.0 compatibility mode. Is there any reason why EL 6.3+ nodes would not be allowed in a 3.1 compatibility level cluster? This assumes no use of gluster, since adding vdsm-gluster to an EL 6 node requires some package update/version work (not impossible). - DHC

On Aug 10, 2012, at 08:02 , Dead Horse wrote:
With ovirt 3.1 EL 6.3 nodes will only work with a data center set to 3.0 compatibility mode. Is there any reason(s) why EL 6.3+ nodes would not be allowed in a 3.1 compatibility level cluster?

Why would they only work in 3.0 compatibility? 3.1 nodes work in 3.1 cluster level because, well, they are the same version. Or I didn't get the question, in which case please elaborate :)
This assumes non use of gluster since adding vdsm-gluster to an EL 6 node requires some work package update/version wise (not impossible).
- DHC
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On 08/10/2012 09:16 AM, Michal Skrivanek wrote:
On Aug 10, 2012, at 08:02 , Dead Horse wrote:
With ovirt 3.1 EL 6.3 nodes will only work with a data center set to 3.0 compatibility mode. Is there any reason(s) why EL 6.3+ nodes would not be allowed in a 3.1 compatibility level cluster? Why would they only work in 3.0 compatibility? 3.1 nodes work in 3.1 cluster level because, well, they are the same version. Or I didn't get the question and please elaborate:)
which version of VDSM do you have on these nodes? what does 'vdsClient -s 0 getVdsCaps' return on them?
This assumes non use of gluster since adding vdsm-gluster to an EL 6 node requires some work package update/version wise (not impossible).
- DHC

When I try adding an EL 6.3 based node to an ovirt 3.1 cluster, ovirt complains that the host must be run in 3.0 compatibility mode and then bombs out.

The version of VDSM on the node is: vdsm-4.9-113.1.el6.x86_64 vdsm-cli-4.9-113.1.el6.x86_64

Other misc versions: kernel-2.6.32-279.2.1.el6.x86_64 libvirt-0.9.10-21.el6_3.3.x86_64 libvirt-python-0.9.10-21.el6_3.3.x86_64 qemu-img-0.12.1.2-2.295.el6_3.1.x86_64 qemu-kvm-0.12.1.2-2.295.el6_3.1.x86_64 gpxe-roms-qemu-0.9.7-6.9.el6.noarch qemu-kvm-tools-0.12.1.2-2.295.el6_3.1.x86_64 spice-server-0.10.1-10.el6.x86_64

'vdsClient -s 0 getVdsCaps' does not seem to work since I have SSL disabled for vdsm, so here is the output from the vdsm.log: Thread-13::DEBUG::2012-08-10 00:46:26,202::clientIF::59::vds::(wrapper) return getVdsCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:decf7eb6ff3f'}], 'FC': []}, 'packages2': {'kernel': {'release': '279.2.1.el6.x86_64', 'buildtime': 1342732135.0, 'version': '2.6.32'}, 'spice-server': {'release': '10.el6', 'buildtime': '1340343617', 'version': '0.10.1'}, 'vdsm': {'release': '113.1.el6', 'buildtime': '1344576418', 'version': '4.9'}, 'qemu-kvm': {'release': '2.295.el6_3.1', 'buildtime': '1343643134', 'version': '0.12.1.2'}, 'libvirt': {'release': '21.el6_3.3', 'buildtime': '1342618149', 'version': '0.9.10'}, 'qemu-img': {'release': '2.295.el6_3.1', 'buildtime': '1343643134', 'version': '0.12.1.2'}}, 'cpuModel': 'Intel(R) Xeon(R) CPU E5540 @ 2.53GHz', 'hooks': {}, 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'addr': '192.168.1.22', 'cfg': {'IPV6INIT': 'no', 'IPADDR': '192.168.1.22', 'DELAY': '0', 'PREFIX': '24', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ports': ['eth0'], 'netmask': '255.255.255.0', 'stp': 'off', 'gateway': '192.168.1.1'}, 'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'ports': ['virbr0-nic'],
'netmask': '255.255.255.0', 'stp': 'on', 'gateway': '0.0.0.0'}}, 'uuid': 'FF200008-FFFF-FFFF-FFFF-00212883B51A_00:21:28:83:B5:16', 'nics': {'eth3': {'hwaddr': '00:21:28:83:B5:19', 'netmask': '', 'speed': 0, 'addr': ''}, 'eth2': {'hwaddr': '00:21:28:83:B5:18', 'netmask': '', 'speed': 0, 'addr': ''}, 'eth1': {'hwaddr': '00:21:28:83:B5:17', 'netmask': '', 'speed': 0, 'addr': ''}, 'eth0': {'hwaddr': '00:21:28:83:B5:16', 'netmask': '', 'speed': 1000, 'addr': ''}}, 'software_revision': '113.1', 'management_ip': '', 'clusterLevels': ['3.0'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_486,model_pentium,model_pentium2,model_pentium3,model_pentiumpro,model_qemu32,model_cpu64-rhel5,model_cpu64-rhel6,model_kvm32,model_coreduo,model_kvm64,model_core2duo,model_n270,model_Conroe,model_Penryn,model_Nehalem,model_Opteron_G1', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:decf7eb6ff3f', 'memSize': '36140', 'reservedMem': '256', 'bondings': {'bond4': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []}, 'bond0': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []}, 'bond1': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []}, 'bond2': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []}, 'bond3': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []}}, 'software_version': '4.9', 'cpuSpeed': '1600.000', 'cpuSockets': '2', 'vlans': {}, 'cpuCores': '8', 'kvmEnabled': 'true', 'guestOverhead': '65', 'supportedRHEVMs': ['3.0'], 'version_name': 'Snow Man', 'emulatedMachines': [u'rhel6.3.0', u'pc', u'rhel6.2.0', 
u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', u'rhel5.4.0'], 'operatingSystem': {'release': '1', 'version': '6.3', 'name': 'RHEL'}}} Thread-14::DEBUG::2012-08-10 00:46:26,335::clientIF::54::vds::(wrapper) [192.168.1.20]::call getVdsCapabilities with () {} Thread-14::DEBUG::2012-08-10 00:46:26,353::utils::579::Storage.Misc.excCmd::(execCmd) '/bin/rpm -q --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" qemu-kvm' (cwd None)
- DHC
On Fri, Aug 10, 2012 at 6:41 AM, Itamar Heim <iheim@redhat.com> wrote:
On 08/10/2012 09:16 AM, Michal Skrivanek wrote:
On Aug 10, 2012, at 08:02 , Dead Horse wrote:
With ovirt 3.1 EL 6.3 nodes will only work with a data center set to 3.0
compatibility mode. Is there any reason(s) why EL 6.3+ nodes would not be allowed in a 3.1 compatibility level cluster?
Why would they only work in 3.0 compatibility? 3.1 nodes work in 3.1 cluster level because, well, they are the same version. Or I didn't get the question and please elaborate:)
which version of VDSM do you have on these nodes? what does 'vdsClient -s 0 getVdsCaps' return on them?
This assumes non use of gluster since adding vdsm-gluster to an EL 6 node
requires some work package update/version wise (not impossible).
- DHC
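The field the engine keys off in that capabilities dump is 'clusterLevels'. As a minimal sketch (the trimmed dict literal and the helper function below are hypothetical, not part of vdsm or vdsClient), one can parse the reported capabilities and test for a given cluster level:

```python
import ast

# Trimmed-down capabilities text, shaped like the getVdsCapabilities
# reply in the vdsm.log above (hypothetical sample, not real vdsm output).
caps_text = "{'clusterLevels': ['3.0'], 'software_version': '4.9'}"
caps = ast.literal_eval(caps_text)  # safe parse of the Python-dict repr

def supports_cluster_level(caps, level):
    """True if the host advertises the requested engine cluster level."""
    return level in caps.get('clusterLevels', [])

print(supports_cluster_level(caps, '3.0'))  # True: a 3.0 DC/cluster accepts it
print(supports_cluster_level(caps, '3.1'))  # False: a 3.1 cluster rejects it
```

Here the host reports only '3.0', which is exactly why the engine refuses to place it in a 3.1 cluster.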

On 08/10/2012 04:21 PM, Dead Horse wrote:
When I try adding an EL 6.3 based node to an ovirt 3.1 cluster, ovirt complains that the host must be run in 3.0 compatibility mode and then bombs out.
you should create a 3.0 dc/cluster for this version of vdsm. RHEL does not yet have a VDSM version with 3.1 support.
The version of VDSM on the node is: vdsm-4.9-113.1.el6.x86_64 vdsm-cli-4.9-113.1.el6.x86_64
Other misc versions: kernel-2.6.32-279.2.1.el6.x86_64 libvirt-0.9.10-21.el6_3.3.x86_64 libvirt-python-0.9.10-21.el6_3.3.x86_64 qemu-img-0.12.1.2-2.295.el6_3.1.x86_64 qemu-kvm-0.12.1.2-2.295.el6_3.1.x86_64 gpxe-roms-qemu-0.9.7-6.9.el6.noarch qemu-kvm-tools-0.12.1.2-2.295.el6_3.1.x86_64 spice-server-0.10.1-10.el6.x86_64
'vdsClient -s 0 getVdsCaps' does not seem to work since I have SSL disabled for vdsm so here is the output from the vdsm.log: [getVdsCaps output snipped; quoted in full earlier in the thread]
- DHC
On Fri, Aug 10, 2012 at 6:41 AM, Itamar Heim <iheim@redhat.com> wrote:
On 08/10/2012 09:16 AM, Michal Skrivanek wrote:
On Aug 10, 2012, at 08:02 , Dead Horse wrote:
With ovirt 3.1 EL 6.3 nodes will only work with a data center set to 3.0 compatibility mode. Is there any reason(s) why EL 6.3+ nodes would not be allowed in a 3.1 compatibility level cluster?
Why would they only work in 3.0 compatibility? 3.1 nodes work in 3.1 cluster level because, well, they are the same version. Or I didn't get the question and please elaborate:)
which version of VDSM do you have on these nodes? what does 'vdsClient -s 0 getVdsCaps' return on them?
This assumes non use of gluster since adding vdsm-gluster to an EL 6 node requires some work package update/version wise (not impossible).
- DHC

Rebuilding vdsm ("ovirt-3.1") from git, along with the corresponding dependent packages (e.g. logrotate, gluster, etc.), did the trick to get the node added into the 3.1 cluster. Next I ran into an issue where the current RHEL 6.3 sanlock would not start, because wdmd failed startup when no watchdog was loaded (https://bugzilla.redhat.com/show_bug.cgi?id=832935). Fixed that by building and installing a newer sanlock where this was resolved.
- DHC
On Sat, Aug 11, 2012 at 4:31 PM, Itamar Heim <iheim@redhat.com> wrote:
On 08/10/2012 04:21 PM, Dead Horse wrote:
When I try adding an EL 6.3 based node to an ovirt 3.1 cluster, ovirt complains that the host must be run in 3.0 compatibility mode and then bombs out.
you should create a 3.0 dc/cluster for this version of vdsm. RHEL does not yet have a VDSM version with 3.1 support.
The version of VDSM on the node is: vdsm-4.9-113.1.el6.x86_64 vdsm-cli-4.9-113.1.el6.x86_64
Other misc versions: kernel-2.6.32-279.2.1.el6.x86_64 libvirt-0.9.10-21.el6_3.3.x86_64 libvirt-python-0.9.10-21.el6_3.3.x86_64 qemu-img-0.12.1.2-2.295.el6_3.1.x86_64 qemu-kvm-0.12.1.2-2.295.el6_3.1.x86_64 gpxe-roms-qemu-0.9.7-6.9.el6.noarch qemu-kvm-tools-0.12.1.2-2.295.el6_3.1.x86_64 spice-server-0.10.1-10.el6.x86_64
'vdsClient -s 0 getVdsCaps' does not seem to work since I have SSL disabled for vdsm so here is the output from the vdsm.log: [getVdsCaps output snipped; quoted in full earlier in the thread]
- DHC
On Fri, Aug 10, 2012 at 6:41 AM, Itamar Heim <iheim@redhat.com> wrote:
On 08/10/2012 09:16 AM, Michal Skrivanek wrote:
On Aug 10, 2012, at 08:02 , Dead Horse wrote:
With ovirt 3.1 EL 6.3 nodes will only work with a data center set to 3.0 compatibility mode. Is there any reason(s) why EL 6.3+ nodes would not be allowed in a 3.1 compatibility level cluster?
Why would they only work in 3.0 compatibility? 3.1 nodes work in 3.1 cluster level because, well, they are the same version. Or I didn't get the question and please elaborate:)
which version of VDSM do you have on these nodes? what does 'vdsClient -s 0 getVdsCaps' return on them?
This assumes non use of gluster since adding vdsm-gluster to an EL 6 node requires some work package update/version wise (not impossible).
- DHC
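What changed after the rebuild is the vdsm version: the stock EL 6.3 build is vdsm-4.9, while the ovirt-3.1 git branch builds 4.10.x with 3.1 cluster support. A small hypothetical sketch (the helper below is illustrative, not engine code) of why a version gate like this needs numeric rather than string comparison:

```python
def parse_version(v):
    """Split a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split('.'))

# vdsm-4.9 (stock EL 6.3) vs. the 4.10.x built from the ovirt-3.1 git
# branch (version numbers taken from the thread above)
el63_vdsm = parse_version('4.9')
git_vdsm = parse_version('4.10')

print(el63_vdsm < git_vdsm)  # True: tuple comparison handles 9 < 10
print('4.9' < '4.10')        # False: naive string comparison gets it wrong
```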

On 08/10/2012 02:02 AM, Dead Horse wrote:
With ovirt 3.1 EL 6.3 nodes will only work with a data center set to 3.0 compatibility mode. Is there any reason(s) why EL 6.3+ nodes would not be allowed in a 3.1 compatibility level cluster? This assumes non use of gluster since adding vdsm-gluster to an EL 6 node requires some work package update/version wise (not impossible).
- DHC
What repo are you using? What build number? I have used both gluster and 3.1 compatibility nodes on top of CentOS 6.3. http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-using-...
-- Thanks Robert Middleswarth @rmiddle (twitter/IRC)

I am not using the dreyou repo. Rather a pure EL 6.3 and its associated packages.
- DHC
On Fri, Aug 10, 2012 at 8:07 AM, Robert Middleswarth <robert@middleswarth.net> wrote:
On 08/10/2012 02:02 AM, Dead Horse wrote:
With ovirt 3.1 EL 6.3 nodes will only work with a data center set to 3.0 compatibility mode. Is there any reason(s) why EL 6.3+ nodes would not be allowed in a 3.1 compatibility level cluster? This assumes non use of gluster since adding vdsm-gluster to an EL 6 node requires some work package update/version wise (not impossible).
- DHC
What repo are you using? What build number? I have used both gluster and 3.1 compatibility nodes on top of CentOS 6.3. http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-using-...
-- Thanks Robert Middleswarth @rmiddle (twitter/IRC)

----- Original Message -----
From: "Dead Horse" <deadhorseconsulting@gmail.com>
To: "Robert Middleswarth" <robert@middleswarth.net>
Cc: users@ovirt.org
Sent: Friday, August 10, 2012 9:22:30 AM
Subject: Re: [Users] EL 6.3+ nodes and 3.1 compatibility?
I am not using the dreyou repo. Rather a pure EL 6.3 and its associated packages.
where are you getting your VDSM packages from - because the 3.1 packages aren't in RHEL or its clones.
- DHC
On Fri, Aug 10, 2012 at 8:07 AM, Robert Middleswarth <robert@middleswarth.net> wrote:
On 08/10/2012 02:02 AM, Dead Horse wrote:
With ovirt 3.1 EL 6.3 nodes will only work with a data center set to 3.0 compatibility mode. Is there any reason(s) why EL 6.3+ nodes would not be allowed in a 3.1 compatibility level cluster? This assumes non use of gluster since adding vdsm-gluster to an EL 6 node requires some work package update/version wise (not impossible).
- DHC
What repo are you using? What build number? I have used both gluster and 3.1 compatibility nodes on top of CentOS 6.3. http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-using-...
--
Thanks
Robert Middleswarth
@rmiddle (twitter/IRC)

This is a pure RHEL 6.3 node. Given that, I gather that in order to make this work I need to build the newer vdsm 4.10.x and update/build the corresponding dependent packages?
- DHC
On Fri, Aug 10, 2012 at 8:23 AM, Andrew Cathrow <acathrow@redhat.com> wrote:
----- Original Message -----
From: "Dead Horse" <deadhorseconsulting@gmail.com>
To: "Robert Middleswarth" <robert@middleswarth.net>
Cc: users@ovirt.org
Sent: Friday, August 10, 2012 9:22:30 AM
Subject: Re: [Users] EL 6.3+ nodes and 3.1 compatibility?
I am not using the dreyou repo. Rather a pure EL 6.3 and its associated packages.
where are you getting your VDSM packages from - because the 3.1 packages aren't in RHEL or its clones.
- DHC
On Fri, Aug 10, 2012 at 8:07 AM, Robert Middleswarth <robert@middleswarth.net> wrote:
On 08/10/2012 02:02 AM, Dead Horse wrote:
With ovirt 3.1 EL 6.3 nodes will only work with a data center set to 3.0 compatibility mode. Is there any reason(s) why EL 6.3+ nodes would not be allowed in a 3.1 compatibility level cluster? This assumes non use of gluster since adding vdsm-gluster to an EL 6 node requires some work package update/version wise (not impossible).
- DHC
What repo are you using? What build number? I have used both gluster and 3.1 compatibility nodes on top of CentOS 6.3.
http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-using-...
--
Thanks
Robert Middleswarth
@rmiddle (twitter/IRC)

Yes, this VDSM in EL 6.3 only knows about the 3.0 cluster level.
----- Original Message -----
From: "Dead Horse" <deadhorseconsulting@gmail.com>
To: "Andrew Cathrow" <acathrow@redhat.com>
Cc: users@ovirt.org, "Robert Middleswarth" <robert@middleswarth.net>
Sent: Friday, August 10, 2012 9:28:23 AM
Subject: Re: [Users] EL 6.3+ nodes and 3.1 compatibility?
This is a pure RHEL 6.3 node. Given that I gather in order to make this work I need to build the newer vdsm 4.10.x and update/build the other according dependent packages?
- DHC
On Fri, Aug 10, 2012 at 8:23 AM, Andrew Cathrow <acathrow@redhat.com> wrote:
----- Original Message -----
From: "Dead Horse" <deadhorseconsulting@gmail.com>
To: "Robert Middleswarth" <robert@middleswarth.net>
Cc: users@ovirt.org
Sent: Friday, August 10, 2012 9:22:30 AM
Subject: Re: [Users] EL 6.3+ nodes and 3.1 compatibility?
I am not using the dreyou repo. Rather a pure EL 6.3 and its associated packages.
where are you getting your VDSM packages from - because the 3.1 packages aren't in RHEL or its clones.
- DHC
On Fri, Aug 10, 2012 at 8:07 AM, Robert Middleswarth <robert@middleswarth.net> wrote:
On 08/10/2012 02:02 AM, Dead Horse wrote:
With ovirt 3.1 EL 6.3 nodes will only work with a data center set to 3.0 compatibility mode. Is there any reason(s) why EL 6.3+ nodes would not be allowed in a 3.1 compatibility level cluster? This assumes non use of gluster since adding vdsm-gluster to an EL 6 node requires some work package update/version wise (not impossible).
- DHC
What repo are you using? What build number? I have used both gluster and 3.1 compatibility nodes on top of CentOS 6.3.
http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-using-...
--
Thanks
Robert Middleswarth
@rmiddle (twitter/IRC)
participants (5)
- Andrew Cathrow
- Dead Horse
- Itamar Heim
- Michal Skrivanek
- Robert Middleswarth