Rebuilding vdsm from the "ovirt-3.1" git branch, along with the
corresponding dependent packages (e.g. logrotate, gluster), did the trick
to get the node added into the 3.1 cluster.
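Roughly what that looked like, as a sketch (clone URL, build targets and
dependency handling here are from memory, so adjust to taste):

    # grab vdsm and switch to the 3.1 branch
    git clone git://gerrit.ovirt.org/vdsm
    cd vdsm
    git checkout ovirt-3.1
    # install the build deps named in the spec file, then build RPMs
    ./autogen.sh --system
    make rpm    # assuming the branch provides this target
    # install the resulting vdsm/vdsm-cli RPMs on the node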
Next I ran into an issue with the current RHEL 6.3 sanlock not starting:
wdmd refused to start because no watchdog module was loaded. Fixed that
by building and installing a newer sanlock release where this was fixed.
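(An alternative that may avoid the rebuild, assuming the box has no
hardware watchdog driver, is to load the generic software watchdog so
wdmd has a /dev/watchdog to talk to:)

    # see whether any watchdog module is already loaded
    lsmod | grep -i -e wdt -e dog
    # if not, the software watchdog usually suffices
    modprobe softdog
    # then bring the daemons up in order
    service wdmd start
    service sanlock start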
- DHC
On Sat, Aug 11, 2012 at 4:31 PM, Itamar Heim <iheim(a)redhat.com> wrote:
On 08/10/2012 04:21 PM, Dead Horse wrote:
> When I try adding an EL 6.3-based node to an ovirt 3.1 cluster, ovirt
> complains that the host must be run in 3.0 compatibility mode and then
> bombs out.
>
you should create a 3.0 dc/cluster for this version of vdsm.
RHEL does not yet have a VDSM version with 3.1 support.
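(The host itself reports which cluster levels it can join; a quick check,
dropping the -s if SSL is disabled on vdsm:

    vdsClient -s 0 getVdsCaps | grep -i clusterLevels

on this vdsm build it comes back as ['3.0'], as the log below confirms.)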
> The version of VDSM on the node is:
> vdsm-4.9-113.1.el6.x86_64
> vdsm-cli-4.9-113.1.el6.x86_64
>
> Other misc versions:
> kernel-2.6.32-279.2.1.el6.x86_64
> libvirt-0.9.10-21.el6_3.3.x86_64
> libvirt-python-0.9.10-21.el6_3.3.x86_64
> qemu-img-0.12.1.2-2.295.el6_3.1.x86_64
> qemu-kvm-0.12.1.2-2.295.el6_3.1.x86_64
> gpxe-roms-qemu-0.9.7-6.9.el6.noarch
> qemu-kvm-tools-0.12.1.2-2.295.el6_3.1.x86_64
> spice-server-0.10.1-10.el6.x86_64
>
> 'vdsClient -s 0 getVdsCaps' does not seem to work since I have SSL
> disabled for vdsm, so here is the output from the vdsm.log:
> Thread-13::DEBUG::2012-08-10 00:46:26,202::clientIF::59::vds::(wrapper)
> return getVdsCapabilities with {'status': {'message': 'Done', 'code': 0},
> 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
> 'iqn.1994-05.com.redhat:decf7eb6ff3f'}], 'FC': []}, 'packages2':
> {'kernel': {'release': '279.2.1.el6.x86_64', 'buildtime': 1342732135.0,
> 'version': '2.6.32'}, 'spice-server': {'release': '10.el6', 'buildtime':
> '1340343617', 'version': '0.10.1'}, 'vdsm': {'release': '113.1.el6',
> 'buildtime': '1344576418', 'version': '4.9'}, 'qemu-kvm': {'release':
> '2.295.el6_3.1', 'buildtime': '1343643134', 'version': '0.12.1.2'},
> 'libvirt': {'release': '21.el6_3.3', 'buildtime': '1342618149',
> 'version': '0.9.10'}, 'qemu-img': {'release': '2.295.el6_3.1',
> 'buildtime': '1343643134', 'version': '0.12.1.2'}}, 'cpuModel':
> 'Intel(R) Xeon(R) CPU E5540 @ 2.53GHz', 'hooks': {}, 'vmTypes': ['kvm'],
> 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'addr':
> '192.168.1.22', 'cfg': {'IPV6INIT': 'no', 'IPADDR': '192.168.1.22',
> 'DELAY': '0', 'PREFIX': '24', 'BOOTPROTO': 'none', 'STP': 'off',
> 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ports':
> ['eth0'], 'netmask': '255.255.255.0', 'stp': 'off', 'gateway':
> '192.168.1.1'}, 'virbr0': {'addr': '192.168.122.1', 'cfg': {}, 'ports':
> ['virbr0-nic'], 'netmask': '255.255.255.0', 'stp': 'on', 'gateway':
> '0.0.0.0'}}, 'uuid':
> 'FF200008-FFFF-FFFF-FFFF-00212883B51A_00:21:28:83:B5:16', 'nics':
> {'eth3': {'hwaddr': '00:21:28:83:B5:19', 'netmask': '', 'speed': 0,
> 'addr': ''}, 'eth2': {'hwaddr': '00:21:28:83:B5:18', 'netmask': '',
> 'speed': 0, 'addr': ''}, 'eth1': {'hwaddr': '00:21:28:83:B5:17',
> 'netmask': '', 'speed': 0, 'addr': ''}, 'eth0': {'hwaddr':
> '00:21:28:83:B5:16', 'netmask': '', 'speed': 1000, 'addr': ''}},
> 'software_revision': '113.1', 'management_ip': '', 'clusterLevels':
> ['3.0'], 'cpuFlags':
> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,
> clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,
> constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,
> aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,
> dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,
> ept,vpid,model_486,model_pentium,model_pentium2,model_pentium3,
> model_pentiumpro,model_qemu32,model_cpu64-rhel5,model_cpu64-rhel6,
> model_kvm32,model_coreduo,model_kvm64,model_core2duo,model_n270,
> model_Conroe,model_Penryn,model_Nehalem,model_Opteron_G1',
> 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:decf7eb6ff3f', 'memSize':
> '36140', 'reservedMem': '256', 'bondings': {'bond4': {'hwaddr':
> '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves':
> []}, 'bond0': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '',
> 'addr': '', 'slaves': []}, 'bond1': {'hwaddr': '00:00:00:00:00:00',
> 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []}, 'bond2': {'hwaddr':
> '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves':
> []}, 'bond3': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '',
> 'addr': '', 'slaves': []}}, 'software_version': '4.9', 'cpuSpeed':
> '1600.000', 'cpuSockets': '2', 'vlans': {}, 'cpuCores': '8',
> 'kvmEnabled': 'true', 'guestOverhead': '65', 'supportedRHEVMs': ['3.0'],
> 'version_name': 'Snow Man', 'emulatedMachines': [u'rhel6.3.0', u'pc',
> u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4',
> u'rhel5.4.0'], 'operatingSystem': {'release': '1', 'version': '6.3',
> 'name': 'RHEL'}}}
> Thread-14::DEBUG::2012-08-10 00:46:26,335::clientIF::54::vds::(wrapper)
> [192.168.1.20]::call getVdsCapabilities with () {}
> Thread-14::DEBUG::2012-08-10
> 00:46:26,353::utils::579::Storage.Misc.excCmd::(execCmd) '/bin/rpm -q
> --qf "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" qemu-kvm' (cwd
> None)
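> (Side note: with SSL disabled, the plain-transport form should still
> work locally, i.e. just drop the -s flag:
>
>     vdsClient 0 getVdsCaps
>
> which queries vdsm over unencrypted XML-RPC.)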
>
> - DHC
>
>
> On Fri, Aug 10, 2012 at 6:41 AM, Itamar Heim <iheim(a)redhat.com> wrote:
>
> On 08/10/2012 09:16 AM, Michal Skrivanek wrote:
>
> On Aug 10, 2012, at 08:02, Dead Horse wrote:
>
> With ovirt 3.1, EL 6.3 nodes will only work with a data
> center set to 3.0 compatibility mode. Is there any reason
> why EL 6.3+ nodes would not be allowed in a 3.1
> compatibility level cluster?
>
> Why would they only work in 3.0 compatibility? 3.1 nodes work at
> the 3.1 cluster level because, well, they are the same version.
> Or maybe I didn't get the question; please elaborate :)
>
>
> which version of VDSM do you have on these nodes?
> what does 'vdsClient -s 0 getVdsCaps' return on them?
>
>
> This assumes gluster is not used, since adding vdsm-gluster to
> an EL 6 node requires some package update/version work
> (not impossible).
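> (For a quick inventory of where a node starts from, something along
> the lines of:
>
>     rpm -q vdsm vdsm-gluster glusterfs glusterfs-fuse
>
> lining those versions up is the work referred to above.)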
>
> - DHC
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users