[Users] oVirt Host (3.0) compatibility issue with Cluster (3.1)
Keyur Makwana
kurmakwana at gmail.com
Fri Nov 9 15:31:06 UTC 2012
Hello,
Thanks for the update mates.
Silly me! I loaded the wrong ISO, apparently. I shall update once I am
through with the next installation.
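
For reference, once the reinstall is done I will sanity-check which image
the node is actually running before adding it again (both commands below
should exist on a stock node, as far as I know):

  cat /etc/redhat-release    # should name an oVirt Node build, not "Red Hat Enterprise Virtualization Hypervisor"
  vdsClient -s 0 getVdsCaps | grep clusterLevels    # should list the level the cluster expects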
Thanks again!
Regards,
Keyur
On Fri, Nov 9, 2012 at 8:49 PM, Itamar Heim <iheim at redhat.com> wrote:
> On 11/09/2012 03:33 PM, Keyur Makwana wrote:
>
>> Hello,
>>
>> Thanks for having a look at this issue.
>>
>
> the cluster level supported by this node is 3.0.
> you should change your cluster level to 3.0 to use it (your DC should also
> be 3.0. you may need to create a new DC, since you can't lower a DC level
> if it has clusters in it iirc).
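>
> if you go the new-DC route, something like this against the engine's
> REST api should do it (a rough sketch only - ENGINE, PASSWORD, the port,
> the storage type and the cpu family below are placeholders, adjust them
> to your setup):
>
>   # create a new 3.0 data center (NFS storage type just as an example)
>   curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
>        -X POST https://ENGINE:8443/api/datacenters \
>        -d '<data_center><name>dc30</name><storage_type>NFS</storage_type><version major="3" minor="0"/></data_center>'
>
>   # and a 3.0 cluster inside it
>   curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
>        -X POST https://ENGINE:8443/api/clusters \
>        -d '<cluster><name>cl30</name><cpu id="Intel Westmere Family"/><data_center><name>dc30</name></data_center><version major="3" minor="0"/></cluster>'
>
> (the same can be done from the webadmin ui - the new DC and cluster
> dialogs have a compatibility version dropdown.)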
>
> but you are using an incompatible engine (ovirt) and node (rhev).
> they have different compatibility levels even if the numbers are the same.
>
>
>
>> On ovirt-engine, neither of the commands worked.
>>
>> Here is the output from the hypervisor.
>>
>> [root at h1 ~]# vdsClient -s 0 getVdsCaps
>> HBAInventory = {'iSCSI': [{'InitiatorName':
>> 'iqn.1994-05.com.redhat:5e25411e9039'}], 'FC': [{'wwpn':
>> '10000000c9cb2826', 'wwnn': '20000000c9cb2826', 'model': 'LPe1205-CIOv -
>> Emulex LPe1205-CIOv 8Gb 2-port PCIe Fibre Channel Adapter'}, {'wwpn':
>> '10000000c9cb2827', 'wwnn': '20000000c9cb2827', 'model': 'LPe1205-CIOv -
>> Emulex LPe1205-CIOv 8Gb 2-port PCIe Fibre Channel Adapter'}]}
>> ISCSIInitiatorName = iqn.1994-05.com.redhat:5e25411e9039
>> bondings = {'bond4': {'hwaddr': '00:00:00:00:00:00', 'cfg': {},
>> 'netmask': '', 'addr': '', 'slaves': []}, 'bond0': {'hwaddr':
>> '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves':
>> []}, 'bond1': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '',
>> 'addr': '', 'slaves': []}, 'bond2': {'hwaddr': '00:00:00:00:00:00',
>> 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []}, 'bond3': {'hwaddr':
>> '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []}}
>> clusterLevels = ['3.0']
>> cpuCores = 12
>> cpuFlags =
>> fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,
>> cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
>> tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_
>> perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,
>> aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,
>> est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,aes,
>> lahf_lm,arat,epb,dts,tpr_shadow,vnmi,flexpriority,ept,
>> vpid,model_486,model_pentium,model_pentium2,model_pentium3,
>> model_pentiumpro,model_qemu32,model_coreduo,model_core2duo,
>> model_n270,model_Conroe,model_Penryn,model_Nehalem,model_
>> Westmere,model_Opteron_G1
>> cpuModel = Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
>> cpuSockets = 2
>> cpuSpeed = 2400.276
>> emulatedMachines = ['rhel6.2.0', 'pc', 'rhel6.1.0', 'rhel6.0.0',
>> 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0']
>> guestOverhead = 65
>> hooks = {'before_vm_start': {'50_vhostmd': {'md5':
>> '1635e45584995d9ad2629939cb0bcdbf'}}, 'after_vm_destroy':
>> {'50_vhostmd':
>> {'md5': '7e7509430304099f38e4a9de966af090'}},
>> 'before_vm_dehibernate':
>> {'50_vhostmd': {'md5': '1635e45584995d9ad2629939cb0bcdbf'}},
>> 'before_vm_migrate_destination': {'50_vhostmd': {'md5':
>> '1635e45584995d9ad2629939cb0bcdbf'}}}
>> kvmEnabled = true
>> lastClient = 172.29.5.200
>> lastClientIface = rhevm
>> management_ip = 172.29.81.141
>> memSize = 64420
>> networks = {'rhevm': {'addr': '172.29.81.141', 'cfg': {'IPV6FORWARDING':
>> 'no', 'IPV6INIT': 'no', 'SKIPLIBVIRT': 'True', 'IPADDR':
>> '172.29.81.141', 'ONBOOT': 'yes', 'DELAY': '0', 'DEVICE': 'rhevm',
>> 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'IPV6_AUTOCONF':
>> 'no', 'PEERNTP': 'yes', 'TYPE': 'Bridge', 'GATEWAY': '172.29.81.1'},
>> 'ports': ['eth8'], 'netmask': '255.255.255.0', 'stp': 'off', 'gateway':
>> '172.29.81.1'}}
>> nics = {'eth9': {'hwaddr': '5C:F3:FC:98:19:82', 'netmask': '', 'speed':
>> 0, 'addr': ''}, 'eth8': {'hwaddr': '5C:F3:FC:98:19:80', 'netmask': '',
>> 'speed': 1000, 'addr': ''}, 'eth7': {'hwaddr': '00:00:C9:D2:16:73',
>> 'netmask': '', 'speed': 0, 'addr': ''}, 'eth6': {'hwaddr':
>> '00:00:C9:D2:16:6F', 'netmask': '', 'speed': 0, 'addr': ''}, 'eth5':
>> {'hwaddr': '00:00:C9:D2:16:72', 'netmask': '', 'speed': 0, 'addr': ''},
>> 'eth4': {'hwaddr': '00:00:C9:D2:16:6E', 'netmask': '', 'speed': 0,
>> 'addr': ''}, 'eth3': {'hwaddr': '00:00:C9:D2:16:71', 'netmask': '',
>> 'speed': 0, 'addr': ''}, 'eth2': {'hwaddr': '00:00:C9:D2:16:6D',
>> 'netmask': '', 'speed': 0, 'addr': ''}, 'eth1': {'hwaddr':
>> '00:00:C9:D2:16:70', 'netmask': '', 'speed': 0, 'addr': ''}, 'eth0':
>> {'hwaddr': '00:00:C9:D2:16:6C', 'netmask': '', 'speed': 0, 'addr': ''}}
>> operatingSystem = {'release': '20120510.0.el6_2', 'version': '6.2',
>> 'name': 'RHEV Hypervisor'}
>> packages2 = {'kernel': {'release': '220.17.1.el6.x86_64', 'buildtime':
>> 1335447433.0, 'version': '2.6.32'}, 'spice-server': {'release': '5.el6',
>> 'buildtime': '1319663438', 'version': '0.8.2'}, 'vdsm': {'release':
>> '112.12.el6_2', 'buildtime': '1334495504', 'version': '4.9'},
>> 'qemu-kvm': {'release': '2.209.el6_2.5', 'buildtime': '1334776518',
>> 'version': '0.12.1.2'}, 'libvirt': {'release': '23.el6_2.8',
>> 'buildtime': '1333958926', 'version': '0.9.4'}, 'qemu-img': {'release':
>> '2.209.el6_2.5', 'buildtime': '1334776518', 'version': '0.12.1.2'}}
>> reservedMem = 256
>> software_revision = 112.12
>> software_version = 4.9
>> supportedProtocols = ['2.2', '2.3']
>> supportedRHEVMs = ['3.0']
>> uuid = C12E325E-3D69-11E1-91C1-5CF3FC981980_00:00:C9:D2:16:6C
>> version_name = Snow Man
>> vlans = {}
>> vmTypes = ['kvm']
>> [root at h1 ~]#
>>
>> Regards,
>> Keyur
>>
>>
>>
>> On Fri, Nov 9, 2012 at 7:36 PM, Yair Zaslavsky <yzaslavs at redhat.com> wrote:
>>
>> For troubleshooting -
>> Can you run on your host machine
>>
>> vdsClient 0 getVdsCaps (might be vdsClient -s 0 getVdsCaps)
>>
>> and send us the results?
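>>
>> (if the full dump is too long to paste, something like
>>
>> vdsClient -s 0 getVdsCaps | egrep 'clusterLevels|software_version|supportedRHEVMs'
>>
>> should pull out just the compatibility-related fields, assuming
>> plain grep/egrep is available on the node, which it normally is.)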
>>
>>
>> ------------------------------------------------------------------------
>>
>> *From: *"Keyur Makwana" <kurmakwana at gmail.com
>> <mailto:kurmakwana at gmail.com>>
>> *To: *users at ovirt.org <mailto:users at ovirt.org>
>> *Sent: *Friday, November 9, 2012 12:50:22 PM
>> *Subject: *[Users] Ovirt Host (3.0) compatability issue with
>>
>> Cluster (3.1)
>>
>>
>> Hello All,
>>
>> I am new to oVirt and am trying to implement it in my
>> organisation, with a view to deploying a complete open-source
>> virtualization solution instead of VMware or other paid
>> products.
>>
>> I have installed ovirt-engine on Fedora 17, and for the node
>> (hypervisor) I have installed Red Hat Enterprise Virtualization
>> Hypervisor release 6.2 (20120510.0.el6_2).
>>
>>
>> Post-installation the engine started fine, but while trying to
>> add a host it gives me the error below.
>>
>> Host host1 is compatible with versions (3.0) and cannot join
>> Cluster Default which is set to version 3.1.
>>
>> I don't know exactly which details I need to share for
>> troubleshooting, so I have attached a file listing the RPMs
>> installed on the ovirt-engine host and on the node (so nobody
>> has to scroll through the whole thing here).
>>
>> I might be missing something very silly, or this may be a stupid question.
>>
>> oVirt masters: please guide.
>>
>> Thanks in advance,
>> Keyur
>>
>>
>
>