On 01/30/2012 07:15 AM, Brown, Chris (GE Healthcare) wrote:
If you would like to use SL 6.2 as a node:
- Change the contents of /etc/redhat-release to "Red Hat Enterprise Linux Server release 6.2 (Santiago)"
- Build the latest version of RHEV/RHEL 6.x vdsm srpm here:
http://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/
- Add the resulting rpms to a yum repo or create one using createrepo and serve it up
where your SL node can get to it
- Register that repo with yum, e.g. add a repo file in /etc/yum.repos.d pointing to it
- Finally you will have to edit a value in the postgres database to actually get ovirt to
execute vms on the SL (EL6) node once it is added
- since ovirt is built around FC16 it will pass "emulatedmachine" or the
"-M" value to KVM as "pc-0.14" whereas on EL6 the value passed is
"rhel6.2.0"
- thus to change it to what we desire run the following:
  psql -U postgres -d engine -c "update vdc_options set option_value='rhel6.2.0' where option_name='EmulatedMachine' and version='3.0';"
- after that make sure to restart the jboss service, as the previous values are cached (a consolidated sketch of these steps follows below)
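
A minimal consolidated sketch of the repo and database steps above. The directory paths, repo URL, and the jboss service name are illustrative assumptions, not values taken from this thread:

  # On the machine serving the packages: publish the rebuilt vdsm rpms as a yum repo
  mkdir -p /var/www/html/vdsm-el6                      # illustrative path
  cp ~/rpmbuild/RPMS/*/vdsm*.rpm /var/www/html/vdsm-el6/
  createrepo /var/www/html/vdsm-el6

  # On the SL node: register the repo with yum
  cat > /etc/yum.repos.d/vdsm-el6.repo <<'EOF'
  [vdsm-el6]
  name=vdsm rebuilt for EL6
  baseurl=http://yourserver.example.com/vdsm-el6       # illustrative URL
  enabled=1
  gpgcheck=0
  EOF

  # On the engine host: point oVirt at the EL6 machine type, then restart jboss
  psql -U postgres -d engine -c "update vdc_options set option_value='rhel6.2.0' where option_name='EmulatedMachine' and version='3.0';"
  service jboss-as restart      # the jboss service name on your install may differ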
- Chris
________________________________
From: users-bounces(a)ovirt.org on behalf of Dan Kenigsberg
Sent: Sun 1/29/2012 3:53 AM
To: ??????
Cc: users(a)ovirt.org
Subject: Re: [Users] Support by Scientific linux?
On Sun, Jan 29, 2012 at 05:12:09AM +0400, ?????? wrote:
> Hi.
>
> I am testing oVirt on Scientific Linux 6.1.
>
> Compiling oVirt-engine from source
> (http://www.ovirt.org/wiki/Building_Ovirt_Engine) succeeds.
>
> However, when I add a node, the operating system is determined incorrectly:
>
> OS Version: unknown.
>
> The nodes were installed following the instructions at:
> http://www.ovirt.org/wiki/Building_Ovirt_Engine
>
> Analyzing the log files, I found an incorrect answer from vdsm (node status
> NonOperational, nonOperationalReason = VERSION_INCOMPATIBLE_WITH_CLUSTER).
I agree with Dan's and Chris's comments; however, the reason engine-core
decided to move the host to non-operational is that it doesn't meet the
cluster's minimum requirements. Any chance to see a full engine-core log from
when you try to activate the host?
>
> /var/log/vdsm/vdsm.log:
>
> Thread-16::DEBUG::2012-01-28 14:08:09,200::clientIF::48::vds::(wrapper)
> return getVdsCapabilities with {'status': {'message': 'Done', 'code': 0},
> 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:4052e3fcadb'}], 'FC': []},
> 'packages2': {'kernel': {'release': '220.el6.x86_64', 'buildtime': '0', 'version': '2.6.32'},
> 'spice-server': {'release': '5.el6', 'buildtime': '1323492018', 'version': '0.8.2'},
> 'vdsm': {'release': '63.el6', 'buildtime': '1327784725', 'version': '4.9'},
> 'qemu-kvm': {'release': '2.209.el6_2.4', 'buildtime': '1327361568', 'version': '0.12.1.2'},
> 'libvirt': {'release': '23.el6', 'buildtime': '1323231757', 'version': '0.9.4'},
> 'qemu-img': {'release': '2.209.el6_2.4', 'buildtime': '1327361568', 'version': '0.12.1.2'}},
> 'cpuModel': 'Intel(R) Xeon(R) CPU 5140 @ 2.33GHz', 'hooks': {},
> 'networks': {'virbr0': {'cfg': {}, 'netmask': '255.255.255.0', 'stp': 'on', 'ports': ['virbr0-nic'], 'addr': '192.168.122.1'}},
> 'vmTypes': ['kvm', 'qemu'],
> 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,lahf_lm,dts,tpr_shadow,model_486,model_pentium,model_pentium2,model_pentium3,model_pentiumpro,model_qemu32,model_coreduo,model_core2duo,model_n270,model_Conroe,model_Opteron_G1',
> 'cpuSockets': '1',
> 'uuid': '343C9406-3478-4923-3478-492339393407_00:1c:c4:74:a0:60',
> 'lastClientIface': 'eth0',
> 'nics': {'eth1': {'hwaddr': '00:1C:C4:74:A0:61', 'netmask': '', 'speed': 0, 'addr': ''},
> 'eth0': {'hwaddr': '00:1C:C4:74:A0:60', 'netmask': '255.255.255.0', 'speed': 1000, 'addr': '10.1.20.10'}},
> 'software_revision': '63', 'management_ip': '',
> 'clusterLevels': ['2.3'], 'supportedProtocols': ['2.2', '2.3'],
> 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:4052e3fcadb',
> 'memSize': '15949', 'reservedMem': '256',
> 'bondings': {'bond4': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []},
> 'bond0': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []},
> 'bond1': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []},
> 'bond2': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []},
> 'bond3': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []}},
> 'software_version': '4.9', 'cpuSpeed': '2333.331',
> 'version_name': 'Snow Man', 'vlans': {}, 'cpuCores': 2, 'kvmEnabled': 'true',
> 'guestOverhead': '65', 'supportedRHEVMs': ['2.3'],
> 'emulatedMachines': ['pc', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'],
> 'operatingSystem': {'release': '', 'version': '', 'name': 'unknown'},
> 'lastClient': '10.1.20.12'}}
>
> Note the line: 'operatingSystem': {'release': '', 'version': '', 'name': 'unknown'}.
>
>
>
>
>
> The output of commands run on the node:
>
> [root@node ~]# lsb_release -a
>
> LSB Version: :core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
>
> Distributor ID: Scientific
>
> Description: Scientific Linux release 6.2 rolling (Carbon)
>
> Release: 6.2
>
> Codename: Carbon
>
> [root@lnode ~]# cat /etc/redhat-release
>
> Scientific Linux release 6.1 (Carbon)
>
>
>
>
>
> How do I get vdsm to detect the right operating system?
>
>
>
> p.s. Sorry for my english (google translate)
We have plenty of native Russian speakers on this list ;-)
The short answer is that no one has written a patch for Vdsm to let it
recognize Scientific Linux. You are more than welcome to write such a
patch.
I suspect that you would need a (yet unpublished) SL6.2 in order to have
oVirt work properly. Vdsm requires several packages by version for a
reason.
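
For anyone tempted to try, a rough sketch of what such a detection fallback might look like is below. It is in Python because Vdsm is, but the function name, the regular expression and the returned dict shape are illustrative assumptions, not the actual Vdsm code:

  # Hypothetical illustration only -- not the real Vdsm source.
  import re

  def detect_operating_system(release_file='/etc/redhat-release'):
      """Return a dict shaped like the 'operatingSystem' capability above."""
      unknown = {'name': 'unknown', 'version': '', 'release': ''}
      try:
          with open(release_file) as f:
              line = f.read().strip()
      except IOError:
          return unknown
      # Matches e.g. "Scientific Linux release 6.1 (Carbon)" or
      # "Red Hat Enterprise Linux Server release 6.2 (Santiago)"
      m = re.search(r'(Scientific Linux|Red Hat Enterprise Linux) .*?'
                    r'release (\d+)\.(\d+)', line)
      if m is None:
          return unknown
      name = 'RHEL' if m.group(1).startswith('Red Hat') else 'ScientificLinux'
      return {'name': name, 'version': m.group(2), 'release': m.group(3)}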
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users