[Users] Network interfaces order

Hi,

Every time I reboot my servers, my network interfaces change order and I have to apply the network configuration again.

Is there any way to make the order of the NICs persistent? I added a udev rule, but when I reboot it disappears.

Best regards,
Ricardo Esteves.

Hi,

On Thursday, 14.06.2012 at 10:53 +0100, Ricardo Esteves wrote:
Every time i reboot my servers, my network interfaces change order and i have to apply network configuration again.
Where does this happen? On oVirt Node or somewhere else?
Anyway to make the order of the nics persistent? I added a udev rule but when I reboot it disappears.
If it's Node, then the appropriate file has to be persisted to survive a reboot (persist MYFILE). Greetings - fabian
Best regards,
Ricardo Esteves.
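On oVirt Node the persist step Fabian mentions would look something like this (a sketch only; the rules file is the one Ricardo describes in his follow-up below):

# After writing /etc/udev/rules.d/70-persistent-net.rules on the Node,
# persist it so that it survives a reboot
persist /etc/udev/rules.d/70-persistent-net.rules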

This happens on the oVirt node.

I've added the file /etc/udev/rules.d/70-persistent-net.rules with the following lines:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="d4:85:64:48:42:c0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="d4:85:64:48:42:c4", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="1c:c1:de:7b:71:30", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="1c:c1:de:7b:71:32", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3"

I made it persistent using the persist command; after the reboot the file is still there, but the order of the NICs still changes :(

-----Original Message-----
From: Fabian Deutsch <fabiand@redhat.com>
To: Ricardo Esteves <ricardo.m.esteves@gmail.com>
Cc: users@ovirt.org
Subject: Re: [Users] Network interfaces order
Date: Thu, 14 Jun 2012 11:56:38 +0200

Hi,

On Thursday, 14.06.2012 at 10:53 +0100, Ricardo Esteves wrote:
Every time i reboot my servers, my network interfaces change order and i have to apply network configuration again.
Where does this happen? On oVirt Node or somewhere else?
Anyway to make the order of the nics persistent? I added a udev rule but when I reboot it disappears.
If it's Node then the appropriate file has to be persisted to survive a reboot. (persist MYFILE) Greetings - fabian
Best regards,
Ricardo Esteves.

Hey,

On Thursday, 14.06.2012 at 11:29 +0100, Ricardo Esteves wrote:
Made it persistent using the command persist, after the reboot now the file is still there but the order of the nics still changes :(
The problem is deep in the boot process. But we've already got a bug entry for that problem: https://bugzilla.redhat.com/show_bug.cgi?id=824595

Greetings
fabian

[Users] Delete Network

Hi,

I have configured 3 networks with VLANs on one of my NICs (attached picture). Can someone explain to me how to delete the VI and VI_WIFI networks?

Best regards,
Ricardo Esteves.

Hi Ricardo,

In order to detach a network with a VLAN tag from the interface, you should check the checkbox next to the network name under the VLAN column of the network you wish to detach and click on the "Detach" button.

Any reason not to use cluster 3.1 with the advanced UI of the Setup Networks?

Regards,
Meni

----- Original Message -----
From: "Ricardo Esteves" <ricardo.m.esteves@gmail.com>
To: users@ovirt.org
Sent: Thursday, July 19, 2012 5:12:38 PM
Subject: [Users] Delete Network

Hi,

I have configured 3 networks with VLANs on one of my NICs (attached picture). Can someone explain to me how to delete the VI and VI_WIFI networks?

Best regards,
Ricardo Esteves.

Hi,

Ok, I tried that, but for some reason the Detach button is always grey.

I had 3.0 because I still had a host with 3.0, but they are all 3.1 now, so I'm going to change the cluster to 3.1.

Best regards,
Ricardo Esteves.

-----Original Message-----
From: Meni Yakove <myakove@redhat.com>
To: Ricardo Esteves <ricardo.m.esteves@gmail.com>
Cc: users@ovirt.org
Subject: Re: [Users] Delete Network
Date: Thu, 19 Jul 2012 10:44:26 -0400 (EDT)

Hi Ricardo,

In order to detach a network with a VLAN tag from the interface, you should check the checkbox next to the network name under the VLAN column of the network you wish to detach and click on the "Detach" button.

Any reason not to use cluster 3.1 with the advanced UI of the Setup Networks?

Regards,
Meni

[Users] Increase storage domain

Hi,

I've increased the LUN I use as an iSCSI storage domain on my storage, but oVirt still sees the LUN with the old size.

How do I refresh the LUN size, and how do I increase the filesystem of the storage domain?

Best regards,
Ricardo Esteves.

Hi!

Interesting question, I would also be interested in that. LVM would be aware of the expansion, I suppose... Did you run a pvs command for LVM to list the size? It would be nice if LVM automatically changed the LV size... anyone know how this works?

Filesystem size is another thing. Filesystems don't exist on the storage domain, it is only block storage. Your filesystems only exist on your VMs. I suppose you need to run a filesystem tool to expand those, depending on your filesystem.

-----users-bounces@ovirt.org wrote: -----
To: users@ovirt.org
From: Ricardo Esteves
Sent by: users-bounces@ovirt.org
Date: 2012.07.30 18:33
Subject: [Users] Increase storage domain

Hi,

I've increased the LUN I use as an iSCSI storage domain on my storage, but oVirt still sees the LUN with the old size.

How do I refresh the LUN size, and how do I increase the filesystem of the storage domain?

Best regards,
Ricardo Esteves.
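As a quick check (a sketch only; the multipath device path is just the example that appears later in this thread), the size LVM currently has recorded for the PV can be listed with:

# Summary of all physical volumes and the sizes LVM has recorded
pvs

# Full details for the PV backing the storage domain
pvdisplay /dev/mapper/364ed2a35d83f5d68b705e54229020027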

----- Original Message -----
Hi!
Interesting question, I would also be interested in that. LVM would be aware of the expansion, I suppose...Did you run a pvs command for LVM to list the size? It would be nice if LVM automatically would change the LV size...anyone knows how this works...?
1. You should put all hosts except the SPM in maintenance.
2. pvresize the LUN on the SPM host.
3. If other hosts can still 'see' the LUN then you'd need to repeat [2] on those to refresh the device map on all of them (or disconnect the iSCSI session and let oVirt reconnect).
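On the SPM host that sequence looks roughly like this (a hedged sketch; the multipath device name is only the example that appears later in this thread, and the other hosts are assumed to already be in maintenance via the engine):

# Rescan the active iSCSI sessions so the kernel notices the grown LUN
iscsiadm -m session -R

# (If multipath is in use, the multipath map may also need a resize,
#  e.g. via multipathd's 'resize map' command.)

# Let LVM re-read the device size and update the PV metadata
pvresize /dev/mapper/364ed2a35d83f5d68b705e54229020027

# Confirm that the PV now reports the new size
pvs /dev/mapper/364ed2a35d83f5d68b705e54229020027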
Filesystem size is another thing. Filesystem's doesn't exist on the storage domain, it is only block storage. Your filesystems only exists on your VM's. I suppose you need to run a filesystem tool to expand that, depending on your filesystem
There is no filesystem so problem solved :)
-----users-bounces@ovirt.org wrote: -----
To: users@ovirt.org
From: Ricardo Esteves
Sent by: users-bounces@ovirt.org
Date: 2012.07.30 18:33
Subject: [Users] Increase storage domain
Hi,
I've increased the LUN i use as iSCSI storage domain on my storage, but oVirt still sees the LUN with the old size.
How do i refresh the LUN size and how to increase the filesystem of the storage domain?
Best regards, Ricardo Esteves.

Hi,

pvresize doesn't work, still the same size.

How do I disconnect the iSCSI session? Between disconnecting and oVirt connecting again, will I lose the connection to my VMs?

Best regards,
Ricardo Esteves.

-----Original Message-----
From: Ayal Baron <abaron@redhat.com>
To: Johan Kragsterman <johan.kragsterman@capvert.se>
Cc: users@ovirt.org, Ricardo Esteves <ricardo.m.esteves@gmail.com>
Subject: Re: [Users] Increase storage domain
Date: Tue, 31 Jul 2012 02:30:51 -0400 (EDT)

----- Original Message -----
Hi!
Interesting question, I would also be interested in that. LVM would be aware of the expansion, I suppose...Did you run a pvs command for LVM to list the size? It would be nice if LVM automatically would change the LV size...anyone knows how this works...?
1. You should put all hosts except the spm in maintenance. 2. pvresize the LUN on the spm host 3. if other hosts can still 'see' the LUN then you'd need to repeat [2] on those to refresh the device map on all of them (or disconnect the iSCSI session and let oVirt reconnect)
Filesystem size is another thing. Filesystem's doesn't exist on the storage domain, it is only block storage. Your filesystems only exists on your VM's. I suppose you need to run a filesystem tool to expand that, depending on your filesystem
There is no filesystem so problem solved :)
-----users-bounces@ovirt.org wrote: -----
To: users@ovirt.org
From: Ricardo Esteves
Sent by: users-bounces@ovirt.org
Date: 2012.07.30 18:33
Subject: [Users] Increase storage domain
Hi,
I've increased the LUN i use as iSCSI storage domain on my storage, but oVirt still sees the LUN with the old size.
How do i refresh the LUN size and how to increase the filesystem of the storage domain?
Best regards, Ricardo Esteves.

----- Original Message -----
Hi,
pvresize doesn't work, still same size.
How do i disconnect the iscsi session?
Between disconnecting and ovirt connect again, will i loose connection to my VMs?
Of course you would. Your VMs would automatically pause. I doubt this is what you want. What you can do prior to running pvresize is run: iscsiadm -m session -R Hope this helps.
Best regards, Ricardo Esteves.
-----Original Message-----
From: Ayal Baron <abaron@redhat.com>
To: Johan Kragsterman <johan.kragsterman@capvert.se>
Cc: users@ovirt.org, Ricardo Esteves <ricardo.m.esteves@gmail.com>
Subject: Re: [Users] Increase storage domain
Date: Tue, 31 Jul 2012 02:30:51 -0400 (EDT)
----- Original Message -----
Hi!
Interesting question, I would also be interested in that. LVM would be aware of the expansion, I suppose...Did you run a pvs command for LVM to list the size? It would be nice if LVM automatically would change the LV size...anyone knows how this works...?
1. You should put all hosts except the spm in maintenance. 2. pvresize the LUN on the spm host 3. if other hosts can still 'see' the LUN then you'd need to repeat [2] on those to refresh the device map on all of them (or disconnect the iSCSI session and let oVirt reconnect)
Filesystem size is another thing. Filesystem's doesn't exist on the storage domain, it is only block storage. Your filesystems only exists on your VM's. I suppose you need to run a filesystem tool to expand that, depending on your filesystem
There is no filesystem so problem solved :)
-----users-bounces@ovirt.org wrote: -----
To: users@ovirt.org
From: Ricardo Esteves
Sent by: users-bounces@ovirt.org
Date: 2012.07.30 18:33
Subject: [Users] Increase storage domain
Hi,
I've increased the LUN i use as iSCSI storage domain on my storage, but oVirt still sees the LUN with the old size.
How do i refresh the LUN size and how to increase the filesystem of the storage domain?
Best regards, Ricardo Esteves.

On Sat, Aug 11, 2012 at 5:56 PM, Ayal Baron <abaron@redhat.com> wrote:
----- Original Message -----
Hi,
pvresize doesn't work, still same size.
How do i disconnect the iscsi session?
Between disconnecting and ovirt connect again, will i loose connection to my VMs?
Of course you would. Your VMs would automatically pause. I doubt this is what you want. What you can do prior to running pvresize is run: iscsiadm -m session -R Hope this helps.
Did anyone have any success with this? I was unable to get pvdisplay to show the new size until I rebooted the hosts and ran pvresize.

I started with a 1TiB volume on our EqualLogic SAN and set it to 5TiB. I put my non-SPM host into maintenance and ran pvresize on the SPM. I did not expect this to work because of previous experience and this thread. pvresize said it resized, but pvdisplay showed 1023GiB instead of 1TB, so it shrunk it a tiny bit?

Next, I tried `iscsiadm -m session -R` and then pvresize, which said it resized 0 PVs, and pvdisplay confirms no change. I did an `iscsiadm -m node -T <iqn> -u` then `iscsiadm -m node -T <iqn> -l` followed by pvdisplay. pvdisplay spewed an IO error message for the PV and each LV, and I noticed that the device had changed from /dev/mapper/<UUID> to /dev/sdf, which explains why it thought the PV and all the other LVs were missing. I should have deactivated the LVs/VG/PV first, I suppose, and then reactivated them afterwards.

Anyway, this gave me pause, but I'm still pre-production, so I went ahead and did a pvresize, which did nothing, and pvdisplay gave the same output, including errors, as before. So, I rebooted the host, activated the host, ran pvresize on the host, and all is as desired. I then rebooted my other host and all was well when it came up.

I can deal with rebooting each host if necessary, but it is certainly not ideal. Has anyone worked out the correct steps to make this happen without rebooting the hosts and with minimal VM interruption? I might try it a couple more times if not.

The bigger question is, how do I get the engine to see the new size? It is still seeing 1TB. Stopping and starting (why no restart on /etc/init.d/ovirt-engine?) did not cause a refresh. Oop! There it went, just as I was typing this, I saw it change in the window behind this one. So, was it just a cache time out, or did I need to restart the engine as well?

An ideal setup would be for the engine to detect the change and run the necessary commands on each host. If auto detection is not reasonable, an option in the GUI to tell the engine the LUN has changed would be nearly as good.

Alternately, would it just be better to create a new LUN on the iSCSI target and add it to the storage domain? Is that even doable? Certainly it is as simple as adding a new PV to the VG in LVM, but does the engine/GUI support it? It seems a bit more messy than growing an existing domain from an iSCSI target point of view, but are there any technical down sides?

Eventually, I think I'll look into filing a feature request, so I would appreciate it if someone could point me in the right direction, but let's hash out what makes sense here before doing that.

----- Original Message -----
On Sat, Aug 11, 2012 at 5:56 PM, Ayal Baron < abaron@redhat.com > wrote:
----- Original Message -----
Hi,
pvresize doesn't work, still same size.
How do i disconnect the iscsi session?
Between disconnecting and ovirt connect again, will i loose connection to my VMs?
Of course you would. Your VMs would automatically pause. I doubt this is what you want. What you can do prior to running pvresize is run: iscsiadm -m session -R Hope this helps.
Did anyone have any success with this? I was unable to get pvdisplay to show the new size until I rebooted the hosts and ran pvresize. I started with a 1TiB volume on our Equallogics SAN and set it to 5TiB. I put my non-SPM host into maintenance and ran pvresize on the SPM. I did not expect this to work because of previous experience and this thread. pvresize said it resized, but pvdisplay showed 1023GiB instead of 1TB, so it shrunk it a tiny bit?
Next, I tried `iscsiadm -m session -R` and then pvresize which said it resized 0 PVs and pvdisplay confirms no change. I did a `iscsiadm -m node -T <iqn> -u` then `iscsiadm -m node -T <iqn> -l` followed by pvdisplay. pvdisplay spewed an IO error message for the PV and each LV and I noticed that the device had changed from /dev/mapper/<UUID> to /dev/sdf, which explains why it thought the PV and all the other LVs were missing. I should have deactivated the LVs/VG/PV first I supposed, and then reactivated them afterwords.
Anyway, this gave me pause, but I'm still pre-production, so I went ahead and did a pvresize, which did nothing,and pvdisplay gave the same output, including errors, as before. So, I rebooted the host, activated the host, ran pvresize on the host, and all is as desired.
I then rebooted my other host and all was well when it came up. I can deal with rebooting each host if necessary, but it is certainly not ideal. Has anyone worked out the correct steps make this happen without rebooting the hosts and minimal VM interruption? I might try it a couple more times if not.
Sounds really over-complicated for what you're trying to do.

After increasing the size of the LUN on the storage side, try running the following command on the SPM: vdsClient -s 0 getDeviceList (-s is only if SSL is enabled, otherwise just remove it).

After that run pvresize (for LVM to update its metadata). That should be it on the SPM side.

Then, if indeed it succeeds, wait a little while for the engine to catch up (it periodically runs getStoragePoolInfo and updates its info about free space; you can find this in vdsm.log). Regardless, see below for the preferred method.
The bigger question is, how do I get the engine to see the new size? It is still seeing 1TB. Stopping and starting (why no restart on /etc/init.d/ovirt-engine?) did not cause a refresh. Oop! There it went, just as I was typing this, I saw it change in the window behind this one. So, was it just a cache time out, or did I need to restart the engine as well?
An ideal setup would be for the engine to detect the change and run the necessary commands on each host. If auto detection is not reasonable, an option in the GUI to tell the engine the LUN has changed would be nearly as good.
Alternately, would it just be better to create a new LUN on the iSCSI target and add it to the storage domain? Is that even doable?
This flow is fully supported and is currently the easiest way of doing this (supported from the GUI and from the CLI). Simply extend a domain with a new LUN
Certainly it is as simple as adding a new PV to the VG in LVM, but does the engine/GUI support it? It seems a bit more messy than growing an existing domain from an iSCSI target point of view, but are there any technical down sides?
The target has nothing to do with it, you can have multiple LUNs behind the same target.
Eventually, I think I'll look into filing a feature request, so I would appreciate if some one could point me in the right direction, but let's hash out what makes sense here before doing that.

On Wed, Sep 26, 2012 at 6:12 PM, Ayal Baron <abaron@redhat.com> wrote:
Sounds really over-complicated for what you're trying to do.
Agreed! That's why I asked. =) To be clear, all that was necessary to end up where I wanted was to reboot the hosts, which is not terribly complicated, but time consuming and should not be necessary. I tried all those other steps based on recommendations in this thread to avoid the reboot.
After increasing the size of the LUN in the storage side try running the following command on the SPM: vdsClient -s 0 getDeviceList (-s is only if ssl is enabled, otherwise just remove it)
After that run pvresize (for LVM to update its metadata). That should be it on the SPM side.
This did not make any difference. I increased the LUN to 14.1 on the EqualLogic box and then ran these commands (you may want to skip past this to the text below, since I am leaning heavily toward the add-a-LUN method):

[root@cloudhost04 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/mapper/364ed2a35d83f5d68b705e54229020027
VG Name 64c4a870-98dc-40fc-b21e-092156febcdc
PV Size 14.00 TiB / not usable 129.00 MiB
Allocatable yes
PE Size 128.00 MiB
Total PE 114686
Free PE 111983
Allocated PE 2703
PV UUID h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO

[root@cloudhost04 ~]# vdsClient -s 0 getDeviceList
[{'GUID': '364ed2a35d83f5d68b705e54229020027',
'capacity': '15393163837440',
'devtype': 'iSCSI',
'fwrev': '5.2',
'logicalblocksize': '512',
'partitioned': False,
'pathlist': [{'connection': '10.10.5.18',
'initiatorname': 'default',
'iqn': 'iqn.2001-05.com.equallogic:4-52aed6-685d3fd83-2700022942e505b7-cloud2',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '0',
'physdev': 'sdd',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': '100E-00',
'pvUUID': 'h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO',
'serial': '',
'vendorID': 'EQLOGIC',
'vgUUID': 'XtdGHH-5WwC-oWRa-bv0V-me7t-T6ti-M9WKd2'}]

[root@cloudhost04 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/mapper/364ed2a35d83f5d68b705e54229020027
VG Name 64c4a870-98dc-40fc-b21e-092156febcdc
PV Size 14.00 TiB / not usable 129.00 MiB
Allocatable yes
PE Size 128.00 MiB
Total PE 114686
Free PE 111983
Allocated PE 2703
PV UUID h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO

[root@cloudhost04 ~]# pvresize /dev/mapper/364ed2a35d83f5d68b705e54229020027
Physical volume "/dev/mapper/364ed2a35d83f5d68b705e54229020027" changed
1 physical volume(s) resized / 0 physical volume(s) not resized

[root@cloudhost04 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/mapper/364ed2a35d83f5d68b705e54229020027
VG Name 64c4a870-98dc-40fc-b21e-092156febcdc
PV Size 14.00 TiB / not usable 129.00 MiB
Allocatable yes
PE Size 128.00 MiB
Total PE 114686
Free PE 111983
Allocated PE 2703
PV UUID h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO

So, not change.
Then if indeed it succeeds, wait a little while for engine to catch up (it periodically runs getStoragePoolInfo and updates its info about free space, you can find this in vdsm.log) regardless, see below for the preferred method.
Thanks for the confirmation. Any idea what the interval is?
Alternately, would it just be better to create a new LUN on the iSCSI
target and add it to the storage domain? Is that even doable?
This flow is fully supported and is currently the easiest way of doing this (supported from the GUI and from the CLI). Simply extend a domain with a new LUN
Great! I'll give that a shot.
Certainly it is as simple as adding a new PV to the VG in LVM, but does the engine/GUI support it? It seems a bit more messy than growing an existing domain from an iSCSI target point of view, but are there any technical down sides?
The target has nothing to do with it, you can have multiple LUNs behind the same target.
The target serves the LUNs, and it was the additional LUNs that I was referring to as being messier when a single LUN could do the job. Not a big problem, just name the LUNs with the same pattern (cloud<#> in my case), but when all other things are equal, fewer LUNs is less to think about.

However, as I read this email, it occurred to me that some other things might not be equal. Specifically, using multiple LUNs could provide a means of shrinking the storage domain in the future. LVM provides a simple means to remove a PV from a VG, but does the engine support this in the CLI or GUI? That is, if a storage domain has multiple LUNs in it, can those be removed at a later date?

----- Original Message -----
On Wed, Sep 26, 2012 at 6:12 PM, Ayal Baron < abaron@redhat.com > wrote:
Sounds really over-complicated for what you're trying to do.
Agreed! That's why I asked. =) To be clear, all that was necessary to end up where I wanted was to reboot the hosts, which is not terribly complicated, but time consuming and should not be necessary. I tired all those other steps based on recommendations in this thread to avoid the reboot.
After increasing the size of the LUN in the storage side try running the following command on the SPM: vdsClient -s 0 getDeviceList (-s is only if ssl is enabled, otherwise just remove it)
After that run pvresize (for LVM to update its metadata). That should be it on the SPM side.
This did not make any difference. I increased the LUN to 14.1 on the Equallogics box and then ran these commands (you may want to skip past this to the text below since I am leaning heavily toward the add a LUN method):
[root@cloudhost04 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/mapper/364ed2a35d83f5d68b705e54229020027
VG Name 64c4a870-98dc-40fc-b21e-092156febcdc
PV Size 14.00 TiB / not usable 129.00 MiB
Allocatable yes
PE Size 128.00 MiB
Total PE 114686
Free PE 111983
Allocated PE 2703
PV UUID h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO
[root@cloudhost04 ~]# vdsClient -s 0 getDeviceList
[{'GUID': '364ed2a35d83f5d68b705e54229020027',
'capacity': '15393163837440',
'devtype': 'iSCSI',
'fwrev': '5.2',
'logicalblocksize': '512',
'partitioned': False,
'pathlist': [{'connection': '10.10.5.18',
'initiatorname': 'default',
'iqn': 'iqn.2001-05.com.equallogic:4-52aed6-685d3fd83-2700022942e505b7-cloud2',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '0',
'physdev': 'sdd',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': '100E-00',
'pvUUID': 'h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO',
'serial': '',
'vendorID': 'EQLOGIC',
'vgUUID': 'XtdGHH-5WwC-oWRa-bv0V-me7t-T6ti-M9WKd2'}]
[root@cloudhost04 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/mapper/364ed2a35d83f5d68b705e54229020027
VG Name 64c4a870-98dc-40fc-b21e-092156febcdc
PV Size 14.00 TiB / not usable 129.00 MiB
Allocatable yes
PE Size 128.00 MiB
Total PE 114686
Free PE 111983
Allocated PE 2703
PV UUID h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO
[root@cloudhost04 ~]# pvresize /dev/mapper/364ed2a35d83f5d68b705e54229020027
Physical volume "/dev/mapper/364ed2a35d83f5d68b705e54229020027" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
[root@cloudhost04 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/mapper/364ed2a35d83f5d68b705e54229020027
VG Name 64c4a870-98dc-40fc-b21e-092156febcdc
PV Size 14.00 TiB / not usable 129.00 MiB
Allocatable yes
PE Size 128.00 MiB
Total PE 114686
Free PE 111983
Allocated PE 2703
PV UUID h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO
So, not change.
This looks like an LVM issue. Have you tried deactivating the VG before pvresize?
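For reference, a minimal sketch of that suggestion (assuming the domain, and any VMs using it, are already down or in maintenance, and using the VG/PV names from the output above; note that on an oVirt host vdsm normally manages LV activation itself):

# Deactivate the volume group backing the storage domain
vgchange -an 64c4a870-98dc-40fc-b21e-092156febcdc

# Let LVM re-read the grown device and update the PV metadata
pvresize /dev/mapper/364ed2a35d83f5d68b705e54229020027

# Reactivate the volume group afterwards
vgchange -ay 64c4a870-98dc-40fc-b21e-092156febcdc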
Then if indeed it succeeds, wait a little while for engine to catch up (it periodically runs getStoragePoolInfo and updates its info about free space, you can find this in vdsm.log) regardless, see below for the preferred method.
Thanks for the confirmation. Any idea what the interval is?
Alternately, would it just be better to create a new LUN on the iSCSI target and add it to the storage domain? Is that even doable?
This flow is fully supported and is currently the easiest way of doing this (supported from the GUI and from the CLI). Simply extend a domain with a new LUN
Great! I'll give that a shot.
Certainly it is as simple as adding a new PV to the VG in LVM, but does the engine/GUI support it? It seems a bit more messy than growing an existing domain from an iSCSI target point of view, but are there any technical down sides?
The target has nothing to do with it, you can have multiple LUNs behind the same target.
The target serves the LUNs and it was the additional LUNs that I was referring to as being messier when a single LUN could do the job. Not a big problem, just name the LUNs the same the same patters (cloud<#> in my case), but when all other things are equal, less LUNs is less to think about.
However, as I read this email, it occurred that some other things might not be equal. Specifically, using multiple LUNs could provide a means of shrinking the storage domain in the future. LVM provides a simple means to remove a PV from a VG, but does the engine support this in the CLI or GUI? That is, if the a storage domain has multiple LUNs in it, can those be removed at a later date?
Not yet.

On Thu, Sep 27, 2012 at 11:08 AM, Ayal Baron <abaron@redhat.com> wrote:
Alan Johnson < alan@datdec.com > meant to write: So, no change.
This looks like an LVM issue. Have you tried deactivating the VG before pvresize?
I have not, but I don't think I'll bother playing with that any more, since there is a more accepted way of growing that has no significant downside and leaves open the potential for more functionality. Good to know that I should not have to make the change.

I should mention that the host is running a minimal install of CentOS 6.3, updated, and then tweaked by oVirt. Perhaps there is some other package that enables this functionality?
However, as I read this email, it occurred that some other things might not be equal. Specifically, using multiple LUNs could provide a means of shrinking the storage domain in the future. LVM provides a simple means to remove a PV from a VG, but does the engine support this in the CLI or GUI? That is, if the a storage domain has multiple LUNs in it, can those be removed at a later date?
Not yet.
Does this mean it is in the works? If not, where could I put in such a feature request?

Certainly, I have no pressing need of this, but it seems like a fairly simple thing to implement since I have done it so easily in the past with just a couple of commands outside of an oVirt environment. I believe the primary purpose of the LVM functionality was to enable removal of dying PVs before they take out an entire VG. No reason it would not work just as well to remove a healthy PV. It can take a long time to move all the extents off the requested PV, but there is a command to show the progress, so it would also be easy to wrap that into the GUI.

----- Original Message -----
On Thu, Sep 27, 2012 at 11:08 AM, Ayal Baron < abaron@redhat.com > wrote:
Alan Johnson < alan@datdec.com > meant to write: So, no change.
This looks like an LVM issue. Have you tried deactivating the VG before pvresize?
I have not, but I don't think I'll bother playing with that any more since there is a more accepted way of growing that has no significant down side and leaves open the potential for more functionality. Good to know that I should not have to make the change.
I should mention that the host is running is a minimal install of CentOS 6.3, updated, and then tweaked by oVirt. Perhaps there is some other package that enables this functionality?
No, there isn't. LVM should work fine.
However, as I read this email, it occurred that some other things might not be equal. Specifically, using multiple LUNs could provide a means of shrinking the storage domain in the future. LVM provides a simple means to remove a PV from a VG, but does the engine support this in the CLI or GUI? That is, if the a storage domain has multiple LUNs in it, can those be removed at a later date?
Not yet.
Does this mean it is in the works? If not, where could I put in such feature request?
Certainly, I have no pressing need of this, but it seems like a fairly simple thing to implement since I have done it so easily in the past with a just a couple of commands outside of an oVirt environment. I believe the primary purpose of the LVM functionality was to enable removal of dying PVs before they take out an entire VG. No reason it would not work just as well to remove a healthy PV. It can take a long time to move all the extents off the PV requested, but there is command to show the progress, so it would also be easy to wrap that in to the GUI.
What's simple in a single host environment is really not that simple when it comes to clusters. The tricky part is the coordination between the different hosts and doing it live or with minimal impact.

On Sat, Sep 29, 2012 at 3:47 PM, Ayal Baron <abaron@redhat.com> wrote:
However, as I read this email, it occurred that some other things might not be equal. Specifically, using multiple LUNs could provide a means of shrinking the storage domain in the future. LVM provides a simple means to remove a PV from a VG, but does the engine support this in the CLI or GUI? That is, if the a storage domain has multiple LUNs in it, can those be removed at a later date?
Not yet.
Does this mean it is in the works? If not, where could I put in such feature request?
Certainly, I have no pressing need of this, but it seems like a fairly simple thing to implement since I have done it so easily in the past with a just a couple of commands outside of an oVirt environment. I believe the primary purpose of the LVM functionality was to enable removal of dying PVs before they take out an entire VG. No reason it would not work just as well to remove a healthy PV. It can take a long time to move all the extents off the PV requested, but there is command to show the progress, so it would also be easy to wrap that in to the GUI.
What's simple in a single host environment is really not that simple when it comes to clusters. The tricky part is the coordination between the different hosts and doing it live or with minimal impact.
Fair enough, but it seems that the cluster environment has been addressed with the SPM mechanism for all things LVM. Certainly, the initial coding of the feature would be fairly trivial, but I can imagine that testing in the cluster environment might expose additional complexity.

----- Original Message -----
On Sat, Sep 29, 2012 at 3:47 PM, Ayal Baron < abaron@redhat.com > wrote:
However, as I read this email, it occurred that some other things might not be equal. Specifically, using multiple LUNs could provide a means of shrinking the storage domain in the future. LVM provides a simple means to remove a PV from a VG, but does the engine support this in the CLI or GUI? That is, if the a storage domain has multiple LUNs in it, can those be removed at a later date?
Not yet.
Does this mean it is in the works? If not, where could I put in such feature request?
Certainly, I have no pressing need of this, but it seems like a fairly simple thing to implement since I have done it so easily in the past with a just a couple of commands outside of an oVirt environment. I believe the primary purpose of the LVM functionality was to enable removal of dying PVs before they take out an entire VG. No reason it would not work just as well to remove a healthy PV. It can take a long time to move all the extents off the PV requested, but there is command to show the progress, so it would also be easy to wrap that in to the GUI.
What's simple in a single host environment is really not that simple when it comes to clusters. The tricky part is the coordination between the different hosts and doing it live or with minimal impact.
Fair enough, but it seems that the cluster environment has been addressed with the SPM mechanism for all things LVM. Certainly, initial coding the feature would be fairly trivial, but I can imagine that testing in the cluster environment might expose additional complexity.
The actual data move is done by the SPM and is a simple pvmove command, as you've stated. The simple way of doing this would be to put the domain in maintenance mode, then pvmove on the SPM (currently you can't run such operations while a domain is in maintenance, but it just makes sense to do it), and then activate the domain. This means, however, that you would not be able to run any VM that has disks on this VG, even ones that reside entirely on other PVs.

If, however, we'd want to do it 'semi live' then it would become much more complex. First you need to realize that LVs are not neatly dispersed between PVs. You can have extents from different PVs for the same LV (esp. after lvextend, which happens automatically in the system when there are snapshots). So we'd need to map all the LVs which are affected and prevent running any VM that uses these LVs. Then we'd also need to guarantee there is enough space to move these extents to (again, in addition to the user creating new objects, there are automatic lvextend operations going on, so we'd need a way to reserve space on the VG for this operation). Once we've done this we'd need to run the op, and then we'd need to make sure that all the hosts see things properly.
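Outside of oVirt, the LVM-level operation being described looks roughly like this (a sketch only, assuming the domain is in maintenance and that the VG spans more than one LUN; the device and VG names are just the examples quoted earlier in this thread):

# Move all allocated extents off the PV onto the remaining PVs in the VG;
# pvmove prints its progress as it copies extents
pvmove /dev/mapper/364ed2a35d83f5d68b705e54229020027

# Once the PV is empty, drop it from the volume group
vgreduce 64c4a870-98dc-40fc-b21e-092156febcdc /dev/mapper/364ed2a35d83f5d68b705e54229020027

# Remove the LVM label from the now-unused LUN
pvremove /dev/mapper/364ed2a35d83f5d68b705e54229020027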

[Users] oVIrt 3.1 - Xeon E5530 - Wrong cpu identification

Hi,

With the latest version of oVirt my hosts' CPU is not correctly identified: my hosts have an Intel Xeon E5530 (Nehalem family), but it is being identified as the Conroe family.

My installed versions:

ovirt-engine-3.1.0-2.fc17.noarch
ovirt-node-iso-2.5.0-2.0.fc17.iso

Best regards,
Ricardo Esteves.

On 08/01/2012 12:57 PM, Ricardo Esteves wrote:
Hi,
With the latest version of ovirt my host's CPU is not correctly identified, my host's have an Intel Xeon E5530 (Nehalem family), but is being identified as Conrad family.
My installed versions:
ovirt-engine-3.1.0-2.fc17.noarch ovirt-node-iso-2.5.0-2.0.fc17.iso
Best regards, Ricardo Esteves.
please share: vdsClient -s 0 getVdsCaps | grep -i flags

[root@blade4 ~]# vdsClient -s 0 getVdsCaps | grep -i flags
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsClient.py", line 2275, in <module>
  File "/usr/share/vdsm/vdsClient.py", line 403, in do_getCap
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1224, in __call__
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1578, in __request
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1264, in request
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1292, in single_request
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1439, in send_content
  File "/usr/lib64/python2.7/httplib.py", line 954, in endheaders
  File "/usr/lib64/python2.7/httplib.py", line 814, in _send_output
  File "/usr/lib64/python2.7/httplib.py", line 776, in send
  File "/usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py", line 98, in connect
  File "/usr/lib64/python2.7/ssl.py", line 381, in wrap_socket
  File "/usr/lib64/python2.7/ssl.py", line 141, in __init__
SSLError: [Errno 0] _ssl.c:340: error:00000000:lib(0):func(0):reason(0)

-----Original Message-----
From: Itamar Heim <iheim@redhat.com>
To: Ricardo Esteves <ricardo.m.esteves@gmail.com>
Cc: users@ovirt.org
Subject: Re: [Users] oVIrt 3.1 - Xeon E5530 - Wrong cpu identification
Date: Wed, 01 Aug 2012 13:59:27 +0300

On 08/01/2012 12:57 PM, Ricardo Esteves wrote:
Hi,
With the latest version of ovirt my host's CPU is not correctly identified, my host's have an Intel Xeon E5530 (Nehalem family), but is being identified as Conrad family.
My installed versions:
ovirt-engine-3.1.0-2.fc17.noarch ovirt-node-iso-2.5.0-2.0.fc17.iso
Best regards, Ricardo Esteves.
please share: vdsClient -s 0 getVdsCaps | grep -i flags

On 08/01/2012 02:28 PM, Ricardo Esteves wrote:
[root@blade4 ~]# vdsClient -s 0 getVdsCaps | grep -i flags
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsClient.py", line 2275, in <module>
  File "/usr/share/vdsm/vdsClient.py", line 403, in do_getCap
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1224, in __call__
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1578, in __request
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1264, in request
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1292, in single_request
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1439, in send_content
  File "/usr/lib64/python2.7/httplib.py", line 954, in endheaders
  File "/usr/lib64/python2.7/httplib.py", line 814, in _send_output
  File "/usr/lib64/python2.7/httplib.py", line 776, in send
  File "/usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py", line 98, in connect
  File "/usr/lib64/python2.7/ssl.py", line 381, in wrap_socket
  File "/usr/lib64/python2.7/ssl.py", line 141, in __init__
SSLError: [Errno 0] _ssl.c:340: error:00000000:lib(0):func(0):reason(0)
Did you somehow disable SSL? Is vdsm running? What's its status in the engine? What does this return: virsh capabilities | grep model
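As an aside (not something suggested in this thread), two quick host-side checks when the engine reports a missing CPU model are whether the virtualization flag is visible to the host at all, and what model libvirt itself detects:

# Count logical CPUs exposing the Intel VT-x flag; 0 usually means
# virtualization is disabled in the BIOS
grep -c vmx /proc/cpuinfo

# The CPU model libvirt detects (the same check asked for above)
virsh capabilities | grep -i model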
-----Original Message-----
From: Itamar Heim <iheim@redhat.com>
To: Ricardo Esteves <ricardo.m.esteves@gmail.com>
Cc: users@ovirt.org
Subject: Re: [Users] oVIrt 3.1 - Xeon E5530 - Wrong cpu identification
Date: Wed, 01 Aug 2012 13:59:27 +0300
On 08/01/2012 12:57 PM, Ricardo Esteves wrote:
Hi,
With the latest version of ovirt my host's CPU is not correctly identified, my host's have an Intel Xeon E5530 (Nehalem family), but is being identified as Conrad family.
My installed versions:
ovirt-engine-3.1.0-2.fc17.noarch ovirt-node-iso-2.5.0-2.0.fc17.iso
Best regards, Ricardo Esteves.
please share: vdsClient -s 0 getVdsCaps | grep -i flags

I didn't disabled anything, but after installing the node when i configure the option "oVirt Engine" it gives an error saying it can't download the certificate, but i had this error with previous versions of the node, and it detected ok the CPU family. This is the output after a fresh install of the node: [root@blade4 ~]# vdsClient -s 0 getVdsCaps | grep -i flags Traceback (most recent call last): File "/usr/share/vdsm/vdsClient.py", line 2275, in <module> File "/usr/share/vdsm/vdsClient.py", line 403, in do_getCap File "/usr/lib64/python2.7/xmlrpclib.py", line 1224, in __call__ File "/usr/lib64/python2.7/xmlrpclib.py", line 1578, in __request File "/usr/lib64/python2.7/xmlrpclib.py", line 1264, in request File "/usr/lib64/python2.7/xmlrpclib.py", line 1292, in single_request File "/usr/lib64/python2.7/xmlrpclib.py", line 1439, in send_content File "/usr/lib64/python2.7/httplib.py", line 954, in endheaders File "/usr/lib64/python2.7/httplib.py", line 814, in _send_output File "/usr/lib64/python2.7/httplib.py", line 776, in send File "/usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py", line 91, in connect File "/usr/lib64/python2.7/socket.py", line 553, in create_connection gaierror: [Errno -2] Name or service not known [root@blade4 ~]# virsh capabilities <capabilities> <host> <uuid>35303737-3830-435a-4a30-30333035455a</uuid> <cpu> <arch>x86_64</arch> <model>Nehalem</model> <vendor>Intel</vendor> <topology sockets='1' cores='4' threads='2'/> <feature name='rdtscp'/> <feature name='dca'/> <feature name='pdcm'/> <feature name='xtpr'/> <feature name='tm2'/> <feature name='est'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='dtes64'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='acpi'/> <feature name='ds'/> <feature name='vme'/> </cpu> <power_management/> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> </uri_transports> </migration_features> <topology> <cells num='1'> <cell id='0'> <cpus num='8'> <cpu id='0'/> <cpu id='1'/> <cpu id='2'/> <cpu id='3'/> <cpu id='4'/> <cpu id='5'/> <cpu id='6'/> <cpu id='7'/> </cpus> </cell> </cells> </topology> <secmodel> <model>selinux</model> <doi>0</doi> </secmodel> </host> <guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/bin/qemu-system-x86_64</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> <domain type='qemu'> </domain> <domain type='kvm'> <emulator>/usr/bin/qemu-kvm</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> </domain> </arch> <features> <cpuselection/> <deviceboot/> <pae/> <nonpae/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest> <guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/bin/qemu-system-x86_64</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> <domain 
type='qemu'> </domain> <domain type='kvm'> <emulator>/usr/bin/qemu-kvm</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest> </capabilities> When i add the host to oVirt i get this message: Host blade4.vi.pt moved to Non-Operational state as host does not meet the cluster's minimum CPU level. Missing CPU features : model_Nehalem Best regards, Ricardo Esteves. -----Original Message----- From: Itamar Heim <iheim@redhat.com> To: Ricardo Esteves <ricardo.m.esteves@gmail.com> Cc: users@ovirt.org Subject: Re: [Users] oVIrt 3.1 - Xeon E5530 - Wrong cpu identification Date: Wed, 01 Aug 2012 15:43:22 +0300 On 08/01/2012 02:28 PM, Ricardo Esteves wrote:
[root@blade4 ~]# vdsClient -s 0 getVdsCaps | grep -i flags
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsClient.py", line 2275, in <module>
  File "/usr/share/vdsm/vdsClient.py", line 403, in do_getCap
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1224, in __call__
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1578, in __request
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1264, in request
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1292, in single_request
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1439, in send_content
  File "/usr/lib64/python2.7/httplib.py", line 954, in endheaders
  File "/usr/lib64/python2.7/httplib.py", line 814, in _send_output
  File "/usr/lib64/python2.7/httplib.py", line 776, in send
  File "/usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py", line 98, in connect
  File "/usr/lib64/python2.7/ssl.py", line 381, in wrap_socket
  File "/usr/lib64/python2.7/ssl.py", line 141, in __init__
SSLError: [Errno 0] _ssl.c:340: error:00000000:lib(0):func(0):reason(0)
Did you somehow disable ssl? Is vdsm running? What's its status in the engine?

What does this return:

    virsh capabilities | grep model
-----Original Message-----
From: Itamar Heim <iheim@redhat.com>
To: Ricardo Esteves <ricardo.m.esteves@gmail.com>
Cc: users@ovirt.org
Subject: Re: [Users] oVIrt 3.1 - Xeon E5530 - Wrong cpu identification
Date: Wed, 01 Aug 2012 13:59:27 +0300
On 08/01/2012 12:57 PM, Ricardo Esteves wrote:
Hi,
With the latest version of oVirt my hosts' CPUs are not correctly identified: my hosts have an Intel Xeon E5530 (Nehalem family), but they are being identified as the Conroe family.
My installed versions:
ovirt-engine-3.1.0-2.fc17.noarch
ovirt-node-iso-2.5.0-2.0.fc17.iso
Best regards, Ricardo Esteves.
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
please share: vdsClient -s 0 getVdsCaps | grep -i flags

And now, after a reboot of the node, I get this:

[root@blade4 ~]# virsh capabilities
Segmentation fault

On 02/08/2012, at 2:29 AM, Ricardo Esteves wrote:
And now, after reboot of the node, i get this:
[root@blade4 ~]# virsh capabilities Segmentation fault
When that seg fault happens, does anything get printed to /var/log/messages?

Kind of wondering if there's something else at play here, which might show up there. Worth a look. :)

Regards and best wishes,

Justin Clift

--
Aeolus Community Manager
http://www.aeolusproject.org
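A rough sketch of that check (assuming the node keeps the default /var/log/messages syslog target; adjust the paths if your build logs elsewhere):

    # grep -i -E 'segfault|libvirtd' /var/log/messages | tail -n 20
    # dmesg | tail -n 20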

On Fri, Aug 03, 2012 at 06:47:44AM +1000, Justin Clift wrote:
On 02/08/2012, at 2:29 AM, Ricardo Esteves wrote:
And now, after reboot of the node, i get this:
[root@blade4 ~]# virsh capabilities Segmentation fault
When that seg fault happens, does anything get printed to /var/log/messages?
Kind of wondering if there's something else at play here, which might show up there. Worth a look at. :)
Regards and best wishes,
Justin Clift
Please note that vdsm hacks libvirt to use sasl authentication, which may be related to this crash. Does anything look better with `virsh -r`?
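For reference, `virsh -r` opens a read-only connection, which side-steps the SASL credentials that vdsm configures; a minimal sketch:

    # virsh -r capabilities
    # virsh -r capabilities | grep '<model>'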

Ok, I fixed the SSL problem: my oVirt manager machine's iptables was blocking port 8443.

I also reinstalled the latest version of the node (ovirt-node-iso-2.5.1-1.0.fc17.iso), but oVirt manager still doesn't recognize the CPU. The host status remains Non-Operational:

Host localhost.localdomain moved to Non-Operational state as host does not meet the cluster's minimum CPU level. Missing CPU features : model_Nehalem

Here are the outputs of the commands vdsClient and virsh:

[root@blade4 ~]# vdsClient -s 0 getVdsCaps
HBAInventory = {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:blade4.vi.pt'}], 'FC': []}
ISCSIInitiatorName = iqn.1994-05.com.redhat:blade4.vi.pt
bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}
clusterLevels = ['3.0', '3.1']
cpuCores = 4
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_coreduo,model_Conroe
cpuModel = Intel(R) Xeon(R) CPU E5530 @ 2.40GHz
cpuSockets = 1
cpuSpeed = 1600.000
emulatedMachines = ['pc-0.15', 'pc-1.0', 'pc', 'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc', 'pc-0.15', 'pc-1.0', 'pc', 'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc']
guestOverhead = 65
hooks = {}
kvmEnabled = true
lastClient = 192.168.10.40
lastClientIface = ovirtmgmt
management_ip =
memSize = 17926
netConfigDirty = False
networks = {'ovirtmgmt': {'addr': '192.168.10.24', 'cfg': {'IPV6FORWARDING': 'no', 'IPV6INIT': 'no', 'IPADDR': '192.168.10.24', 'ONBOOT': 'yes', 'IPV6_AUTOCONF': 'no', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'DEVICE': 'ovirtmgmt', 'PEERNTP': 'yes', 'TYPE': 'Bridge', 'GATEWAY': '192.168.10.254'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '192.168.10.254', 'ports': ['em1.10']}}
nics = {'p1p1': {'hwaddr': 'd8:d3:85:67:e3:b8', 'netmask': '', 'speed': 0, 'addr': '', 'mtu': '1500'}, 'em1': {'hwaddr': 'd8:d3:85:bf:e9:b0', 'netmask': '', 'speed': 1000, 'addr': '', 'mtu': '1500'}, 'rename3': {'hwaddr': 'd8:d3:85:67:e3:ba', 'netmask': '', 'speed': 0, 'addr': '', 'mtu': '1500'}, 'em2': {'hwaddr': 'd8:d3:85:bf:e9:b4', 'netmask': '', 'speed': 0, 'addr': '', 'mtu': '1500'}}
operatingSystem = {'release': '1', 'version': '17', 'name': 'oVirt Node'}
packages2 = {'kernel': {'release': '2.fc17.x86_64', 'buildtime': 1343659739.0, 'version': '3.5.0'}, 'spice-server': {'release': '5.fc17', 'buildtime': '1336983054', 'version': '0.10.1'}, 'vdsm': {'release': '6.fc17', 'buildtime': '1343817997', 'version': '4.10.0'}, 'qemu-kvm': {'release': '18.fc17', 'buildtime': '1342650221', 'version': '1.0'}, 'libvirt': {'release': '3.fc17', 'buildtime': '1340891887', 'version': '0.9.11.4'}, 'qemu-img': {'release': '18.fc17', 'buildtime': '1342650221', 'version': '1.0'}}
reservedMem = 321
software_revision = 6
software_version = 4.10
supportedProtocols = ['2.2', '2.3']
supportedRHEVMs = ['3.0', '3.1']
uuid = 37373035-3038-5A43-4A30-30333035455A_d8:d3:85:67:e3:b8
version_name = Snow Man
vlans = {'em1.10': {'netmask': '', 'iface': 'em1', 'addr': '', 'mtu': '1500'}}
vmTypes = ['kvm']

[root@blade4 ~]# virsh capabilities
<capabilities>
  <host>
    <uuid>35303737-3830-435a-4a30-30333035455a</uuid>
    <cpu> <arch>x86_64</arch> <model>Nehalem</model> <vendor>Intel</vendor> <topology sockets='1' cores='4' threads='2'/> <feature name='rdtscp'/> <feature name='dca'/> <feature name='pdcm'/> <feature name='xtpr'/> <feature name='tm2'/> <feature name='est'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='dtes64'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='acpi'/> <feature name='ds'/> <feature name='vme'/> </cpu>
    <power_management/>
    <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> </uri_transports> </migration_features>
    <topology> <cells num='1'> <cell id='0'> <cpus num='8'> <cpu id='0'/> <cpu id='1'/> <cpu id='2'/> <cpu id='3'/> <cpu id='4'/> <cpu id='5'/> <cpu id='6'/> <cpu id='7'/> </cpus> </cell> </cells> </topology>
    <secmodel> <model>selinux</model> <doi>0</doi> </secmodel>
  </host>
  <guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/bin/qemu-system-x86_64</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> <domain type='qemu'> </domain> <domain type='kvm'> <emulator>/usr/bin/qemu-kvm</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> </domain> </arch> <features> <cpuselection/> <deviceboot/> <pae/> <nonpae/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest>
  <guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/bin/qemu-system-x86_64</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> <domain type='qemu'> </domain> <domain type='kvm'> <emulator>/usr/bin/qemu-kvm</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest>
</capabilities>

-----Original Message-----
From: Dan Kenigsberg <danken@redhat.com>
To: Justin Clift <jclift@redhat.com>
Cc: Ricardo Esteves <maverick.pt@gmail.com>, users@ovirt.org
Subject: Re: [Users] oVIrt 3.1 - Xeon E5530 - Wrong cpu identification
Date: Sun, 5 Aug 2012 11:18:25 +0300

On Fri, Aug 03, 2012 at 06:47:44AM +1000, Justin Clift wrote:
On 02/08/2012, at 2:29 AM, Ricardo Esteves wrote:
And now, after reboot of the node, i get this:
[root@blade4 ~]# virsh capabilities Segmentation fault
When that seg fault happens, does anything get printed to /var/log/messages?
Kind of wondering if there's something else at play here, which might show up there. Worth a look at. :)
Regards and best wishes,
Justin Clift
Please note that vdsm hacks libvirt to use sasl authentication, which may be related to this crash. Does anything look better with `virsh -r`?

Ricardo,

From your getVdsCaps, I see that it reports "model_coreduo,model_Conroe". Changing your compatibility level for the data center to Conroe will allow you to bring that host up.

- Nick

On 08/09/2012 04:17 PM, Ricardo Esteves wrote:
Ok, i fixed the ssl problem, my ovirt manager machine iptables was blocking the 8443 port.
I also reinstalled the latest version of the node (ovirt-node-iso-2.5.1-1.0.fc17.iso), but ovirt manager still doesn't recognize the CPU.
The host status remains Non Operational :
Host localhost.localdomain moved to Non-Operational state as host does not meet the cluster's minimum CPU level. Missing CPU features : model_Nehalem
danken - libvirt reports nehalem, yet vdsm reports conroe?

On Thu, Aug 09, 2012 at 05:17:14PM +0300, Itamar Heim wrote:
On 08/09/2012 04:17 PM, Ricardo Esteves wrote:
Ok, i fixed the ssl problem, my ovirt manager machine iptables was blocking the 8443 port.
I also reinstalled the latest version of the node (ovirt-node-iso-2.5.1-1.0.fc17.iso), but ovirt manager still doesn't recognize the CPU.
The host status remains Non Operational :
Host localhost.localdomain moved to Non-Operational state as host does not meet the cluster's minimum CPU level. Missing CPU features : model_Nehalem
danken - libvirt reports nehalem, yet vdsm reports conroe?
Interesting... If you put

    <cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>

in /tmp/cpu.xml, what does

    virsh -r cpu-compare /tmp/cpu.xml

report?
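A sketch of that check end to end (the /tmp/cpu.xml path is just the example above; `-r` keeps the connection read-only so no SASL credentials are needed), plus a quick way to compare what libvirt and vdsm each report:

    # cat > /tmp/cpu.xml << EOF
    <cpu match="minimum">
      <model>Nehalem</model>
      <vendor>Intel</vendor>
    </cpu>
    EOF
    # virsh -r cpu-compare /tmp/cpu.xml
    # virsh -r capabilities | grep '<model>'
    # vdsClient -s 0 getVdsCaps | grep cpuFlags | tr ',' '\n' | grep '^model_'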

[root@blade4 ~]# virsh -r cpu-compare /tmp/cpu.xml
Host CPU is a superset of CPU described in /tmp/cpu.xml

On Thu, Aug 09, 2012 at 05:22:09PM +0100, Ricardo Esteves wrote:
[root@blade4 ~]# virsh -r cpu-compare /tmp/cpu.xml
Host CPU is a superset of CPU described in /tmp/cpu.xml
(Top posting makes it very difficult to follow a long thread.)

I have a hunch that this is something that has been fixed by

    http://gerrit.ovirt.org/#/c/5035/
    Find vendor for all cpu models, including those based on another cpu model.

It is accepted upstream, but unfortunately did not make it into the ovirt-3.1 release. Could you apply the patch to Vdsm and see if it fixes the reported cpu level?

Dan.
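One possible way to try that, as a sketch only - the clone URL and the patchset number below are placeholders, so take the exact "git fetch" line from the gerrit change page, and adjust the -p strip level to the paths inside the patch:

On a machine with git:

    $ git clone git://gerrit.ovirt.org/vdsm.git && cd vdsm
    $ git fetch git://gerrit.ovirt.org/vdsm.git refs/changes/35/5035/1
    $ git format-patch -1 FETCH_HEAD --stdout > cpu-vendor.patch

Then on the node, apply it to the installed copy of vdsm and restart the daemon:

    # cd /usr/share/vdsm
    # patch -p2 < /path/to/cpu-vendor.patch
    # systemctl restart vdsmd.service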

Hi,

In the attached picture "net1.jpg" I have my initial network configuration: physical card (em1) with VLAN 10 (em1.10) bridged to ovirtmgmt with IP 192.168.10.25 and default gateway 192.168.10.254.

Now I want to bond the em1 card with the p1p1 card (attached picture net2.jpg).

But if I fill in the default gateway it gives me this error: "The default gateway should be set only on engine network."

If I don't fill in the default gateway, when I click OK I lose the connection to the server, and then after more or less 1 minute the server automatically reboots.

Has anyone had this kind of problem?

Best regards,
Ricardo Esteves.

On 13/08/2012, at 10:55 PM, Ricardo Esteves wrote:
Hi,
In the attached picture "net1.jpg" I have my initial network configuration.
Physical card (em1) with vlan 10 (em1.10) bridged to the ovirtmgmt with IP 192.168.10.25 and default gw 192.168.10.254.
Now i want to bond em1 card with p1p1 card (attached picture net2.jpg)
But if i fill the default gw it gives me this error: The default gateway should be set only on engine network.
If I don't fill in the default gateway, when I click OK I lose the connection to the server, and then after more or less 1 minute the server automatically reboots.
Anyone had this kind of problem?
Best regards,
Ricardo Esteves.

Two thoughts here: ;)

* The "lose connection to the server" bit sort of sounds like this bug:

  https://bugzilla.redhat.com/show_bug.cgi?id=838816

  Reckon that's a match?

* Aside from that, you might have to manually configure networking for the hosts from the command line, using the normal Linux commands (not oVirt-specific ones), i.e. creating the bridging and everything manually.

This is the approach I had to take last week when trying out Aeolus with oVirt 3.1. The network layer breaks when adding a 2nd interface, but I was able to work around it by manually creating the bridges from the CLI, after having defined the logical networks in the oVirt web UI.

The "configure things manually" approach isn't all that documented either. I kind of stumbled my way through, by looking at the examples here:

  http://wiki.ovirt.org/wiki/Installing_VDSM_from_rpm#Configuring_the_bridge_I...

Is any of that helpful?

Regards and best wishes,

Justin Clift

--
Aeolus Community Manager
http://www.aeolusproject.org

On 13 Aug 2012, at 23:56, Justin Clift wrote:
<snip>
The "configure things manually" approach isn't all that documented either. I kind of stumbled my way through by looking at the examples here: http://wiki.ovirt.org/wiki/Installing_VDSM_from_rpm#Configuring_the_bridge_Interface

I second that! I just trial-and-errored my way through bond -> vlan -> bridge. Let's just say that's for people who are up for a challenge. But luckily, I'm one for sharing:

Start by arranging for the bonding module to be loaded at boot:

# cat > /etc/modprobe.d/bonding.conf << EOF
alias bond0 bonding
EOF

Then define the bond. This is LACP mode:

# cat > /etc/sysconfig/network-scripts/ifcfg-bond0 << EOF
DEVICE=bond0
NM_CONTROLLED=no
USERCTL=no
BOOTPROTO=none
BONDING_OPTS="mode=4 miimon=100"
TYPE=Ethernet
MTU=9000
EOF

Then "enslave" the physical NICs to the bond:

# cat > /etc/sysconfig/network-scripts/ifcfg-em1 << EOF
NM_CONTROLLED="no"
BOOTPROTO="none"
DEVICE="em1"
ONBOOT="yes"
USERCTL=no
MASTER=bond0
SLAVE=yes
EOF

# cat > /etc/sysconfig/network-scripts/ifcfg-em2 << EOF
NM_CONTROLLED="no"
BOOTPROTO="none"
DEVICE="em2"
ONBOOT="yes"
USERCTL=no
MASTER=bond0
SLAVE=yes
EOF

Then create VLAN interfaces on top of the bond. In this example, I'm using VLAN IDs 1 and 2:

# cat > /etc/sysconfig/network-scripts/ifcfg-bond0.1 << EOF
DEVICE=bond0.1
VLAN=yes
BOOTPROTO=none
NM_CONTROLLED=no
BRIDGE=br0
MTU=1500
EOF

# cat > /etc/sysconfig/network-scripts/ifcfg-bond0.2 << EOF
DEVICE=bond0.2
VLAN=yes
BOOTPROTO=none
NM_CONTROLLED=no
BRIDGE=ovirtmgmt
MTU=9000
EOF

Lastly, create the bridges on top of the VLAN interfaces. The names, as I have understood it, can be whatever you want, but one needs to be called "ovirtmgmt" of course:

# cat > /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt << EOF
TYPE=Bridge
NM_CONTROLLED="no"
BOOTPROTO="none"
DEVICE="ovirtmgmt"
ONBOOT="yes"
IPADDR=XXX.XXX.XXX.XXX
NETMASK=255.255.255.0
EOF

# cat > /etc/sysconfig/network-scripts/ifcfg-br0 << EOF
TYPE=Bridge
NM_CONTROLLED="no"
BOOTPROTO="none"
DEVICE="br0"
ONBOOT="yes"
IPADDR=XXX.XXX.XXX.XXX
NETMASK=255.255.255.0
EOF

The gateway goes into:

# cat > /etc/sysconfig/network << EOF
GATEWAY=XXX.XXX.XXX.XXX
EOF

This way, you can have almost as many interfaces as you want (4096 VLANs) with only two physical NICs. I also gave an example of how you tune Jumbo Frames to be active on some interfaces and keep the regular frame size on the rest. Jumbo must only be active on interfaces that aren't routed, since the default routing MTU is 1500.

/Karli

With kind regards
-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg@slu.se
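Not something from Karli's message, but after writing those files a quick sanity check of the resulting stack looks roughly like this (standard Fedora tools; the output will obviously vary per host):

# service network restart
# cat /proc/net/bonding/bond0    # bonding mode, MII status and the enslaved NICs
# brctl show                     # bridges (ovirtmgmt, br0) and the VLAN interfaces attached to them
# ip -d link show bond0.2        # VLAN id and MTU of the tagged interface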

On 14/08/2012, at 3:59 PM, Karli Sjöberg wrote: <snip>
This way, you can have almost as many interfaces as you want (4096 VLANs) with only two physical NICs. I also gave an example of how you tune Jumbo Frames to be active on some interfaces and keep the regular frame size on the rest. Jumbo must only be active on interfaces that aren't routed, since the default routing MTU is 1500.
Oooohhh Aaaahhh... that's really nicely written out. :) Could you be convinced to make a wiki page for it? (just hoping :>) Regards and best wishes, Justin Clift -- Aeolus Community Manager http://www.aeolusproject.org
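As an aside to the jumbo-frame remark quoted above: a simple way to confirm that a 9000-byte MTU actually works end to end is a do-not-fragment ping sized just under the MTU (8972 = 9000 minus 20 bytes IP header minus 8 bytes ICMP header). The peer address is a placeholder:

# ping -M do -s 8972 -c 3 192.168.10.1    # fails with "message too long" if any hop is still at 1500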

On 14 Aug 2012, at 08:30, Justin Clift wrote:
<snip>
Could you be convinced to make a wiki page for it?

DIY? It's just copy-pasting the instructions basically, but I would be honored to have contributed :)

With kind regards
-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg@slu.se

Hi,

There was a bug about this issue before which is solved: https://bugzilla.redhat.com/show_bug.cgi?id=820989

Which version are you using? Is it built from source or installed from RPMs?

A simple test to verify whether this fix is included is to create the management network (ovirtmgmt) over a NIC with a static boot protocol (also provide IP, subnet mask and gateway). Then, via Setup Networks, drag another NIC on top of the management NIC. If it fails for the same reason (NETWORK_ATTACH_ILLEGAL_GATEWAY), your version doesn't include that fix. If this scenario does work, there is potentially a bug. Note that you shouldn't modify the management network bridge connectivity details (meaning use the same IP, subnet mask and gateway).

I think this topic is entitled to its own thread, as it was concealed in a long unrelated thread.

Thanks, Moti

On 08/13/2012 03:55 PM, Ricardo Esteves wrote:
Hi,
In the attached picture "net1.jpg" I have my initial network configuration.
Physical card (em1) with VLAN 10 (em1.10) bridged to ovirtmgmt, with IP 192.168.10.25 and default gw 192.168.10.254.
Now I want to bond the em1 card with the p1p1 card (attached picture net2.jpg).
But if I fill in the default gw it gives me this error: The default gateway should be set only on engine network.
If I don't fill in the default gw, when I click OK I lose connection to the server, and then after more or less 1 minute the server automatically reboots.
Anyone had this kind of problem?
Best regards, Ricardo Esteves.
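Regarding Moti's question above about which version is in use: a quick, non-authoritative way to check is simply to query the packages (package names as shipped in the oVirt 3.1 repos; skip this if the install was from source):

# on the engine machine
rpm -q ovirt-engine

# on the host
rpm -q vdsm vdsm-cli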

On 14/08/2012, at 5:14 PM, Moti Asayag wrote:
There was a bug about this issue before which is solved: https://bugzilla.redhat.com/show_bug.cgi?id=820989
This bug may be what Karli hit. It's almost definitely not what I hit though. For my setup, I already had the ovirtmgmt interface in place (static), and I was attempting to add a new logical network on another interface. I wasn't attempting to touch the ovirtmgmt logical network at all. Not sure if that helps. + Justin -- Aeolus Community Manager http://www.aeolusproject.org

On Thu, Aug 09, 2012 at 02:17:44PM +0100, Ricardo Esteves wrote:
OK, I fixed the SSL problem; my oVirt manager machine's iptables was blocking port 8443.
Neither issue is a good-enough reason for virsh to segfault. Could you provide more information so that someone can solve that bug?
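A minimal sketch of how a backtrace for that segfault could be captured, assuming debuginfo packages are available (which may not be the case on an ovirt-node image; on a regular Fedora 17 host this should work as written):

# debuginfo-install libvirt-client
# gdb --args virsh capabilities
(gdb) run
(gdb) bt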

Now it's difficult to reproduce the problem, unless I install the host again.

The problem seemed to be that when I configured the oVirt Engine options on the host, it failed to download the certificate because port 8443 was blocked on the oVirt Engine machine. virsh -r capabilities worked OK at that time, but virsh capabilities crashed with a segfault.

OK, I reinstalled the host to try to reproduce the problem. I blocked port 8443 again on the oVirt manager machine, then configured the oVirt Engine options on the host, and it gave me the error that it couldn't download the certificate. I ran virsh capabilities and it worked OK. Then I rebooted the host, and now when I run virsh capabilities it gives me a segfault. In dmesg I get this message:

virsh -r capabilities runs OK.
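For anyone chasing this, a hedged sketch of how the kernel-side segfault record can be pulled on a Fedora 17 host (the exact log location on an ovirt-node image may differ):

# dmesg | grep -i 'virsh\|segfault'
# grep -i segfault /var/log/messages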

And now, after a reboot of the node, I get this:

[root@blade4 ~]# virsh capabilities
Segmentation fault

-----Original Message-----
From: Ricardo Esteves <ricardo.m.esteves@gmail.com>
To: Itamar Heim <iheim@redhat.com>
Cc: users@ovirt.org
Subject: Re: [Users] oVIrt 3.1 - Xeon E5530 - Wrong cpu identification
Date: Wed, 01 Aug 2012 16:52:11 +0100

I didn't disable anything, but after installing the node, when I configure the "oVirt Engine" option it gives an error saying it can't download the certificate. I had this error with previous versions of the node as well, and they detected the CPU family OK.

This is the output after a fresh install of the node:

[root@blade4 ~]# vdsClient -s 0 getVdsCaps | grep -i flags
Traceback (most recent call last): File "/usr/share/vdsm/vdsClient.py", line 2275, in <module> File "/usr/share/vdsm/vdsClient.py", line 403, in do_getCap File "/usr/lib64/python2.7/xmlrpclib.py", line 1224, in __call__ File "/usr/lib64/python2.7/xmlrpclib.py", line 1578, in __request File "/usr/lib64/python2.7/xmlrpclib.py", line 1264, in request File "/usr/lib64/python2.7/xmlrpclib.py", line 1292, in single_request File "/usr/lib64/python2.7/xmlrpclib.py", line 1439, in send_content File "/usr/lib64/python2.7/httplib.py", line 954, in endheaders File "/usr/lib64/python2.7/httplib.py", line 814, in _send_output File "/usr/lib64/python2.7/httplib.py", line 776, in send File "/usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py", line 91, in connect File "/usr/lib64/python2.7/socket.py", line 553, in create_connection gaierror: [Errno -2] Name or service not known

[root@blade4 ~]# virsh capabilities
<capabilities>
<host> <uuid>35303737-3830-435a-4a30-30333035455a</uuid> <cpu> <arch>x86_64</arch> <model>Nehalem</model> <vendor>Intel</vendor> <topology sockets='1' cores='4' threads='2'/> <feature name='rdtscp'/> <feature name='dca'/> <feature name='pdcm'/> <feature name='xtpr'/> <feature name='tm2'/> <feature name='est'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='dtes64'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='acpi'/> <feature name='ds'/> <feature name='vme'/> </cpu> <power_management/> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> </uri_transports> </migration_features> <topology> <cells num='1'> <cell id='0'> <cpus num='8'> <cpu id='0'/> <cpu id='1'/> <cpu id='2'/> <cpu id='3'/> <cpu id='4'/> <cpu id='5'/> <cpu id='6'/> <cpu id='7'/> </cpus> </cell> </cells> </topology> <secmodel> <model>selinux</model> <doi>0</doi> </secmodel> </host>
<guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/bin/qemu-system-x86_64</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> <domain type='qemu'> </domain> <domain type='kvm'> <emulator>/usr/bin/qemu-kvm</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> </domain> </arch> <features> <cpuselection/> <deviceboot/> <pae/> <nonpae/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest>
<guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/bin/qemu-system-x86_64</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> <domain type='qemu'> </domain> <domain type='kvm'> <emulator>/usr/bin/qemu-kvm</emulator> <machine>pc-0.15</machine> <machine>pc-1.0</machine> <machine canonical='pc-1.0'>pc</machine> <machine>pc-0.14</machine> <machine>pc-0.13</machine> <machine>pc-0.12</machine> <machine>pc-0.11</machine> <machine>pc-0.10</machine> <machine>isapc</machine> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest>
</capabilities>

When I add the host to oVirt I get this message:

Host blade4.vi.pt moved to Non-Operational state as host does not meet the cluster's minimum CPU level. Missing CPU features : model_Nehalem

Best regards, Ricardo Esteves.

-----Original Message-----
From: Itamar Heim <iheim@redhat.com>
To: Ricardo Esteves <ricardo.m.esteves@gmail.com>
Cc: users@ovirt.org
Subject: Re: [Users] oVIrt 3.1 - Xeon E5530 - Wrong cpu identification
Date: Wed, 01 Aug 2012 15:43:22 +0300

On 08/01/2012 02:28 PM, Ricardo Esteves wrote:
[root@blade4 ~]# vdsClient -s 0 getVdsCaps | grep -i flags Traceback (most recent call last): File "/usr/share/vdsm/vdsClient.py", line 2275, in <module> File "/usr/share/vdsm/vdsClient.py", line 403, in do_getCap File "/usr/lib64/python2.7/xmlrpclib.py", line 1224, in __call__ File "/usr/lib64/python2.7/xmlrpclib.py", line 1578, in __request File "/usr/lib64/python2.7/xmlrpclib.py", line 1264, in request File "/usr/lib64/python2.7/xmlrpclib.py", line 1292, in single_request File "/usr/lib64/python2.7/xmlrpclib.py", line 1439, in send_content File "/usr/lib64/python2.7/httplib.py", line 954, in endheaders File "/usr/lib64/python2.7/httplib.py", line 814, in _send_output File "/usr/lib64/python2.7/httplib.py", line 776, in send File "/usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py", line 98, in connect File "/usr/lib64/python2.7/ssl.py", line 381, in wrap_socket File "/usr/lib64/python2.7/ssl.py", line 141, in __init__ SSLError: [Errno 0] _ssl.c:340: error:00000000:lib(0):func(0):reason(0)
did you somehow disable ssl? is vdsm running? what's its status in engine? what does this return: virsh capabilities | grep model
-----Original Message-----
From: Itamar Heim <iheim@redhat.com>
To: Ricardo Esteves <ricardo.m.esteves@gmail.com>
Cc: users@ovirt.org
Subject: Re: [Users] oVIrt 3.1 - Xeon E5530 - Wrong cpu identification
Date: Wed, 01 Aug 2012 13:59:27 +0300
On 08/01/2012 12:57 PM, Ricardo Esteves wrote:
Hi,
With the latest version of oVirt my hosts' CPU is not correctly identified: my hosts have an Intel Xeon E5530 (Nehalem family), but it is being identified as the Conroe family.
My installed versions:
ovirt-engine-3.1.0-2.fc17.noarch ovirt-node-iso-2.5.0-2.0.fc17.iso
Best regards, Ricardo Esteves.
please share: vdsClient -s 0 getVdsCaps | grep -i flags
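Since the engine's complaint is literally "Missing CPU features : model_Nehalem", it can also help to look at the model_* tokens vdsm appends to its cpuFlags line; that is what the engine compares against the cluster CPU level. A rough sketch (assuming the flags field is comma-separated, as it normally is):

# vdsClient -s 0 getVdsCaps | grep -i flags | tr ',' '\n' | grep model_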

Oh, sorry to hear that, that doesn't sound good. What changed on the server with the reboot? I wonder how this is possible.

Martin

On 08/01/2012 06:32 PM, Ricardo Esteves wrote:
And now, after a reboot of the node, I get this:
[root@blade4 ~]# virsh capabilities Segmentation fault
<snip>
participants (14)
- Alan Johnson
- Ayal Baron
- Dan Kenigsberg
- Fabian Deutsch
- Itamar Heim
- Johan Kragsterman
- Justin Clift
- Karli Sjöberg
- Martin Kletzander
- Meni Yakvoe
- Moti Asayag
- Nicholas Kesick
- Ricardo Esteves
- Ricardo Esteves