[Users] oVirt 3.2 and Solaris 10 guests

Hi,

I have a strange network issue with Solaris 10 x86 guests on oVirt 3.2 (beta) with a CentOS 6.3 hypervisor (using dreyou's repository for VDSM packages).

My host configuration is the following:
Operating system: Other
Network interface type: rtl8139
Network Name: ovirtmgmt
Disk interface: IDE

The disk is working fine and the network interface is detected by the Solaris 10 guest - I can see an rtls0 interface with "ifconfig -a", and the Solaris installer also detects this interface.

But when I try to send packets over this interface they never reach their target - e.g. when pinging the gateway, "arp -an" doesn't show its MAC address. On the hypervisor I can see the vnet(x) interface is on the ovirtmgmt bridge. Network traffic is working fine for all other VMs (RHEL 6 and Fedora 18) on the ovirtmgmt bridge.

Btw, when creating a Solaris 10 guest on RHEV 3.1 with a RHEL 6.3 hypervisor, exactly the same issue occurs - the network interface is detected, but no traffic passes through it.

On a plain KVM host (RHEL 6.3) I can create a Solaris 10 guest with virt-manager using the rtl8139 virtual network interface device model on a bridge (the network setup is nearly the same as for oVirt and RHEV). On this RHEL 6.3 KVM host, networking works fine for Solaris guests.

In the process list (ps -ef) the settings for oVirt and RHEL KVM seem to be nearly the same (different fds, and the option bootindex=3 is missing on KVM):

oVirt 3.2:
-netdev tap,fd=25,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=00:1a:4a:00:64:9e,bus=pci.0,addr=0x3,bootindex=3

RHEL 6.3 KVM:
-netdev tap,fd=36,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:8e:3a:57,bus=pci.0,addr=0x3

The KVM version is exactly the same on the CentOS oVirt hypervisor as on the RHEL KVM host:
# rpm -qa | grep kvm
qemu-kvm-rhev-tools-0.12.1.2-2.295.el6.10.x86_64
qemu-kvm-rhev-0.12.1.2-2.295.el6.10.x86_64

Do you have any clue why Solaris networking isn't working on oVirt, but works on plain RHEL KVM?

I didn't try Fedora 18 or oVirt Node as a hypervisor yet - I don't expect there to be a difference.

Thanks a lot for your feedback.

--
Best Regards,
René Koch

I can confirm the same issue with Solaris 11 and oVirt 3.1. Everything seems OK - my network card is configured correctly (static IP) - but it cannot communicate with anything.

On Thu, Feb 14, 2013 at 10:46 AM, René Koch (ovido) <r.koch@ovido.at> wrote:
Hi,
I have a strange network issue with Solaris 10 x86 guests on oVirt 3.2 (beta) with a CentOS 6.3 hypervisor (using dreyou's repository for VDSM packages).
My host configuration is the following:
Operating system: Other
Network interface type: rtl8139
Network Name: ovirtmgmt
Disk interface: IDE
The disk is working fine and the network interface is detected by the Solaris 10 guest - I can see an rtls0 interface with "ifconfig -a", and the Solaris installer also detects this interface.
But when I try to send packets over this interface they never reach their target - e.g. when pinging the gateway, "arp -an" doesn't show its MAC address. On the hypervisor I can see the vnet(x) interface is on the ovirtmgmt bridge. Network traffic is working fine for all other VMs (RHEL 6 and Fedora 18) on the ovirtmgmt bridge.
Btw, when creating a Solaris 10 guest on RHEV 3.1 with a RHEL 6.3 hypervisor, exactly the same issue occurs - the network interface is detected, but no traffic passes through it.
On a plain KVM host (RHEL 6.3) I can create a Solaris 10 guest with virt-manager using the rtl8139 virtual network interface device model on a bridge (the network setup is nearly the same as for oVirt and RHEV). On this RHEL 6.3 KVM host, networking works fine for Solaris guests.
In the process list (ps -ef) the settings for oVirt and RHEL KVM seem to be nearly the same (different fds, and the option bootindex=3 is missing on KVM):
oVirt 3.2:
-netdev tap,fd=25,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=00:1a:4a:00:64:9e,bus=pci.0,addr=0x3,bootindex=3
RHEL 6.3 KVM:
-netdev tap,fd=36,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:8e:3a:57,bus=pci.0,addr=0x3
The KVM version is exactly the same on the CentOS oVirt hypervisor as on the RHEL KVM host:
# rpm -qa | grep kvm
qemu-kvm-rhev-tools-0.12.1.2-2.295.el6.10.x86_64
qemu-kvm-rhev-0.12.1.2-2.295.el6.10.x86_64
Do you have any clue why Solaris networking isn't working on oVirt, but works on plain RHEL KVM?
I didn't try Fedora 18 or oVirt Node as a hypervisor yet - I don't expect there to be a difference.
Thanks a lot for your feedback.
-- Best Regards, René Koch
-- Jean-Francois Saucier (djf_jeff) GPG key : 0xA9E6E953

On Thu, 14 Feb 2013 10:49:13 -0500 Jean-Francois Saucier <jsaucier@gmail.com> wrote:
> oVirt 3.2: -netdev tap,fd=25,id=hostnet0 -device
> rtl8139,netdev=hostnet0,id=net0,mac=00:1a:4a:00:64:9e,bus=pci.0,addr=0x3,bootindex=3
  ^^^^^^^

realtek, really? Try e1000, which is usually much more reliable, or a virtio interface. I personally always had a terrible experience with qemu-kvm's emulated rtl8139; e1000 was OK.

On Thu, Feb 14, 2013 at 11:21 AM, Jiri Belka <jbelka@redhat.com> wrote:
> On Thu, 14 Feb 2013 10:49:13 -0500 Jean-Francois Saucier <jsaucier@gmail.com> wrote:
> > oVirt 3.2: -netdev tap,fd=25,id=hostnet0 -device
> > rtl8139,netdev=hostnet0,id=net0,mac=00:1a:4a:00:64:9e,bus=pci.0,addr=0x3,bootindex=3
>     ^^^^^^^
> realtek, really? Try e1000, which is usually much more reliable, or a virtio interface. I personally always had a terrible experience with qemu-kvm's emulated rtl8139; e1000 was OK.

For me, it's the same if I choose e1000 (which I chose the first time, before trying rtl8139).

--
Jean-Francois Saucier (djf_jeff)
GPG key: 0xA9E6E953

On Thu, 2013-02-14 at 11:22 -0500, Jean-Francois Saucier wrote:
> On Thu, Feb 14, 2013 at 11:21 AM, Jiri Belka <jbelka@redhat.com> wrote:
> > On Thu, 14 Feb 2013 10:49:13 -0500 Jean-Francois Saucier <jsaucier@gmail.com> wrote:
> > > oVirt 3.2: -netdev tap,fd=25,id=hostnet0 -device
> > > rtl8139,netdev=hostnet0,id=net0,mac=00:1a:4a:00:64:9e,bus=pci.0,addr=0x3,bootindex=3
> >     ^^^^^^^
> > realtek, really? Try e1000, which is usually much more reliable, or a virtio interface. I personally always had a terrible experience with qemu-kvm's emulated rtl8139; e1000 was OK.
> For me, it's the same if I choose e1000 (which I chose the first time, before trying rtl8139).

Same for me - e1000 isn't working with oVirt 3.2 and Solaris 10 either.

On Thu, Feb 14, 2013 at 05:25:47PM +0100, René Koch (ovido) wrote:
> On Thu, 2013-02-14 at 11:22 -0500, Jean-Francois Saucier wrote:
> > On Thu, Feb 14, 2013 at 11:21 AM, Jiri Belka <jbelka@redhat.com> wrote:
> > > On Thu, 14 Feb 2013 10:49:13 -0500 Jean-Francois Saucier <jsaucier@gmail.com> wrote:
> > > > oVirt 3.2: -netdev tap,fd=25,id=hostnet0 -device
> > > > rtl8139,netdev=hostnet0,id=net0,mac=00:1a:4a:00:64:9e,bus=pci.0,addr=0x3,bootindex=3
> > >     ^^^^^^^
> > > realtek, really? Try e1000, which is usually much more reliable, or a virtio interface. I personally always had a terrible experience with qemu-kvm's emulated rtl8139; e1000 was OK.
> > For me, it's the same if I choose e1000 (which I chose the first time, before trying rtl8139).
> Same for me - e1000 isn't working with oVirt 3.2 and Solaris 10 either.
I've heard rumours that it might be related to the fact that oVirt starts VMs with ACPI enabled by default (there's a way in Engine to turn this off). If this is not the case, please do a more extensive bisection of the qemu command line. You have one "bad" cmdline; start trimming it until it becomes "good".
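As a sketch of the trimming Dan describes, the bisection could be semi-automated - this is a hypothetical helper, not part of oVirt or vdsm; grouping arguments by their leading dash is only a heuristic, and each printed variant still has to be rerun by hand:

#!/usr/bin/python
# Hypothetical bisection helper: reads the "bad" qemu command line on
# stdin, as copied from `ps -ef`, groups each -option with its
# arguments, and prints one variant per line with a single option
# group removed.
import shlex
import sys

args = shlex.split(sys.stdin.read())
binary, rest = args[0], args[1:]

# group "-option value ..." sequences together
groups = []
for a in rest:
    if a.startswith('-') or not groups:
        groups.append([a])
    else:
        groups[-1].append(a)

# print one trimmed command line per option group
for i in range(len(groups)):
    kept = groups[:i] + groups[i + 1:]
    print(' '.join([binary] + [w for g in kept for w in g]))

Each printed line drops one option group; rerunning them in turn (outside of oVirt, with the tap fd replaced by a normal ifname=... tap, since fd=25 only exists inside the vdsm process) should narrow down which option breaks Solaris networking.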

Thanks a lot for your feedback.

I'll try to play around with qemu options next week and try to find out which setting causes this issue. What's the best way of changing these options - using hook scripts, I guess...

Regards,
René

-----Original message-----
From: Dan Kenigsberg <danken@redhat.com>
Sent: Wednesday 20th February 2013 11:52
To: René Koch <r.koch@ovido.at>
Cc: Jean-Francois Saucier <jsaucier@gmail.com>; ovirt-users <users@ovirt.org>
Subject: Re: [Users] oVirt 3.2 and Solaris 10 guests
> On Thu, Feb 14, 2013 at 05:25:47PM +0100, René Koch (ovido) wrote:
> > On Thu, 2013-02-14 at 11:22 -0500, Jean-Francois Saucier wrote:
> > > On Thu, Feb 14, 2013 at 11:21 AM, Jiri Belka <jbelka@redhat.com> wrote:
> > > > On Thu, 14 Feb 2013 10:49:13 -0500 Jean-Francois Saucier <jsaucier@gmail.com> wrote:
> > > > > oVirt 3.2: -netdev tap,fd=25,id=hostnet0 -device
> > > > > rtl8139,netdev=hostnet0,id=net0,mac=00:1a:4a:00:64:9e,bus=pci.0,addr=0x3,bootindex=3
> > > >     ^^^^^^^
> > > > realtek, really? Try e1000, which is usually much more reliable, or a virtio interface. I personally always had a terrible experience with qemu-kvm's emulated rtl8139; e1000 was OK.
> > > For me, it's the same if I choose e1000 (which I chose the first time, before trying rtl8139).
> > Same for me - e1000 isn't working with oVirt 3.2 and Solaris 10 either.
> I've heard rumours that it might be related to the fact that oVirt starts VMs with ACPI enabled by default (there's a way in Engine to turn this off).
> If this is not the case, please do a more extensive bisection of the qemu command line. You have one "bad" cmdline; start trimming it until it becomes "good".

On Sun, Feb 24, 2013 at 07:18:46PM +0100, René Koch wrote:
> Thanks a lot for your feedback.
> I'll try to play around with qemu options next week and try to find out which setting causes this issue. What's the best way of changing these options - using hook scripts, I guess...
I'm told that unsetting this option is not possible in Engine. So yeah, you are left with either a hook script or a temporary hack of

diff --git a/vdsm/clientIF.py b/vdsm/clientIF.py
index 38aa0d7..b489e0e 100644
--- a/vdsm/clientIF.py
+++ b/vdsm/clientIF.py
@@ -341,6 +341,7 @@ class clientIF:
         return res['status']['code']
 
     def createVm(self, vmParams):
+        vmParams['acpiEnable'] = False
         self.vmContainerLock.acquire()
         self.log.info("vmContainerLock acquired by vm %s",
                       vmParams['vmId'])
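For the hook-script route René mentions, something along these lines should do the same job per-host without patching vdsm - a minimal sketch, assuming vdsm's standard hooking module and before_vm_start hook directory; the file name is arbitrary and the script is untested:

#!/usr/bin/python
# /usr/libexec/vdsm/hooks/before_vm_start/50_disable_acpi
# Minimal sketch: strip the <acpi/> feature from the libvirt domain
# XML that vdsm is about to hand to libvirt, mimicking acpiEnable=False.
import hooking

domxml = hooking.read_domxml()  # an xml.dom.minidom Document
for features in domxml.getElementsByTagName('features'):
    for acpi in features.getElementsByTagName('acpi'):
        features.removeChild(acpi)
hooking.write_domxml(domxml)

Note that, as written, this would run for every VM on the host; a real hook would probably gate on a custom property so only the Solaris guests lose ACPI.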

Hi All,

I have this same issue on Pfsense (FreeBSD) and even a fresh Ubuntu installation.

I have installed Ubuntu to see if it was really an issue with the NICs or the routing on Pfsense. On 3.1 this was working, I'm sure.

I have one Ubuntu box running as a VM that is connected to the management network, and I can access it. When I attach my newly installed VM to it, it cannot ping anything in that network - strange.

I have also seen that when I attach two e1000 network cards to my Pfsense box, only one is up. The Realtek ones are both up. With the e1000 I can only ping the gateway from the Pfsense install and nothing more.

What is going on here? I've been busy with this for a long time and cannot fix it.

I hope someone can help me out here.

Cheers,

Matt

On 24 Feb 2013 22:21:20, Dan Kenigsberg wrote:
> On Sun, Feb 24, 2013 at 07:18:46PM +0100, René Koch wrote:
> > Thanks a lot for your feedback.
> > I'll try to play around with qemu options next week and try to find out which setting causes this issue. What's the best way of changing these options - using hook scripts, I guess...
> I'm told that unsetting this option is not possible in Engine. So yeah, you are left with either a hook script or a temporary hack of
>
> diff --git a/vdsm/clientIF.py b/vdsm/clientIF.py
> index 38aa0d7..b489e0e 100644
> --- a/vdsm/clientIF.py
> +++ b/vdsm/clientIF.py
> @@ -341,6 +341,7 @@ class clientIF:
>          return res['status']['code']
>
>      def createVm(self, vmParams):
> +        vmParams['acpiEnable'] = False
>          self.vmContainerLock.acquire()
>          self.log.info("vmContainerLock acquired by vm %s",
>                        vmParams['vmId'])

On Wed, Feb 27, 2013 at 11:16:53AM +0000, YamakasY wrote:
> Hi All,
> I have this same issue on Pfsense (FreeBSD) and even a fresh Ubuntu installation.
> I have installed Ubuntu to see if it was really an issue with the NICs or the routing on Pfsense. On 3.1 this was working, I'm sure.
> I have one Ubuntu box running as a VM that is connected to the management network, and I can access it. When I attach my newly installed VM to it, it cannot ping anything in that network - strange.
> I have also seen that when I attach two e1000 network cards to my Pfsense box, only one is up. The Realtek ones are both up. With the e1000 I can only ping the gateway from the Pfsense install and nothing more.
> What is going on here? I've been busy with this for a long time and cannot fix it.
> I hope someone can help me out here.
Have you tried my wild guess of forcing acpiEnable=False?
> Cheers,
> Matt
> On 24 Feb 2013 22:21:20, Dan Kenigsberg wrote:
> > On Sun, Feb 24, 2013 at 07:18:46PM +0100, René Koch wrote:
> > > Thanks a lot for your feedback.
> > > I'll try to play around with qemu options next week and try to find out which setting causes this issue. What's the best way of changing these options - using hook scripts, I guess...
> > I'm told that unsetting this option is not possible in Engine. So yeah, you are left with either a hook script or a temporary hack of
> >
> > diff --git a/vdsm/clientIF.py b/vdsm/clientIF.py
> > index 38aa0d7..b489e0e 100644
> > --- a/vdsm/clientIF.py
> > +++ b/vdsm/clientIF.py
> > @@ -341,6 +341,7 @@ class clientIF:
> >          return res['status']['code']
> >
> >      def createVm(self, vmParams):
> > +        vmParams['acpiEnable'] = False
> >          self.vmContainerLock.acquire()
> >          self.log.info("vmContainerLock acquired by vm %s",
> >                        vmParams['vmId'])
participants (6)
- Dan Kenigsberg
- Jean-Francois Saucier
- Jiri Belka
- René Koch
- René Koch (ovido)
- YamakasY