
On 11/15/2012 04:38 PM, Alan Johnson wrote:
On Thu, Nov 15, 2012 at 1:11 AM, Igor Lvovsky <ilvovsky@redhat.com> wrote:
Hi Alan,
If I understand you correctly, you are trying to add a new VLANed VM network (sandbox) to interface em1, which already has another VM network (ovirtmgmt) attached.
That's a bit strange, but it might be a bug that has already been fixed. Currently (on the nightly build, for example) the UI blocks attaching a VLANed network to a nic that already has a non-VLANed network attached.
If so, this operation is not permitted. You can't attach a VLANed VM network and a non-VLANed VM network to the same interface.
Good to know. I'll start working on another solution that might work once I get around the blocking bug (more on this in response to Roy shortly). Is there a way to convert the ovirtmgmt network to VLAN'd?
In order to configure ovirtmgmt over a vlan, do the following:
1. Create a new Data-Center.
2. In the Data-Center, edit ovirtmgmt under the 'Logical Networks' sub-tab and set the required vlan-id for it.
3. Manually configure ovirtmgmt on the host as a vlan with the same vlan-id.
4. Add a new Cluster to the Data-Center and assign the 'ovirtmgmt' network to it.
5. Add the host with the manually configured 'ovirtmgmt' network to it.
At this point you should be able to define more Logical Networks with vlans on the Data-Center, assign them to the cluster, and attach them to the host using 'setup networks', on the same nic on which 'ovirtmgmt' is defined.
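For reference, here is a minimal sketch of what step 3 might look like on an EL6 host using the standard network-scripts. The VLAN ID (100), the nic name (em1), and the IP addressing are assumptions for illustration only; substitute your own values:

    # /etc/sysconfig/network-scripts/ifcfg-em1
    # Physical nic: carries tagged traffic only, no IP address here.
    DEVICE=em1
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-em1.100
    # VLAN subinterface (assumed vlan-id 100), enslaved to the ovirtmgmt bridge.
    DEVICE=em1.100
    VLAN=yes
    ONBOOT=yes
    BRIDGE=ovirtmgmt

    # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
    # The management bridge carries the host's IP address (example addressing).
    DEVICE=ovirtmgmt
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1

Then 'service network restart' applies the change. Make sure the vlan-id here matches the one set in step 2, or the host will lose connectivity to the engine.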
Also, while I have your attention: I expect I will have to enable each VLAN on each host's port on the connected switch, which means I have to set each port carrying multiple VLANs to trunk mode. Is that right?
Seems right. If you have more than a single nic on the host, you could consider defining a bond and attaching the logical networks (since you mentioned vlans) on top of it. It still requires configuring ovirtmgmt manually on the host as a vlan on top of that bond.
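For the switch side, a trunk-port configuration might look like the following on a Cisco IOS-style switch. This is a sketch only: the port name and the VLAN IDs 100 and 200 are assumptions, and other vendors use different syntax:

    interface GigabitEthernet0/1
     description cloudhost01 em1
     ! Some platforms also require: switchport trunk encapsulation dot1q
     switchport mode trunk
     ! Carry only the tagged VLANs the host actually needs.
     switchport trunk allowed vlan 100,200

The same trunk configuration applies whether the host side is a single nic or a bond; with a bond spanning two switch ports, configure both ports identically (and set up a matching port-channel if you use an LACP bond mode).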
To be sure that this is the case, I need to know your vdsm version, and the vdsm log would be good as well.
There is nothing in the log with that time stamp (as Roy has observed, it is not getting that far), but here are the versions just FYI:

    [root@cloudhost01 ~]# rpm -qa | fgrep vdsm
    vdsm-python-4.10.0-0.44.14.el6.x86_64
    vdsm-xmlrpc-4.10.0-0.44.14.el6.noarch
    vdsm-4.10.0-0.44.14.el6.x86_64
    vdsm-cli-4.10.0-0.44.14.el6.noarch
    vdsm-gluster-4.10.0-0.44.14.el6.noarch
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users