Hi David),

You are looking at the net-dumpxml output, and it's OK; this is how it should look. In my setup I see the same, for example -->
# virsh -r net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 ;vdsmdummy;          active     no            no
 direct-pool          active     yes           yes
 vdsm-ovirtmgmt       active     yes           yes
 vdsm-vmfex_net       active     yes           yes

# brctl show
bridge name     bridge id               STP enabled     interfaces
;vdsmdummy;             8000.000000000000       no
ovirtmgmt               8000.0025b50a0002       no              enp6s0f0
vmfex_net               8000.0025b50b0002       no              enp7s0f0

# virsh -r net-dumpxml vdsm-vmfex_net
<network>
  <name>vdsm-vmfex_net</name>
  <uuid>bbb7615a-4f7b-4497-a899-58b11f25d5ae</uuid>
  <forward mode='bridge'/>
  <bridge name='vmfex_net'/>
</network>

- What you should be looking at is the dumpxml of the domain (the VM while it's running on the host); check the interface type there -->

# virsh -r list
 Id    Name                           State
----------------------------------------------------
 2     v7                             running

# virsh -r dumpxml 2

 <interface type='direct'>
      <mac address='00:00:00:00:00:29'/>
      <source network='direct-pool' dev='enp6s0f1' mode='passthrough'/>
      <virtualport type='802.1Qbh'>
        <parameters profileid='10GbE-VMFEX-B'/>
      </virtualport>
      <target dev='macvtap0'/>
      <model type='virtio'/>
      <link state='up'/>
      <boot order='2'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

- The BZ I reported, https://bugzilla.redhat.com/show_bug.cgi?id=1308839,
will be fixed in the next version (not the one you have); it should prevent you from running the VM with a vmfex profile network.
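Until that fix lands, you can spot the offending combination yourself in the generated domain XML (a 'direct' interface that also carries a network filter, which is what libvirt rejects below). A rough sketch; the sample domain XML and the filter name are illustrative, not taken from your host:

```python
import xml.etree.ElementTree as ET

# Illustrative domain XML snippet: a 'direct' (macvtap) interface that
# also carries a <filterref> -- the combination libvirt refuses to start.
DOMXML = """
<domain>
  <devices>
    <interface type='direct'>
      <source network='direct-pool' dev='enp6s0f1' mode='passthrough'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
    </interface>
  </devices>
</domain>
"""

def bad_interfaces(domxml):
    """Return source devs of direct-type interfaces that carry a filterref."""
    root = ET.fromstring(domxml)
    bad = []
    for iface in root.iter('interface'):
        if iface.get('type') == 'direct' and iface.find('filterref') is not None:
            src = iface.find('source')
            bad.append(src.get('dev') if src is not None else '?')
    return bad

print(bad_interfaces(DOMXML))
```

Running this against the XML that `virsh -r dumpxml <id>` prints would tell you whether a VM will trip the "filterref is not supported for network interfaces of type direct" error before you even try to start it.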


Cheers)


On Thu, Mar 10, 2016 at 12:44 PM, David LeVene <David.LeVene@blackboard.com> wrote:

Hi Michael,

 

Thanks for the additional steps and troubleshooting. Things are looking healthier, but the virsh config has not changed. I can see a direct-pool configuration, which looks correct.

 

Did it fail to clean up the vdsm-SRIOV interface and associated bridge, or should they be present as well? I rebooted the host and it looks like it sets them all up.

# virsh -r net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 ;vdsmdummy;          active     no            no
 direct-pool          active     yes           yes
 vdsm-ovirtmgmt       active     yes           yes
 vdsm-SRIOV           active     yes           yes

 

$ brctl show
bridge name     bridge id               STP enabled     interfaces
;vdsmdummy;             8000.000000000000       no
SRIOV           8000.0025b5000b0f       no              enp7s0f0
ovirtmgmt               8000.0025b5000b4f       no              enp6s0

 

# virsh -r net-dumpxml vdsm-SRIOV
<network>
  <name>vdsm-SRIOV</name>
  <uuid>46b2655c-ce6f-401d-acaf-f5ca9cdb1627</uuid>
  <forward mode='bridge'/>
  <bridge name='SRIOV'/>
</network>

 

# rpm -qa | grep -i vmfex
vdsm-hook-vmfex-dev-4.17.23-1.el7.noarch

 

I found your bug report; is this related as well? It should be resolved in the version I’m using: https://bugzilla.redhat.com/show_bug.cgi?id=1308839

 

 

Thread-196::ERROR::2016-03-10 20:47:26,566::vm::759::virt.vm::(_startUnderlyingVm) vmId=`2a03a747-5d43-4ba9-a95b-6903fb41cacc`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 703, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1941, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: unsupported configuration: filterref is not supported for network interfaces of type direct
Thread-196::INFO::2016-03-10 20:47:26,567::vm::1330::virt.vm::(setDownStatus) vmId=`2a03a747-5d43-4ba9-a95b-6903fb41cacc`::Changed state to Down: unsupported configuration: filterref is not supported for network interfaces of type direct (code=1)
Thread-196::DEBUG::2016-03-10 20:47:26,568::vmchannels::229::vds::(unregister) Delete fileno 35 from listener.
Thread-196::DEBUG::2016-03-10 20:47:26,568::vmchannels::59::vds::(_unregister_fd) Failed to unregister FD from epoll (ENOENT): 35
Thread-196::DEBUG::2016-03-10 20:47:26,569::__init__::206::jsonrpc.Notification::(emit) Sending event {"params": {"2a03a747-5d43-4ba9-a95b-6903fb41cacc": {"status": "Down", "timeOffset": "0", "exitReason": 1, "exitMessage": "unsupported configuration: filterref is not supported for network interfaces of type direct", "exitCode": 1}, "notify_time": 4406685340}, "jsonrpc": "2.0", "method": "|virt|VM_status|2a03a747-5d43-4ba9-a95b-6903fb41cacc"}
VM Channels Listener::DEBUG::2016-03-10 20:47:26,780::vmchannels::135::vds::(_do_del_channels) fileno 35 was removed from listener.
periodic/0::WARNING::2016-03-10 20:47:27,877::periodic::258::virt.periodic.VmDispatcher::(__call__) could not run <class 'virt.periodic.DriveWatermarkMonitor'> on [u'2a03a747-5d43-4ba9-a95b-6903fb41cacc']
mailbox.SPMMonitor::DEBUG::2016-03-10 20:47:27,900::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) /usr/bin/taskset --cpu-list 0-39 dd if=/rhev/data-center/00000001-0001-0001-0001-00000000017a/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)
mailbox.SPMMonitor::DEBUG::2016-03-10 20:47:27,919::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00310821 s, 329 MB/s\n'; <rc> = 0
jsonrpc.Executor/7::DEBUG::2016-03-10 20:47:28,411::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.destroy' in bridge with {u'vmID': u'2a03a747-5d43-4ba9-a95b-6903fb41cacc'}
jsonrpc.Executor/7::INFO::2016-03-10 20:47:28,412::API::341::vds::(destroy) vmContainerLock acquired by vm 2a03a747-5d43-4ba9-a95b-6903fb41cacc
jsonrpc.Executor/7::DEBUG::2016-03-10 20:47:28,412::vm::3885::virt.vm::(destroy) vmId=`2a03a747-5d43-4ba9-a95b-6903fb41cacc`::destroy Called
jsonrpc.Executor/7::INFO::2016-03-10 20:47:28,412::vm::3815::virt.vm::(releaseVm) vmId=`2a03a747-5d43-4ba9-a95b-6903fb41cacc`::Release VM resources

 

 

Regards

David

 

From: Michael Burman [mailto:mburman@redhat.com]
Sent: Thursday, March 10, 2016 19:07


To: David LeVene <David.LeVene@blackboard.com>
Cc: users@ovirt.org; Dan Kenigsberg <danken@redhat.com>
Subject: Re: [ovirt-users] Configuring the SRIOV virsh device

 

OK, let's try a few things.

- I see you are using the latest vdsm 3.6 version, which ships vdsm-hook-vmfex-dev-4.17.23-1.el7.noarch by default. But I also see that you have the old hook installed, vdsm-hook-vmfex-4.17.23-0.el7.centos.noarch. Please remove it from the server and re-install the host in oVirt (maintenance + re-install), to make sure there are no conflicts with the old hook and you are using the new one.


- In the VM interface profile dialog, please uncheck the 'passthrough' property (first remove the SRIOV network from the VM's vNIC, and add it back after editing the profile). The 'passthrough' property shouldn't be checked when using the vmfex hook.

 

- Last thing, though not a must: go to the 'Networks' main tab and choose your network 'SRIOV', then go to the 'Clusters' sub tab and press the 'Manage Network' button --> uncheck the 'Required' checkbox for your cluster (because your network is attached to only 2 of the 3 servers in your cluster, it is in a Down state, colored red). Once you uncheck 'Required' for the cluster, the network will be shown as UP, colored green.

I believe it will work for you now; please let me know.

Cheers )

 

On Thu, Mar 10, 2016 at 1:51 AM, David LeVene <David.LeVene@blackboard.com> wrote:

Hi Michael,

 

I’m going to include some screenshots, as I’ve got it all set up exactly how you mention below, but the config on the hypervisor is still a bridged config.

 

All the UCS port profiles are correct as we use this config on standalone KVM/QEMU hosts.

 

See inline for pics at the appropriate step.

 

Cheers

David

 

From: Michael Burman [mailto:mburman@redhat.com]
Sent: Wednesday, March 09, 2016 19:47
To: David LeVene <David.LeVene@blackboard.com>
Cc: users@ovirt.org; Dan Kenigsberg <danken@redhat.com>
Subject: Re: [ovirt-users] Configuring the SRIOV virsh device

 

Hi again David)

In order to achieve such an XML ^^ (the one you are describing), you should first of all prepare a proper 'Port Profile' on your UCS Manager side.

- From your example, you should have 2 Port Profiles ([1], [2]) configured on your Cisco side (see http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/vm_fex/kvm/gui/config_guide/2-1/b_GUI_KVM_VM-FEX_UCSM_Configuration_Guide_2_1/b_GUI_KVM_VM-FEX_UCSM_Configuration_Guide_2_1_chapter_010.html)

[1]  profile-ame1-test1
[2] profile-ame1-prep1

- These port profiles should be associated with your service profiles (blades).
 

- If you have those port profiles on your UCS side and they are well configured, then follow the steps I suggested in the previous e-mail -->

1) Run for example:
engine-config -s CustomDeviceProperties="{type=interface;prop={ifacemacspoof=^(true|false)$;queues=[1-9][0-9]*;vmfex=^[a-zA-Z0-9_.-]{2,32}$;SecurityGroups=^(?:(?:[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}, *)*[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}|)$}}" --cver=3.5

- Restart your ovirt-engine service.
- With --cver=3.5 you specify the cluster version level. ^^
- Note that this overwrites the current value, so if you don't want to lose your existing attributes you have to include them all as well (as in my example above). ^^

 

We are using 3.6.3, with the patch from https://gerrit.ovirt.org/#/c/54237 applied.

 

# engine-config -g CustomDeviceProperties
CustomDeviceProperties:  version: 3.0
CustomDeviceProperties:  version: 3.1
CustomDeviceProperties:  version: 3.2
CustomDeviceProperties:  version: 3.3
CustomDeviceProperties: {type=interface;prop={SecurityGroups=^(?:(?:[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}, *)*[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}|)$}} version: 3.4
CustomDeviceProperties: {type=interface;prop={SecurityGroups=^(?:(?:[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}, *)*[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}|)$}} version: 3.5
CustomDeviceProperties: {type=interface;prop={ifacemacspoof=^(true|false)$;queues=[1-9][0-9]*;vmfex=^[a-zA-Z0-9_.-]{2,32}$;SecurityGroups=^(?:(?:[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}, *)*[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}|)$}} version: 3.6
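For what it's worth, the engine validates the vmfex value against the ^[a-zA-Z0-9_.-]{2,32}$ pattern visible in the 3.6 line above, so you can sanity-check a port profile name before saving the vNIC profile. A quick sketch (the sample names are just illustrations):

```python
import re

# Pattern for the 'vmfex' key, copied from the CustomDeviceProperties above:
# 2-32 characters, letters/digits/underscore/dot/hyphen only.
VMFEX_RE = re.compile(r'^[a-zA-Z0-9_.-]{2,32}$')

def valid_vmfex(profile):
    """True if the port profile name would pass engine-side validation."""
    return VMFEX_RE.match(profile) is not None

print(valid_vmfex('profile-ame1-test1'))  # allowed chars, within 2-32 length
print(valid_vmfex('has spaces'))          # spaces are not in the allowed set
```

If a UCS port profile name fails this check, the engine will reject it in the vNIC profile dialog, so it is worth catching early.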



2) Install vdsm-hook-vmfex-dev on your server (if you are not using vdsm version 3.6, the vmfex hook isn't installed by default).

 

# rpm -qa | grep -i vmfex
vdsm-hook-vmfex-dev-4.17.23-1.el7.noarch
vdsm-hook-vmfex-4.17.23-0.el7.centos.noarch



3) In oVirt Engine you can verify that the host has the hooks installed under the 'Hosts' main tab -> 'Host Hooks' sub tab.
You should see there: a) before_device_migrate_destination b) before_device_create c) before_nic_hotplug



4) Create a new network and edit its vNIC profile.
Choose the 'vmfex' key from the CustomDeviceProperties (like in your example), and in the right-hand field enter your Port Profile id/name (the one that is configured on your UCS side).

 


- 'Networks' main tab (create the network) >> 'vNIC Profiles' sub tab (edit the profile and add the 'vmfex' key), so it would look like:

vmfex = profile-ame1-test1

and/or

vmfex = profile-ame1-prep1


5) Go to the 'Setup Networks' sub tab under the 'Hosts' main tab and attach the network with the vmfex profile to a NIC on the host (drag it). It should be an additional NIC, not the one the management network is attached to.



6) Create a VM or use an existing one, and add a new vNIC with the vmfex profile to the VM.



7) Run VM

Note - only Test hosts 2 & 3 are currently configured with this network, so I expect Host 1 to fail.

If your port profile is configured properly, you should see the proper XML, like you described above ^^ -->

Good luck and kind regards,

Michael B

 

 

On Wed, Mar 9, 2016 at 8:49 AM, David LeVene <David.LeVene@blackboard.com> wrote:

Hey All,

 

Still trying to work through this VMFEX stuff. I know what I want the file to look like at the end, but I'm not sure how to achieve it from the doco written here:

 

http://www.ovirt.org/develop/developer-guide/vdsm/hook/vmfex/

and

http://www.ovirt.org/develop/release-management/features/network/ucs-integration/

 

Currently my device looks like this:

 

# virsh -r net-dumpxml vdsm-SRIOV
<network>
  <name>vdsm-SRIOV</name>
  <forward mode='bridge'/>
  <bridge name='SRIOV'/>
</network>

 

 

I want it looking like this, then the networking will be as it should be!

A port group would be a vNIC Profile from the looks of things…

 

<network>
  <name>vdsm-SRIOV</name>
  <forward dev='enp6s0f1' mode='passthrough'>          <-- defined as a passthrough device, not a bridge
    <interface dev='enp6s0f1'/>
    <interface dev='enp6s0f2'/>
    <interface dev='enp6s0f3'/>
    <interface dev='enp6s0f4'/>
    <.. list of interfaces available to it, which would need to be manually entered via a hook>
  </forward>
  <portgroup name='ame1-test1'>
    <virtualport type='802.1Qbh'>
      <parameters profileid='profile-ame1-test1'/>
    </virtualport>
  </portgroup>
  <portgroup name='ame1-prep1'>
    <virtualport type='802.1Qbh'>
      <parameters profileid='profile-ame1-prep1'/>
    </virtualport>
  </portgroup>
</network>
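Once the host actually has a network like this, you can double-check the net-dumpxml output programmatically rather than by eye. A small sketch, with a trimmed version of the target XML inlined for illustration (in practice you would feed it the real `virsh -r net-dumpxml` output):

```python
import xml.etree.ElementTree as ET

# Trimmed version of the target network XML above, inlined for illustration.
NETXML = """
<network>
  <name>vdsm-SRIOV</name>
  <forward dev='enp6s0f1' mode='passthrough'>
    <interface dev='enp6s0f1'/>
    <interface dev='enp6s0f2'/>
  </forward>
  <portgroup name='ame1-test1'>
    <virtualport type='802.1Qbh'>
      <parameters profileid='profile-ame1-test1'/>
    </virtualport>
  </portgroup>
</network>
"""

root = ET.fromstring(NETXML)
forward = root.find('forward')
mode = forward.get('mode')  # should be 'passthrough', not 'bridge'
pools = [i.get('dev') for i in forward.findall('interface')]
profiles = [pg.find('virtualport/parameters').get('profileid')
            for pg in root.findall('portgroup')]
print(mode, pools, profiles)
```

If `mode` still comes back as 'bridge', the hook never rewrote the network definition, which is exactly the symptom in this thread.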

 

 

Cheers

David

This email and any attachments may contain confidential and proprietary information of Blackboard that is for the sole use of the intended recipient. If you are not the intended recipient, disclosure, copying, re-distribution or other use of any of this information is strictly prohibited. Please immediately notify the sender and delete this transmission if you received this email in error.


_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




--

Michael Burman
RedHat Israel, RHEV-M QE Network Team

Mobile: 054-5355725
IRC: mburman








