[Kimchi-devel] [PATCH v7 4/4] Host device passthrough: List VMs that are holding a host device

Aline Manera alinefm at linux.vnet.ibm.com
Mon Jul 14 18:02:25 UTC 2014



On 07/14/2014 12:10 AM, Zhou Zheng Sheng wrote:
> on 2014/07/13 22:50, Aline Manera wrote:
>> On 07/09/2014 04:01 AM, Zhou Zheng Sheng wrote:
>>> Add a "vm_holders" sub-collection under host device resource, so the
>>> front-end can determine if a device is busy or not, and the user can
>>> know which VMs are holding the device.
>>>
>>> This patch scans all VM XMLs to check whether a device is held by a VM.
>>> It also adds a check to keep the host device assigned to only one VM.
>>>
>>> Example
>>>     curl -k -u root -H "Content-Type: application/json" \
>>>       -H "Accept: application/json" \
>>>       'https://127.0.0.1:8001/host/devices/usb_1_1_6/vm_holders'
>>
>> You don't need to create a new resource for it.
>> It should be a parameter of /host/devices/usb_1_1_6.
>>
>> Example:
>>
>> GET /host/devices/usb_1_1_6
>> {
>>     ...
>>     holders: [vm1, vm2, ...]
>> }
>>
>
> I agree the ideal solution is the one you suggested, and I considered it
> as well. The reason I made the holders list a sub-resource is efficiency.
> To find out which VMs are holding one device, we have to scan all the VM
> XMLs. Due to the limitations of the current controller and model
> framework, each resource lookup is an isolated operation without any
> common context, so it has to scan all the VMs for each device instead of
> scanning all VMs in a single pass and building an inverse device index.
> We can imagine that when /host/devices is requested, there would be a
> scanning storm. I also believe that a resource like
> /host/devices/usb_1_1_6 makes sense even without any virtualization
> information in it. So I decided to "delay" the lookup of the VM holders
> and make it a sub-collection.
>
> My original thought is that the user first gets a general idea of all
> devices from the listing of /host/devices. If the user is interested in
> assigning a certain device, he selects it, and only then does the
> front-end request the details of the affected devices in the same
> IOMMU group and the VMs holding the device.

Got it.

I agree with doing it that way for now.

But the best solution would be to have a way to share a common context 
between resource lookup calls, so we can avoid the performance cost you 
mentioned.
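
Just to illustrate the kind of thing I have in mind (only a rough sketch, 
nothing we have in the tree today; DeviceHoldersCache, scan_all_vms() and 
get_holders() are made-up names): scan all the domains once, build an 
inverse index from device name to the VMs holding it, and let every device 
lookup inside the same request (or a short time window) reuse that index 
instead of rescanning per device:

    import time


    class DeviceHoldersCache(object):
        """Sketch of a shared context: one scan of all VM XMLs feeds an
        inverse index {device_name: [vm info, ...]} that every device
        lookup can reuse while the cache is still fresh."""

        def __init__(self, scan_all_vms, ttl=5):
            # scan_all_vms() is assumed to return tuples of
            # (vm_name, vm_state, [device names]); how it is implemented
            # (libvirt, VMHostDevsModel, ...) is left out of this sketch.
            self._scan_all_vms = scan_all_vms
            self._ttl = ttl
            self._expires = 0
            self._index = {}

        def _refresh(self):
            index = {}
            for vm_name, vm_state, dev_names in self._scan_all_vms():
                for dev_name in dev_names:
                    index.setdefault(dev_name, []).append(
                        {"name": vm_name, "state": vm_state})
            self._index = index
            self._expires = time.time() + self._ttl

        def get_holders(self, device_id):
            # One scan serves every device asked for inside the TTL
            # window, so listing /host/devices would not trigger one
            # scan per device.
            if time.time() >= self._expires:
                self._refresh()
            return self._index.get(device_id, [])

The open question is where such a cache should live and when to invalidate 
it - per request, or kept in the model and cleared when a device is 
attached to or detached from a VM.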

Suggestions are welcome!

>
>>> It should output a list like the following:
>>>     [
>>>       {
>>>         "state":"shutoff",
>>>         "name":"fedora20"
>>>       },
>>>       {
>>>         "state":"running",
>>>         "name":"f20xfce-slave"
>>>       }
>>>     ]
>>>
>>> If no VM is holding the device, it returns an empty list [].
>>>
>>> v5:
>>>     When assigning a device to a VM, check whether other VMs are holding
>>> the device and raise an exception if so. Move VMHoldersModel to
>>> vmhostdevs.py to avoid a circular import problem.
>>>
>>> Signed-off-by: Zhou Zheng Sheng <zhshzhou at linux.vnet.ibm.com>
>>> ---
>>>    src/kimchi/control/host.py     |  7 +++++++
>>>    src/kimchi/i18n.py             |  2 ++
>>>    src/kimchi/model/vmhostdevs.py | 24 ++++++++++++++++++++++++
>>>    3 files changed, 33 insertions(+)
>>>
>>> diff --git a/src/kimchi/control/host.py b/src/kimchi/control/host.py
>>> index 15f2343..efc31ea 100644
>>> --- a/src/kimchi/control/host.py
>>> +++ b/src/kimchi/control/host.py
>>> @@ -110,11 +110,18 @@ class PassthroughAffectedDevices(Collection):
>>>            self.model_args = (device_id, )
>>>
>>>
>>> +class VMHolders(SimpleCollection):
>>> +    def __init__(self, model, device_id):
>>> +        super(VMHolders, self).__init__(model)
>>> +        self.model_args = (device_id, )
>>> +
>>> +
>>>    class Device(Resource):
>>>        def __init__(self, model, id):
>>>            super(Device, self).__init__(model, id)
>>>            self.passthrough_affected_devices = \
>>>                PassthroughAffectedDevices(self.model, id)
>>> +        self.vm_holders = VMHolders(self.model, id)
>>>
>>>        @property
>>>        def data(self):
>>> diff --git a/src/kimchi/i18n.py b/src/kimchi/i18n.py
>>> index bee734a..bbe40be 100644
>>> --- a/src/kimchi/i18n.py
>>> +++ b/src/kimchi/i18n.py
>>> @@ -96,6 +96,8 @@ messages = {
>>>                            "Please enable Intel VT-d or AMD IOMMU in your BIOS, then verify the Kernel is compiled with IOMMU support. "
>>>                            "For Intel CPU, add intel_iommu=on to your Kernel parameter in /boot/grub2/grub.conf. "
>>>                            "For AMD CPU, add iommu=pt iommu=1."),
>>> +    "KCHVMHDEV0004E": _("The host device %(dev_name)s should be assigned to just one VM. "
>>> +                        "Currently the following VM(s) are holding the device: %(names)s."),
>>>
>>>        "KCHVMIF0001E": _("Interface %(iface)s does not exist in virtual machine %(name)s"),
>>>        "KCHVMIF0002E": _("Network %(network)s specified for virtual machine %(name)s does not exist"),
>>> diff --git a/src/kimchi/model/vmhostdevs.py b/src/kimchi/model/vmhostdevs.py
>>> index 4569bd0..5b10bf6 100644
>>> --- a/src/kimchi/model/vmhostdevs.py
>>> +++ b/src/kimchi/model/vmhostdevs.py
>>> @@ -119,6 +119,11 @@ class VMHostDevsModel(object):
>>>                DevicesModel(conn=self.conn).get_list(_passthrough='true')
>>>            if dev_name not in eligible_dev_names:
>>>                raise InvalidParameter('KCHVMHDEV0002E', {'dev_name': dev_name})
>>> +        holders = VMHoldersModel(conn=self.conn).get_list(dev_name)
>>> +        if holders:
>>> +            names = ', '.join([holder['name'] for holder in holders])
>>> +            raise InvalidOperation('KCHVMHDEV0004E', {'dev_name': dev_name,
>>> +                                                      'names': names})
>>>
>>>        def create(self, vmid, params):
>>>            dev_name = params['name']
>>> @@ -297,3 +302,22 @@ class VMHostDevModel(object):
>>>                    xmlstr = etree.tostring(e)
>>>                    dom.detachDeviceFlags(
>>>                        xmlstr, get_vm_config_flag(dom, mode='all'))
>>> +
>>> +
>>> +class VMHoldersModel(object):
>>> +    def __init__(self, **kargs):
>>> +        self.conn = kargs['conn']
>>> +
>>> +    def get_list(self, device_id):
>>> +        devsmodel = VMHostDevsModel(conn=self.conn)
>>> +
>>> +        conn = self.conn.get()
>>> +        doms = conn.listAllDomains(0)
>>> +
>>> +        res = []
>>> +        for dom in doms:
>>> +            dom_name = dom.name()
>>> +            if device_id in devsmodel.get_list(dom_name):
>>> +                state = DOM_STATE_MAP[dom.info()[0]]
>>> +                res.append({"name": dom_name, "state": state})
>>> +        return res
>>>
>
>



