----- Original Message -----
From: "Michal Skrivanek"
<michal.skrivanek@redhat.com>
To: "Martin Polednik" <mpoledni@redhat.com>
Cc: devel@ovirt.org
Sent: Wednesday, June 25, 2014 9:15:38 AM
Subject: Re: [ovirt-devel] [vdsm] Infrastructure design for node (host) devices
On Jun 24, 2014, at 12:26, Martin Polednik <mpoledni@redhat.com> wrote:
> Hello,
>
> I'm actively working on getting host device passthrough (pci, usb and scsi)
> exposed in VDSM, but I've encountered the growing complexity of this feature.
>
> The devices are currently created in the same manner as virtual devices and
> their reporting is done via the hostDevices list in getCaps. As I implemented
> usb and scsi devices, the size of this list nearly doubled - and that is on
> a laptop.
To be fair, laptops do have quite a few devices :P
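For reference, getting that list through libvirt's node device API should stay
fairly cheap; a rough, untested sketch of what the enumeration could look like:

import libvirt

# Rough sketch (untested): enumerate passthrough-capable host devices through
# libvirt's node device API instead of walking sysfs by hand.
conn = libvirt.open('qemu:///system')
flags = (libvirt.VIR_CONNECT_LIST_NODE_DEVICES_CAP_PCI_DEV |
         libvirt.VIR_CONNECT_LIST_NODE_DEVICES_CAP_USB_DEV |
         libvirt.VIR_CONNECT_LIST_NODE_DEVICES_CAP_SCSI)

devices = [{'name': dev.name(),            # e.g. pci_0000_00_1b_0
            'parent': dev.parent(),        # device topology
            'capabilities': dev.listCaps()}
           for dev in conn.listAllDevices(flags)]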
>
> A similar problem exists with the devices themselves: they are closely tied
> to the host, and currently the engine would have to keep their mapping to
> VMs, reattach loose devices back to the host and handle all of this in case
> of migration.
In general, with host device passthrough the simpler first step would be to
pin the VM to the host whenever it uses a host device.
If we want to make migration possible, we should whitelist devices or device
classes for which it is not troublesome, but I would start small and keep the
VM pinned.
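(Just as a reminder of why the pinning falls out naturally: once a domain
carries a <hostdev> element like the sketch below, libvirt will refuse to
migrate it anyway, at least for PCI assignment. The address is made up.)

# Illustrative only: the kind of <hostdev> element a passed-through PCI device
# adds to the domain XML; the source address below is made up.
HOSTDEV_PCI_EXAMPLE = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
  </source>
</hostdev>
"""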
>
> I would like to hear your opinion on building something like a host device
> pool in VDSM. The pool would be populated and periodically updated (to handle
> hot(un)plugs), and VMs/engine could query it for free/assigned/possibly
> problematic
I'm not sure about making a pool. Having a verb for getting host devices sounds
good though (especially with the pinning solution, as the engine would only
need to poll when the VM is pinned).
How costly is it to list them? It shouldn't be much costlier than navigating
sysfs plus a potential libvirt call, right?
> devices (which could be reattached by the pool). This has the added benefit
> of requiring fewer libvirt calls, but a bit more complexity and possibly one
> extra thread.
> The pool's state across VDSM restarts could be kept in config or
> reconstructed from XML.
Best would be if we can live without too much persistence. Can we find out
the actual state of things, including the VM mapping, on vdsm startup?
I would actually go for not keeping this data in memory unless that proves
really expensive (as I say above).
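Something along these lines should be enough to rebuild the mapping from
libvirt alone on startup; rough, untested sketch:

import xml.etree.ElementTree as etree
import libvirt

# Rough sketch (untested): rebuild the host device -> VM mapping on vdsm
# startup from the running domains' XML, with no persistence of our own.
def hostdev_mapping(conn):
    """Map libvirt device names (e.g. 'pci_0000_00_1b_0') to VM names."""
    mapping = {}
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        root = etree.fromstring(dom.XMLDesc(0))
        for hostdev in root.findall("./devices/hostdev[@type='pci']"):
            addr = hostdev.find("./source/address")
            name = 'pci_%s_%s_%s_%s' % (addr.get('domain')[2:],
                                        addr.get('bus')[2:],
                                        addr.get('slot')[2:],
                                        addr.get('function')[2:])
            mapping[name] = dom.name()
    return mapping

# Anything vdsm detached earlier but not referenced by any running domain
# could then be handed back to the host with virNodeDevice.reAttach().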
>
> I'd need new API verbs to allow the engine to communicate with the pool,
> possibly leaving caps as they are; the engine could then detect a newer
> vdsm by the presence of these API verbs.
+1
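To make it concrete, the verbs could be as thin as something like this (names
are purely illustrative, nothing is set in stone):

# Purely illustrative verb names, only to make the discussion concrete.
class HostDeviceAPI(object):
    def hostdevList(self):
        """Return passthrough-capable host devices and who, if anyone, holds them."""

    def hostdevReattach(self, devName):
        """Hand a device that is no longer used by any VM back to the host."""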
> The vmCreate call would remain almost the same, only with the addition of a
> new device type for VMs (whose detach and tracking routine would be
> coordinated with the pool).
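On the vmCreate side the device could simply be one more entry in the devices
list; a hypothetical shape:

# Hypothetical shape of a host device entry in the vmCreate devices list;
# the key names are illustrative only.
hostdev_device = {
    'type': 'hostdev',
    'device': 'pci_0000_00_1b_0',  # libvirt node device name
}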
_______________________________________________
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel