[ovirt-devel] [VDSM] Correct implementation of virt-sysprep job

Nir Soffer nsoffer at redhat.com
Mon Dec 5 08:05:36 UTC 2016


On Sun, Dec 4, 2016 at 8:50 PM, Shmuel Melamud <smelamud at redhat.com> wrote:
>
> Hi!
>
> I'm currently working on integration of virt-sysprep into oVirt.
>
> Usually, if a user creates a template from a regular VM, and then creates new VMs from this template, these new VMs inherit all the configuration of the original VM, including SSH keys, UDEV rules, MAC addresses, system ID, hostname etc. This is unfortunate, because you cannot have two network devices with the same MAC address on the same network, for example.
>
> To avoid this, the user must clean all machine-specific configuration from the original VM before creating a template from it. You can do this manually, but the virt-sysprep utility does it automatically.
>
> Ideally, virt-sysprep should be seamlessly integrated into the template creation process. But the first step is to create a simple button: the user selects a VM, clicks the button and oVirt executes virt-sysprep on the VM.
>
> virt-sysprep works directly on the VM's filesystem. It accepts the list of all the VM's disks as parameters:
>
> virt-sysprep -a disk1.img -a disk2.img -a disk3.img
>
> The architecture is as follows: a command on the Engine side runs a job on the VDSM side and tracks its success/failure. The job on the VDSM side runs virt-sysprep.
>
> The question is how to implement the job correctly?
>
> I thought about using storage jobs, but they are designed to work only with a single volume, correct?

The new storage verbs are volume based. This makes it easy to manage
them on the engine side, and will allow parallelizing volume operations
on a single host or across multiple hosts.

A storage volume job uses a sanlock lease on the modified volume
and the volume's generation number. If a host running pending jobs becomes
non-responsive and cannot be fenced, we can detect the state of
each job, fence the job, and start it on another host.

With SPM tasks, if the host becomes non-responsive and cannot be
fenced, the whole setup is stuck: there is no way to perform any
storage operation.
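The generation-based recovery described above can be sketched roughly like this. The names and the exact generation semantics here are assumptions for illustration, not VDSM's actual API:

```python
# Hypothetical sketch: deciding the fate of a storage volume job from the
# volume's generation number, after the host running it became non-responsive.
from dataclasses import dataclass

@dataclass
class Volume:
    generation: int  # assumed to be bumped when a job completes on this volume

def job_state(volume, job_generation):
    """Return the state of a job that recorded job_generation when it started.

    If the volume generation has advanced past what the job recorded, the job
    already finished; otherwise it is still pending, and can be fenced and
    restarted on another host.
    """
    if volume.generation > job_generation:
        return "finished"
    return "pending"

print(job_state(Volume(generation=7), 7))  # pending - fence and restart elsewhere
print(job_state(Volume(generation=8), 7))  # finished - nothing to recover
```

The single lease plus a single generation number is what makes this decision unambiguous, which is exactly what becomes murky with multiple leases per job.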

> Is it possible to use them with an operation that is performed on multiple volumes?
> Or, alternatively, is it possible to use some kind of 'VM jobs' that work on the VM as a whole?

We can do:

1. Add jobs with multiple volume leases - this can make error handling very
    complex. How do you tell the job's state if you have multiple leases? Which
    volume generation do you use?

2. Use a volume job on one of the volumes (the boot volume?). This does
    not protect the other volumes from modification, but the engine is
    responsible for that.

3. Use new "vm jobs", using a vm lease (should be available this week
on master).
    This protects a vm during sysprep from starting the vm.
    We still need a generation to detect the job state, I think we can
use the sanlock
    lease generation for this.

I like the last option since sysprep is much like running a vm.
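The shape of option 3 could look roughly like the sketch below. The vm_lease helper and run_sysprep are hypothetical placeholders; only the virt-sysprep -a command line comes from the thread above:

```python
# Rough sketch of a "vm job": hold the VM lease while running virt-sysprep
# over all of the VM's disks. The lease helper is a placeholder, not a real
# sanlock binding.
import subprocess
from contextlib import contextmanager

@contextmanager
def vm_lease(vm_id):
    # Placeholder: a real implementation would acquire the sanlock VM lease
    # here, preventing the VM from being started while sysprep runs.
    try:
        yield
    finally:
        pass  # ...and release the lease here

def sysprep_command(disk_paths):
    """Build the virt-sysprep command line, one -a per disk."""
    cmd = ["virt-sysprep"]
    for path in disk_paths:
        cmd += ["-a", path]
    return cmd

def run_sysprep(vm_id, disk_paths):
    with vm_lease(vm_id):
        subprocess.check_call(sysprep_command(disk_paths))

print(sysprep_command(["disk1.img", "disk2.img", "disk3.img"]))
```

Holding the vm lease for the duration of the command is what makes sysprep "much like running a vm": both need exclusive use of all the VM's disks.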

> How does v2v solve this problem?

It does not.

v2v predates storage volume jobs. It does not use volume leases and generations,
and it has no way to recover if a host running v2v becomes non-responsive
and cannot be fenced.

It also does not use the jobs framework and does not run v2v jobs in a
thread pool, so there is no limit on the number of concurrent storage
operations on a host.

Nir


