<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div></div><div><br></div><div><br>On 07 Dec 2016, at 09:17, Oved Ourfali <<a href="mailto:oourfali@redhat.com">oourfali@redhat.com</a>> wrote:<br><br></div><blockquote type="cite"><div><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Dec 6, 2016 at 11:12 PM, Adam Litke <span dir="ltr"><<a href="mailto:alitke@redhat.com" target="_blank">alitke@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 06/12/16 22:06 +0200, Arik Hadas wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Adam,<br>
</blockquote>
<br>
:) You seem upset. Sorry if I touched a nerve...<span class=""><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Just out of curiosity: when you write "v2v has promised" - what exactly do you<br>
mean? The tool? Richard Jones (the maintainer of virt-v2v)? Shahar and me, who<br>
implemented the integration with virt-v2v? I'm not aware of such a promise from<br>
any of these :)<br>
</blockquote>
<br></span>
Some history...<br>
<br>
Earlier this year Nir, Francesco (added), Shahar, and I began<br>
discussing the similarities between what storage needed to do with<br>
external commands and what was designed specifically for v2v. I am<br>
not sure if you were involved in the project at that time. The plan<br>
was to create common infrastructure that could be extended to fit the<br>
unique needs of the verticals. The v2v code was going to be moved<br>
over to the new infrastructure (see [1]) and the only thing that<br>
stopped the initial patch was the lack of a VMware testing environment<br>
for verification.<br>
<br>
At that time storage refocused on developing verbs that used the new<br>
infrastructure and has been maintaining its suitability for general<br>
use. Converting v2v -> Host Jobs is obviously a lower-priority item<br>
and much more difficult now due to the early missed opportunity.<span class=""><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Anyway, let's say that you were given such a promise by someone and thus<br>
consider that mechanism to be deprecated - it doesn't really matter.<br>
</blockquote>
<br></span>
I may be biased but I think my opinion does matter.<span class=""><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
The current implementation doesn't fit this flow well (it requires a<br>
per-volume job, it creates leases that are not needed for a template's disks,<br>
...) and with the "next-gen API" with proper support for virt flows not even<br>
being discussed with us (and iiuc also not with the infra team) yet, I don't<br>
understand what you are suggesting except for some strong, though irrelevant,<br>
statements.<br>
</blockquote>
<br></span>
If you are willing to engage in a good-faith technical discussion I am<br>
sure I can help you understand. These operations on storage demand<br>
some form of locking protection. If volume leases aren't appropriate, then<br>
perhaps we should use the VM Leases / xleases that Nir is finishing<br>
off for 4.1 now.<span class=""><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I am suggesting, loud and clear, that we reuse (not add dependencies to, not<br>
enhance, ...) an existing mechanism from the very similar virt-v2v flow that<br>
works well and is simple.<br>
</blockquote>
<br></span>
I clearly remember discussions involving infra (hello Oved), virt<br>
(hola Michal), and storage where we decided that new APIs performing<br>
async operations involving external commands should use the HostJobs<br>
infrastructure instead of adding more information to Host Stats.<br>
These were the "famous" entity polling meetings.<br>
<br>
Of course plans can change but I have never been looped into any such<br>
discussions.<span class=""><br>
<br></span></blockquote><div><br></div><div>Well, I think that when someone builds a good infrastructure they first need to talk to all consumers and make sure it fits.</div><div>In this case it seems like most of the work was done to fit the storage use case, and now you are checking whether it can fit others as well...</div><div><br></div><div>IMO it makes much more sense to use events where possible (and you've promised to use those as well, but I don't see you doing that...). v2v should use events for sure, and they have promised to do that in the past, instead of using the v2v jobs. The reason events weren't used originally with the v2v feature was that it was too risky and the events infrastructure was added too late in the game.</div></div></div></div></div></blockquote><div><br></div>Revisiting and refactoring code which is already in use is always a bit of a luxury we can rarely prioritize. So indeed v2v is not using events. The generalization work has been done to some extent, but there is no incentive to rewrite it completely. <div>On the other hand, we are now trying to add events to migration progress reporting and handover, since that area is being touched due to post-copy enhancements. </div><div>So, when there is a practical chance to improve functionality by utilizing events, it indeed should be the first choice.</div><div><br><blockquote type="cite"><div><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Do you "promise" to implement your "next gen API" for 4.1 as an alternative?<br>
</blockquote>
<br></span>
I guess we need the design first.<div class="HOEnZb"><div class="h5"><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Tue, Dec 6, 2016 at 5:04 PM, Adam Litke <<a href="mailto:alitke@redhat.com" target="_blank">alitke@redhat.com</a>> wrote:<br>
<br>
On 05/12/16 11:17 +0200, Arik Hadas wrote:<br>
<br>
<br>
<br>
On Mon, Dec 5, 2016 at 10:05 AM, Nir Soffer <<a href="mailto:nsoffer@redhat.com" target="_blank">nsoffer@redhat.com</a>> wrote:<br>
<br>
On Sun, Dec 4, 2016 at 8:50 PM, Shmuel Melamud <<a href="mailto:smelamud@redhat.com" target="_blank">smelamud@redhat.com</a>><br>
wrote:<br>
><br>
> Hi!<br>
><br>
> I'm currently working on the integration of virt-sysprep into oVirt.<br>
><br>
> Usually, if a user creates a template from a regular VM, and then creates<br>
> new VMs from this template, these new VMs inherit all configuration of the<br>
> original VM, including SSH keys, UDEV rules, MAC addresses, system ID,<br>
> hostname, etc. This is unfortunate, because you cannot have two network<br>
> devices with the same MAC address in the same network, for example.<br>
><br>
> To avoid this, the user must clean all machine-specific configuration from<br>
> the original VM before creating a template from it. You can do this<br>
> manually, but there is a virt-sysprep utility that does this automatically.<br>
><br>
> Ideally, virt-sysprep should be seamlessly integrated into the template<br>
> creation process. But the first step is to create a simple button: the user<br>
> selects a VM, clicks the button, and oVirt executes virt-sysprep on the VM.<br>
><br>
> virt-sysprep works directly on the VM's filesystem. It accepts a list of<br>
> all the disks of the VM as parameters:<br>
><br>
> virt-sysprep -a disk1.img -a disk2.img -a disk3.img<br>
><br>
> The architecture is as follows: a command on the Engine side runs a job on<br>
> the VDSM side and tracks its success/failure. The job on the VDSM side runs<br>
> virt-sysprep.<br>
><br>
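> For illustration, a minimal sketch of what the VDSM-side job could run<br>
> (the helper name is hypothetical, not an existing VDSM API):<br>
><br>
> import subprocess<br>
><br>
> def run_sysprep(disk_paths):<br>
>     # Build: virt-sysprep -a disk1.img -a disk2.img ...<br>
>     cmd = ['virt-sysprep']<br>
>     for path in disk_paths:<br>
>         cmd.extend(['-a', path])<br>
>     # Raises CalledProcessError on failure, which the job wrapper<br>
>     # would report back to the Engine as success/failure.<br>
>     subprocess.check_call(cmd)<br>
><br>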
> The question is how to implement the job correctly?<br>
><br>
> I thought about using storage jobs, but they are designed to work only<br>
> with a single volume, correct?<br>
<br>
New storage verbs are volume-based. This makes it easy to manage<br>
them on the engine side, and will allow parallelizing volume operations<br>
on single or multiple hosts.<br>
<br>
A storage volume job uses a sanlock lease on the modified volume<br>
and a volume generation number. If a host running pending jobs becomes<br>
non-responsive and cannot be fenced, we can detect the state of<br>
the job, fence the job, and start the job on another host.<br>
<br>
With the SPM task, if the host becomes non-responsive and cannot be<br>
fenced, the whole setup is stuck; there is no way to perform any<br>
storage operation.<br>
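<br>
A rough sketch of that pattern (the names here are hypothetical, not<br>
vdsm's actual API):<br>
<br>
def run_volume_job(lease, volume, job):<br>
    with lease.acquire():  # sanlock lease on the modified volume<br>
        # Bump the generation before modifying the volume.<br>
        volume.set_generation(volume.generation + 1)<br>
        job.run(volume)<br>
    # If this host dies mid-job, another host can acquire the lease,<br>
    # compare the on-disk generation with the expected one, and decide<br>
    # whether to fence the job and restart it.<br>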
> Is it possible to use them with an operation that is performed on multiple<br>
> volumes?<br>
> Or, alternatively, is it possible to use some kind of 'VM jobs' that<br>
> work on the VM as a whole?<br>
<br>
We can do:<br>
<br>
1. Add jobs with multiple volume leases - this can make error handling very<br>
complex. How do you tell a job's state if you have multiple leases?<br>
Which volume generation do you use?<br>
<br>
2. Use a volume job using one of the volumes (the boot volume?). This does<br>
not protect the other volumes from modification, but the engine is<br>
responsible for this.<br>
<br>
3. Use the new "VM jobs", using a VM lease (should be available this week<br>
on master).<br>
This protects the VM from being started during sysprep.<br>
We still need a generation to detect the job state; I think we can<br>
use the sanlock lease generation for this.<br>
<br>
I like the last option since sysprep is much like running a VM; a rough<br>
sketch follows below.<br>
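<br>
(Again hypothetical names, not an existing API:)<br>
<br>
def run_vm_job(vm_lease, disk_paths):<br>
    # Holding the VM lease prevents the VM from being started<br>
    # while sysprep modifies its disks.<br>
    with vm_lease.acquire():<br>
        run_sysprep(disk_paths)  # see the sketch above<br>
    # The sanlock lease generation can later tell us whether the<br>
    # job completed or the host died while holding the lease.<br>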
> How does v2v solve this problem?<br>
<br>
It does not.<br>
<br>
v2v predates storage volume jobs. It does not use volume leases and<br>
generations, and does not have any way to recover if a host running v2v<br>
becomes non-responsive and cannot be fenced.<br>
<br>
It also does not use the jobs framework and does not use a thread pool for<br>
v2v jobs, so it has no limit on the number of storage operations on a host.<br>
<br>
<br>
Right, but let's be fair and present the benefits of v2v jobs as well:<br>
1. It is the simplest "infrastructure" in terms of LOC.<br>
<br>
<br>
It is also deprecated. V2V has promised to adopt the richer Host Jobs<br>
API in the future.<br>
<br>
<br>
2. It is the most efficient mechanism in terms of interactions between the<br>
engine and VDSM (it doesn't require new verbs/calls, the data is attached to<br>
VdsStats; probably the easiest mechanism to convert to events).<br>
<br>
<br>
The engine is already polling the Host Jobs API, so I am not sure I agree<br>
with you here.<br>
<br>
<br>
3. It is the most efficient implementation in terms of interaction with the<br>
database (no data is persisted into the database, no polling is done).<br>
<br>
<br>
Again, we're already using the Host Jobs API. We'll gain efficiency<br>
by migrating away from the old v2v API and having a single, unified<br>
approach (Host Jobs).<br>
<br>
<br>
Currently we have 3 mechanisms to report jobs:<br>
1. VM jobs - currently used for live merge. These require the VM entity<br>
to exist in VDSM, and are thus not suitable for virt-sysprep.<br>
<br>
<br>
Correct, not appropriate for this application.<br>
<br>
<br>
2. Storage jobs - a complicated infrastructure, targeted at recovering from<br>
failures in order to maintain storage consistency. Many of the things this<br>
infrastructure knows how to handle are irrelevant to the virt-sysprep flow,<br>
and the fact that virt-sysprep is invoked on a VM rather than on a particular<br>
disk makes it less suitable.<br>
<br>
<br>
These are more appropriately called Host Jobs and they have the<br>
following semantics:<br>
- They represent an external process running on a single host.<br>
- They are not persisted. If the host or vdsm restarts, the job is<br>
aborted.<br>
- They operate on entities. Currently storage is the first adopter<br>
of the infrastructure, but virt was going to adopt these for the<br>
next-gen API. Entities can be volumes, storage domains, VMs,<br>
network interfaces, etc.<br>
- Job status and progress are reported by the Host Jobs API. If a job<br>
is not present, then the underlying entity(ies) must be polled by<br>
the engine to determine the actual state.<br>
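<br>
To make this concrete, a minimal sketch of a virt-sysprep host job (the<br>
class shape is an assumption based on the semantics above, not vdsm's<br>
actual jobs module):<br>
<br>
import subprocess<br>
import uuid<br>
<br>
class SysprepHostJob:<br>
    def __init__(self, disk_paths):<br>
        self.id = str(uuid.uuid4())<br>
        self.status = 'pending'  # reported via the Host Jobs API<br>
        self._disk_paths = disk_paths<br>
<br>
    def run(self):<br>
        self.status = 'running'<br>
        cmd = ['virt-sysprep']<br>
        for path in self._disk_paths:<br>
            cmd.extend(['-a', path])<br>
        try:<br>
            subprocess.check_call(cmd)<br>
            self.status = 'done'<br>
        except subprocess.CalledProcessError:<br>
            self.status = 'failed'  # not persisted; lost on vdsm restart<br>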
<br>
<br>
3. V2V jobs - no mechanism is provided to resume failed jobs, no leases, etc.<br>
<br>
<br>
This is the old infra upon which Host Jobs are built. v2v has<br>
promised to move to Host Jobs in the future so we should not add new<br>
dependencies to this code.<br>
<br>
<br>
I have some arguments for using V2V-like jobs [1]:<br>
1. Creating a template from a VM is rarely done - if the host goes<br>
unresponsive or any other failure is detected, we can just remove the<br>
template and report the error.<br>
<br>
<br>
We can choose this error handling with Host Jobs as well.<br>
<br>
<br>
2. The virt-sysprep phase is, unlike a typical storage operation, short -<br>
reducing the risk of failures during the process.<br>
<br>
<br>
A reduced risk of failures is never an excuse to have lax error<br>
handling. The storage-flavored host jobs provide tons of utilities<br>
for making error handling standardized, easy to implement, and<br>
correct.<br>
<br>
<br>
3. During the operation the VM is down - by locking the VM/template and its<br>
disks on the engine side, we render a lease-like mechanism redundant.<br>
<br>
<br>
Eventually we want to protect all operations on storage with sanlock<br>
leases. This is safer and allows for a more distributed approach to<br>
management. Again, using leases correctly in host jobs requires<br>
about 5 lines of code. The benefits of standardization far outweigh<br>
any perceived simplification resulting from omitting it.<br>
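<br>
Roughly along these lines (a sketch; the guarded.context usage is modeled<br>
on vdsm's storage code, but the exact call here is an assumption):<br>
<br>
from vdsm.storage import guarded  # assuming it is importable here<br>
<br>
def run(self):<br>
    # Acquire the lease for the duration of the work; it is released<br>
    # automatically even if the job fails.<br>
    with guarded.context([self._lease]):<br>
        self._run_sysprep()<br>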
<br>
<br>
4. In the worst case the disk will not be corrupted (only some of the data<br>
might be removed).<br>
<br>
<br>
Again, the way the engine chooses to handle job failures is independent of<br>
the mechanism. Let's separate that from this discussion.<br>
<br>
<br>
So I think that the storage jobs mechanism is overkill for this case.<br>
We can keep it simple by generalising the V2V job for other virt-tools<br>
jobs, like virt-sysprep.<br>
<br>
<br>
I think we ought to standardize on the Host Jobs framework, where we<br>
can collaborate on unit tests, standardized locking and error<br>
handling, abort logic, etc. When v2v moves to host jobs, we will<br>
have a unified method of handling ephemeral jobs that are tied to<br>
entities.<br>
<br>
--<br>
Adam Litke<br>
<br>
<br>
</blockquote>
<br></div></div><span class="HOEnZb"><font color="#888888">
-- <br>
Adam Litke<br>
</font></span></blockquote></div><br></div></div>
</div></blockquote></div></body></html>