I'll try and sum up my responses here:
About things being closed for discussion:
This stuff had been put up for discussion multiple times
over more than a year. There comes a time when a decision
has to be made.
Unless I get compelling enough arguments (i.e. an actual use
case) I will have to keep things as they are.
As for task ID format not being enforced:
This was also up for discussion ages ago.
We already had issues with useless validations
causing us problems over time.
As I said, I don't care if the engine enforces
it to be a UUID. As long as the only argument is
"I don't want people that interface with VDSM to do stupid
things" my response is still going to be "I am not here
to make sure other projects are doing their job correctly".
So unless I get a valid real-world reason to reopen that
discussion I'm going to leave things as they are.
I will be adding a length limitation to the ID to prevent abuse.
Now for the meaty bit:
The main reason against having job control is that
it doesn't work on a clustered, message-based system.
You need to keep 3 things in mind:
1. Messages can get lost (either requests or responses)
2. Messages can arrive out of order.
3. The same message can arrive multiple times.
You can try to minimize 3 in the actual messaging layer,
but 1 and 2 have to be handled by the actual algorithms.
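To make point 3 concrete, here is a minimal sketch (hypothetical names, not actual VDSM API) of the usual answer to duplicate delivery: the handler deduplicates on a caller-supplied request ID, so a retransmitted request returns the cached result instead of running the operation a second time:

```python
# Illustrative only: an at-least-once message handler that tolerates
# duplicate delivery by remembering responses per request ID.

class DedupHandler:
    def __init__(self):
        self._seen = {}  # request_id -> cached response

    def handle_request(self, request_id, operation):
        if request_id in self._seen:
            # The same message arrived again (point 3); don't re-run it,
            # just resend the result that was possibly lost in transit.
            return self._seen[request_id]
        result = operation()
        self._seen[request_id] = result
        return result
```

In a real system the `_seen` map would need expiry, which is exactly why binding state to an entity (below) is preferable to a generic task table.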
Let's take a general request.
Using getAllRunningJobs() I have no way
of knowing if:
1. The message is still in transit
2. The request is already over (thus was cleared)
3. VDSM crashed.
4. Response was sent but never arrived.
5. Any combination of the cases above.
That is what is important about having idempotency or
having things be bound to entities.
Binding things to an entity gives you the information
you need to resolve those issues.
Without that you will find it very hard to know exactly what is
going on.
For setupNetwork() I would just make it idempotent.
You just send the same command again with the same end state.
It would be a no-op if the current network topology
is already what it's supposed to be.
You could also have a special field containing the "version"
of the configuration (I would make it a hash or a UUID and
not a running number) that you would persist locally on the
host after you finished configuring since the local host is the
scope of setupNetworks().
It would allow you to ignore the error states and
keep sending the same configuration, if you think something
bad happened, until the state is what you expect it to be or
an error response actually manages to find its way back
to you. By using the same task ID you are guaranteed
to have the operation running only once at a time.
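A rough sketch of that idiom, with made-up names (config_version, setup_networks) standing in for the real verbs: the desired topology is tagged with a content hash rather than a running number, the host records the version it last applied, and resending the same request is a safe no-op:

```python
# Illustrative sketch of idempotent network setup keyed by a config hash.
import hashlib
import json

def config_version(config):
    # Hash the canonical form of the desired topology; identical configs
    # always produce the identical version string, unlike a running number.
    blob = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

class Host:
    def __init__(self):
        # In reality this would be persisted locally on the host,
        # since the local host is the scope of setupNetworks().
        self.applied_version = None

    def setup_networks(self, config, version):
        if self.applied_version == version:
            # Topology already matches the request: safe no-op,
            # so lost-response retries cost nothing.
            return "noop"
        # ... apply the topology here ...
        self.applied_version = version
        return "applied"
```

The caller can then retry the identical request blindly until it either observes the expected version or finally receives an error response.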
I don't mind helping anyone make their algorithms
work, but there is no escaping the limitations
listed above. If we want to make oVirt truly scalable
and robust we have to start thinking about algorithms
that work despite errors and not just have error flows.
Notice I haven't even mentioned different systems of persistence,
or tasks whose state information you should be able to get
from more than one host. Some "jobs" can survive
a VDSM restart since they don't live in VDSM, like stuff in Gluster or QEMU.
To make it clear, the task API shouldn't really be that useful.
Task IDs are just there to match requests to responses internally
because as I explained, jobs are hard to manage generally in
such a system.
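As a sketch of what "match requests to responses internally" means (again, hypothetical names): the sender keeps a map of pending task IDs so a late or out-of-order response can still be routed to the right caller, while a duplicate or unknown response is simply dropped:

```python
# Illustrative only: internal request/response correlation by task ID.

class Correlator:
    def __init__(self):
        self._pending = {}  # task_id -> callback awaiting the response

    def send(self, task_id, callback):
        # Remember who is waiting before the request goes on the wire.
        self._pending[task_id] = callback

    def on_response(self, task_id, payload):
        # Unknown IDs (duplicates of already-handled responses, or
        # responses meant for a crashed predecessor) are dropped.
        cb = self._pending.pop(task_id, None)
        if cb is not None:
            cb(payload)
```

This is all the task ID has to do; anything beyond that belongs on the entity the operation is bound to.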
This in no way means that if we see a use case emerging that requires
some sort of infra we would not build it. I just think it would probably
be tied to some common algorithm or idiom rather than something truly generic
used by every API call.
Hope I made things clearer; sorry if I came off a bit rude.
I'm off, I have my country's birthday to celebrate.
----- Original Message -----
From: "Adam Litke" <alitke(a)redhat.com>
To: "Saggi Mizrahi" <smizrahi(a)redhat.com>
Cc: "Federico Simoncelli" <fsimonce(a)redhat.com>, "Dan
Kenigsberg" <danken(a)redhat.com>, "ybronhei"
<ybronhei(a)redhat.com>, "Barak Azulay" <bazulay(a)redhat.com>,
devel(a)ovirt.org, "Allon Mureinik" <amureini(a)redhat.com>
Sent: Monday, May 5, 2014 5:38:10 PM
Subject: vdsm tasks API design discussion
On 04/05/14 10:24 -0400, Saggi Mizrahi wrote:
>The thread became a bit too long for me to follow who said what when.
>
>So I'll just say how things are going to work:
This is not a very good way to frame a discussion. The above sentence
suggests that you are completely closed off to new ideas or working
together as a community. -1.
>VDSM is going to have non-disk-persistent tasks.
>The only reason I'm even giving a way to list running
>tasks is so that legacy operations can use it.
Have you checked with your stakeholders to make sure this limitation
is okay for the future intended use cases? I can say for sure that
you haven't asked those of us who are working on live merge.
>New verbs should not rely on the task ID. You should put
>the status on an object so the lifetime of the job is
>bound to that object and have the actual commands
>return quickly. The Task-ID is used internally to match
>requests with responses. In an ideal world without BC
>I wouldn't even have verbs to query for running tasks.
>
>You should poll the object or, in the future, use events
>to report progress.
While this approach sounds pretty nice at first glance I doubt that
these task semantics will remain simple for long. You are asking
every API that implements an asynchronous operation to define its own
"contract" for what it means to have a job still running. This means
engine will need to account for corner cases such as vdsm
crash/restart in different ways for each verb implemented. For
example, the setupNetworks verb would (probably?) be aborted by a vdsm
crash/restart but a live merge job would not (since it is tied to the
qemu process).
It would be far simpler to have vdsm return a simple list of job ids
(potentially with abstracted cursor information). Vdsm knows the
details about whether operations are still running and can provide
that pretty easily.
>As an example, instead of having
>startVM() return when the VM is up.
>You should have it return when VDSM has the VM
>object (in some map) and you should poll the
>status of the VM through the VM ID.
Right, I think we do that today. It's pretty easy to manage an async
object creation because you can think of the object like a long
running job and vdsm provides a list jobs verb (list).
>Instead of having copyImage() return when the
>copy is complete it should return when the
>all the metadata was created and persisted
>on the target image so that you can track
>the target image instead of the task.
Yep, another simple case because you are clearly creating an object
and you have a list-object verb. Let's try a more difficult case:
setupNetworks. How would that one work (and please provide some
details -- it's important)? Does engine need to duplicate the network
model and query every single aspect of all interfaces (down to the mtu
setting) in order to ensure that everything was set as it should be?
setupNetworks() gets passed an end state anyway.
>Also, you should try and make your commands idempotent so
>to simplify the flows further.
Good idea, but not always practical.
>Even though the job idiom appears to be simpler it is harder
>to manage in a clustered environment as tracking job IDs is
>much harder to coordinate than state changes.
I'm not ready to paint with such a broad brush. State machines can be
really complex. You may end up exposing too much vdsm internal
information in order to give engine enough context to understand state
changes.
>As for Task IDs being UUIDs. As I said before. VDSM will not
>enforce IDs given from the engine to be UUIDs they will be
>treated as opaque strings. There is no reason to validate that
>they are UUIDs in VDSM. It's adding a limitation on the API
>for no reason.
Again, you're writing as if the debate is closed. The purpose of a
task ID is to correlate an initial request with some future
correspondence (either an out of order return value or in the state of
another object). By allowing it to be free-form you are just begging
for it to be abused in the future. This reminds me of 'specParams',
'customProperties' and the 'options' free-form dictionary parameters
that some vdsm APIs have.
There should be no reason why taskID should be used to trojan-horse
some contextual data into vdsm.
>TaskID in the HTTP header will only work for storage verbs.
>All other subsystems will have to move to the json-rpc to
>get that. Seeing as json-rpc in VDSM is targeted for 3.5
>it shouldn't be that much of an issue.
Live merge is a really important feature for 3.5. I'm hesitant to depend on
jsonrpc unless we are absolutely certain that it will be ready to go
in time for us to integrate with it. Also, we need to understand the
upgrade/BC characteristics of jsonrpc because they will suddenly
impact live merge as well. I'd specifically like to hear what Allon
thinks about adding this dependency.
I am not sure you're designing a tasks API that is going to be
generally useful. It seems I will need to work around the missing
features in your design by adding extra complexity to my feature.
Isn't this what we're trying to avoid?
>
>----- Original Message -----
>> From: "Adam Litke" <alitke(a)redhat.com>
>> To: "Dan Kenigsberg" <danken(a)redhat.com>
>> Cc: smizrahi(a)redhat.com, "ybronhei" <ybronhei(a)redhat.com>, devel(a)ovirt.org
>> Sent: Thursday, May 1, 2014 8:28:14 PM
>> Subject: Re: [ovirt-devel] short recap of last vdsm call (15.4.2014)
>>
>> On 01/05/14 17:53 +0100, Dan Kenigsberg wrote:
>> >On Wed, Apr 30, 2014 at 01:26:18PM -0400, Adam Litke wrote:
>> >> On 30/04/14 14:22 +0100, Dan Kenigsberg wrote:
>> >> >On Tue, Apr 22, 2014 at 02:54:29PM +0300, ybronhei wrote:
>> >> >>hey,
>> >> >>
>> >> >>somehow we missed the summary of this call, and few "big" issues
>> >> >>were raised there. so i would like to share it with all and hear
>> >> >>more comments
>> >> >>
>> >> >>- task id in http header - allows engine to initiate calls with id
>> >> >>instead of following vdsm response - federico already started this
>> >> >>work, and this is mandatory for live merge feature afaiu.
>> >> >
>> >> >Adam, Federico, may I revisit this question from another angle?
>> >> >
>> >> >Why does Vdsm need to know live-merge's task id?
>> >> >As far as I understand, (vmid, disk id) are enough to identify a live
>> >> >merge process.
>> >>
>> >> A vmId + diskId can uniquely identify a block job at a single moment
>> >> in time since qemu guarantees that only a single block job can run at
>> >> any given point in time. But this gives us no way to differentiate
>> >> two sequential jobs that run on the same disk. Therefore, without
>> >> having an engine-supplied jobID, we can never be sure if a one job
>> >> finished and another started since the last time we polled stats.
>> >
>> >Why would Engine ever want to initiate a new live merge of a
>> >(vmId,diskId) before it has a conclusive result of the
>> >success/failure of the previous attempt? As far as I understand, this
>> >should never happen, and it's actually good for the API to force
>> >avoidance of such a case.
>> >
>> >> Additionally, engine-supplied UUIDs is part of a developing framework
>> >> for next-generation async tasks. Engine prefers to use a single
>> >> identifier to represent any kind of task (rather than some problem
>> >> domain specific combination of UUIDs). Adhering to this rule will
>> >> help us to converge on a single implementation of ng async tasks
>> >> moving forward.
>> >
>> >I do not think that having a (virtual) table of
>> > task_id -> vmId,diskId
>> >in Vdsm is much simpler than having it on the Engine machine.
>>
>> It needs to go somewhere. As the designers of the API we felt it
>> would be better for vdsm to hide the semantics of when a vmId,diskId
>> tuple can be considered a unique identifier. If we ever do generalize
>> the concept of a transient task to other users (setupNetworks, etc) it
>> would be a far more consumable API if engine didn't need to handle a
>> bunch of special cases about what constitutes a "job ID" and the
>> specifics of its lifetime. UUIDs are simple and already
>> well-supported. Why make it more difficult than it has to be?
>>
>> >
>> >I still find the notion of a new framework for async tasks quite
>> >useful. But as I requested before, I think we should design it first,
>> >so it fits all conceivable users. In particular, we should not tie it
>> >to the existence of a running VM. We'd better settle on persistence
>> >semantics that works for everybody (such as network tasks).
>> >
>> >Last time, the idea was struck down by Saggi and others from infra, who
>> >are afraid to repeat mistakes from the current task framework.
>>
>> Several famous quotes apply here. The only thing we have to fear is
>> fear itself :) Sometimes perfect is the enemy of good. Tasks
>> redesign was always going to be driven by the need to implement one
>> feature at first. It just so happens that we volunteered to take a
>> stab at it for live merge. It's clear that we won't be able to
>> completely replace the old tasks and get this feature out in one pass.
>> We believe the general principles of our tasks are generally
>> extensible to cover new use cases in the future:
>>
>> * Jobs are given an engine-supplied UUID when started
>> * There is a well-known way to check if a job is running or not
>> * There is a well-known way to test if a finished job succeeded or
>> failed.
>>
>> I believe we did spend quite a bit of time in March coming up with a
>> design for NG tasks.
>>
>> Unfortunately it was infra who made our jobs vm-specific by requiring
>> the job status to be passed by getVMStats rather than an
>> object-agnostic getJobsStatus stand-alone API that could conglomerate
>> all job types into a single response.
>>
>> >>
>> >> >
>> >> >If we do not have a task id, we do not need to worry on how to pass
>> >> >it, and where to persist it.
>> >>
>> >> There are at least 3 reasons to persist a block job ID:
>> >> * To associate a specific block job operation with a specific
>> >> engine-initiated flow.
>> >> * So that you can clean up after a job that completed when vdsm could
>> >> not receive the completion event.
>> >
>> >But if Vdsm dies before it managed to clean up, Engine would have to
>> >perform the cleanup via another host. So having this short-loop cleanup
>> >is redundant.
>> >
>>
>> Fair enough. We'll be doing the volume chain scan for every native VM
>> disk at VM startup. The only exception is if we are recovering and
>> the running VM's recovery file does not show any outstanding block
>> jobs. In that case we have definitive information that a rescan will
>> not be required.
>>
>> >> * Since we must ask libvirt about block job events on a per VM, per
>> >> disk basis, tracking the devices on which we expect block jobs
>> >> enables us to eliminate wasteful calls to libvirt.
>> >
>> >This can be done by an in-memory cache.
>>
>> Sure, but then you miss out on the other benefits I've enumerated.
>>
>> >
>> >>
>> >> Hope this makes the rationale a bit clearer...
>> >
>> >Yes, but I am not yet convinced...
>> >
>>
>> --
>> Adam Litke
>>
--
Adam Litke