Re: [ovirt-devel] [ovirt-users] Unremovable disks created through the API

On Tue, Mar 6, 2018 at 11:19 PM, Richard W.M. Jones <rjones@redhat.com> wrote:
On Tue, Mar 06, 2018 at 11:14:40PM +0200, Arik Hadas wrote:
On Tue, Mar 6, 2018 at 9:18 PM, Richard W.M. Jones <rjones@redhat.com> wrote:
I've been playing with disk uploads through the API. As a result I now have lots of disks in the states "Paused by System" and "Paused by User". They are not attached to any VM, and I'm logged in as admin@internal, but there seems to be no way to use them. Even worse, I've now run out of space, so I can't do anything else.
How can I remove them?
Screenshot: http://oirase.annexia.org/tmp/ovirt.png
Hi Richard,
Selecting Upload->Cancel at that tab will remove such a disk. Note that it may take a minute or two.
Yes, that works, thanks.
(Moving to devel-list) BTW, I think that the import process should include a preliminary phase where ovirt-engine is informed that the import process starts.

Currently, IIUC, the new process is designed to be:
1. virt-v2v uploads the disks
2. virt-v2v calls an API with the OVF configuration so ovirt-engine will add the VM and attach the uploaded disks to that VM

IMHO, the process should be comprised of:
1. virt-v2v calls an API with the (probably partial, since the OS and other things are unknown at that point) OVF configuration
2. virt-v2v uploads the disks
3. virt-v2v provides the up-to-date configuration

Step #1 will enable ovirt-engine:
1. Most importantly, to clean up uploaded disks in case of an error during the import process. Otherwise, we require the client to clean them up, which can be challenging (e.g., if the virt-v2v process crashes).
2. To inform the user that the process has started, so they won't be surprised by disks suddenly being uploaded. That will give context to these upload operations.
3. To inform the user about the progress of the import process, much like we do today when importing VMs from vSphere to RHV.
4. To perform validations on the (partial) VM configuration, e.g., verifying that no VM with the same name exists, verifying there is enough space (optionally mapping different disks to different storage domains), and so on, before uploading the disks.

The gaps I see:
1. We don't have a command for step #1 yet, but that's something we can provide relatively quickly. We need it also to support uploading an OVA via oVirt's webadmin.
2. We have a command for step #3, but it is not exposed via the API.
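[Editor's sketch] The phase-1 validations described above (unique name, enough space) could look roughly like the following plain-Python model. All names here are illustrative assumptions, not a real ovirt-engine or SDK API:

```python
# Sketch of the phase-1 validations the engine could run before any
# disk is uploaded. All function and field names are hypothetical.

def validate_partial_config(partial, existing_names, free_space):
    """Check a partial VM configuration: unique name, and enough
    free space for the disks' virtual sizes."""
    errors = []
    if partial["name"] in existing_names:
        errors.append("a VM with the same name already exists")
    needed = sum(d["virtual_size"] for d in partial["disks"])
    if needed > free_space:
        errors.append(
            "not enough free space: need %d bytes, have %d" % (needed, free_space))
    return errors
```

With something like this behind the step #1 command, the import would be rejected before any bytes move, instead of leaving orphaned disks behind.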
Rich.
-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com Fedora Windows cross-compiler. Compile Windows programs, test, and build Windows installers. Over 100 libraries supported. http://fedoraproject.org/wiki/MinGW

On Wed, Mar 07, 2018 at 10:42:31AM +0200, Arik Hadas wrote:
(Moving to devel-list) BTW, I think that the import process should include a preliminary phase where ovirt-engine is informed that the import process starts.
Currently, IIUC, the new process is designed to be: 1. virt-v2v uploads the disks 2. virt-v2v calls an API with the OVF configuration so ovirt-engine will add the VM and attach the uploaded disks to that VM
Yes, this is how it happens now, see: https://www.redhat.com/archives/libguestfs/2018-March/msg00021.html
IMHO, the process should be comprised of: 1. virt-v2v calls an API with the (probably partial since the OS and other things are unknown at that point) OVF configuration
Almost nothing is known at this point; I'm not sure what we could provide. Perhaps just the number and virtual size of the disks. It doesn't sound like it would be OVF, but something else.
Rich.

On Wed, Mar 7, 2018 at 12:05 PM, Richard W.M. Jones <rjones@redhat.com> wrote:
On Wed, Mar 07, 2018 at 10:42:31AM +0200, Arik Hadas wrote:
(Moving to devel-list) BTW, I think that the import process should include a preliminary phase where ovirt-engine is informed that the import process starts.
Currently, IIUC, the new process is designed to be: 1. virt-v2v uploads the disks 2. virt-v2v calls an API with the OVF configuration so ovirt-engine will add the VM and attach the uploaded disks to that VM
Yes, this is how it happens now, see:
https://www.redhat.com/archives/libguestfs/2018-March/msg00021.html
IMHO, the process should be comprised of: 1. virt-v2v calls an API with the (probably partial since the OS and other things are unknown at that point) OVF configuration
Almost nothing is known at this point; I'm not sure what we could provide. Perhaps just the number and virtual size of the disks. It doesn't sound like it would be OVF, but something else.
Interesting, that contradicts my intuition - I would imagine that most of the things are actually known (the things that appear in the top-level part of the domain xml: memory size, number of CPUs, name, ...) and only things that depend on the content of the disks or on installations during the conversion are unknown. But anyway, it is enough IMO to send the name, memory, CPU and size of the disks to present something useful to the user and make the necessary validations at that point.
Rich.

On Wed, Mar 07, 2018 at 01:26:39PM +0200, Arik Hadas wrote:
Interesting, that contradicts my intuition - I would imagine that most of the things are actually known (the things that appear in the top-level part of the domain xml: memory size, number of CPUs, name, ...) and only things that depend on the content of the disks or on installations during the conversion are unknown. But anyway, it is enough IMO to send the name, memory, CPU and size of the disks to present something useful to the user and make the necessary validations at that point.
Some of those things are known, but they didn't seem to me to be that interesting for oVirt to know in advance. In any case, what's precisely known before conversion is:
(1) The 'source' struct and sub-structs: https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6...
(2) The 'overlay' struct (one per disk): https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6... Note only virtual disk size is known, which is near to useless for provisioning storage.
(3) The 'target' struct (one per disk): https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6...
What's unknown are guest capabilities (hence nothing about what devices should be presented to the guest), inspection data, target bus mapping, real size of disks, etc.
Rich.
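[Editor's sketch] Given those structs, the pre-conversion payload would be tiny - roughly a name plus the number and virtual size of the disks. A sketch in plain Python (the field names are invented; per the thread, only virtual sizes are known at this stage):

```python
def preconversion_payload(source_name, overlays):
    """Build the minimal data known before conversion: the source VM's
    name plus the count and virtual size of each disk. Real/used size,
    guest capabilities, and inspection data are not available yet."""
    return {
        "name": source_name,
        "disk_count": len(overlays),
        "disks": [{"virtual_size": o["virtual_size"]} for o in overlays],
    }
```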

On Wed, Mar 7, 2018 at 1:41 PM, Richard W.M. Jones <rjones@redhat.com> wrote:
On Wed, Mar 07, 2018 at 01:26:39PM +0200, Arik Hadas wrote:
Interesting, that contradicts my intuition - I would imagine that most of the things are actually known (the things that appear in the top-level part of the domain xml: memory size, number of CPUs, name, ...) and only things that depend on the content of the disks or on installations during the conversion are unknown. But anyway, it is enough IMO to send the name, memory, CPU and size of the disks to present something useful to the user and make the necessary validations at that point.
Some of those things are known, but they didn't seem to me to be that interesting for oVirt to know in advance. In any case what's precisely known before conversion is:
(1) The 'source' struct and sub-structs:
https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6ba9bdf70a3/v2v/types.mli#L59
(2) The 'overlay' struct (one per disk):
https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6ba9bdf70a3/v2v/types.mli#L175
Note only virtual disk size is known, which is near to useless for provisioning storage.
(3) The 'target' struct (one per disk):
https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6ba9bdf70a3/v2v/types.mli#L191
What's unknown are guest capabilities (hence nothing about what devices should be presented to the guest), inspection data, target bus mapping, real size of disks, etc.
I see. I think it is sufficient - the information from the 'source' struct seems enough to produce a representative VM entity in the database that would be reflected in the UI with status 'importing' and used for general validations, and the estimated size in the 'target' struct would be enough for storage validations and optionally for choosing the right target storage domain. The other things are relatively hidden in oVirt's UI and can be added at the last phase. BTW, that's how import from VMware/Xen currently works - we add a VM entity based on the domain XML we get from vCenter, and at the last phase add the missing parts when getting the generated OVF from virt-v2v. So, for instance, the VM would have no graphics device until that last phase.
Rich.

On 07 Mar 2018, at 14:20, Arik Hadas <ahadas@redhat.com> wrote:
On Wed, Mar 7, 2018 at 1:41 PM, Richard W.M. Jones <rjones@redhat.com> wrote:
On Wed, Mar 07, 2018 at 01:26:39PM +0200, Arik Hadas wrote:
Interesting, that contradicts my intuition - I would imagine that most of the things are actually known (the things that appear in the top-level part of the domain xml: memory size, number of CPUs, name, ...) and only things that depend on the content of the disks or on installations during the conversion are unknown. But anyway, it is enough IMO to send the name, memory, CPU and size of the disks to present something useful to the user and make the necessary validations at that point.
Some of those things are known, but they didn't seem to me to be that interesting for oVirt to know in advance. In any case what's precisely known before conversion is:
(1) The 'source' struct and sub-structs:
https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6ba9bdf70a3/v2v/types.mli#L59
(2) The 'overlay' struct (one per disk):
https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6ba9bdf70a3/v2v/types.mli#L175
Note only virtual disk size is known, which is near to useless for provisioning storage.
(3) The 'target' struct (one per disk):
https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6ba9bdf70a3/v2v/types.mli#L191
What's unknown are guest capabilities (hence nothing about what devices should be presented to the guest), inspection data, target bus mapping, real size of disks, etc.
I see. I think it is sufficient - the information from the 'source' struct seems enough just to produce a representative VM entity in the database that would be reflected in the UI with status 'importing' and for general validations,

For having an entity, yes. For meaningful validations, not really. It's not so interesting to see much in oVirt, at least not for the virt-v2v command line usage.

and the estimated size on the 'target' struct would be enough for storage validations and optionally for choosing the right target storage domain. The other things are relatively hidden in oVirt's UI and can be added at the last phase.

What I do not like that much about it is that you change the VM at the end substantially, and for the import duration it's locked anyway. The name has a value, and progress, but again, only for an oVirt GUI user - and when you are driving the process from the cmdline it's just not that important. It's not so difficult to do it like that "externally" - just blindly create a complete stub VM in your caller app (wrapper; another integration), and then replace it. We do not need to push that into v2v directly.

BTW, that's how import from VMware/Xen currently works - we add a VM entity based on the domain XML we get from vCenter and at the last phase add the missing parts when getting the generated OVF from virt-v2v. So for instance, the VM would have no graphics device until that last phase.
Rich.
_______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel

On Wed, Mar 7, 2018 at 11:48 PM, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 07 Mar 2018, at 14:20, Arik Hadas <ahadas@redhat.com> wrote:
On Wed, Mar 7, 2018 at 1:41 PM, Richard W.M. Jones <rjones@redhat.com> wrote:
On Wed, Mar 07, 2018 at 01:26:39PM +0200, Arik Hadas wrote:
Interesting, that contradicts my intuition - I would imagine that most of the things are actually known (the things that appear in the top-level part of the domain xml: memory size, number of CPUs, name, ...) and only things that depend on the content of the disks or on installations during the conversion are unknown. But anyway, it is enough IMO to send the name, memory, CPU and size of the disks to present something useful to the user and make the necessary validations at that point.
Some of those things are known, but they didn't seem to me to be that interesting for oVirt to know in advance. In any case what's precisely known before conversion is:
(1) The 'source' struct and sub-structs:
https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6ba9bdf70a3/v2v/types.mli#L59
(2) The 'overlay' struct (one per disk):
https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6ba9bdf70a3/v2v/types.mli#L175
Note only virtual disk size is known, which is near to useless for provisioning storage.
(3) The 'target' struct (one per disk):
https://github.com/libguestfs/libguestfs/blob/ba53251ab912b8bac9e00c1022adc6ba9bdf70a3/v2v/types.mli#L191
What's unknown are guest capabilities (hence nothing about what devices should be presented to the guest), inspection data, target bus mapping, real size of disks, etc.
I see. I think it is sufficient - the information from the 'source' struct seems enough just to produce a representative VM entity in the database that would be reflected in the UI with status 'importing' and for general validations,
For having an entity, yes. For meaningful validations, not really.
I was thinking mostly about validating the name and the amount of free space in the target storage domain. But in the future, when/if the imported VMs could be of a different architecture (PPC?) or a different CPU type (q35?), we can also validate that the target cluster is valid.
It's not so interesting to see much in oVirt, at least not for the virt-v2v command line usage
Functionality-wise, I agree. But in a serious management application you don't expect to see memory=0 or memory=N/A when you open the general tab of the VM that is being imported. If we can easily get the information from the 'source' struct, I think we should.
and the estimated size on the 'target' struct would be enough for storage validations and optionally for choosing the right target storage domain. The other things are relatively hidden in oVirt's UI and can be added at the last phase.
What I do not like that much about it is that you change the VM at the end substantially, and for the import duration it's locked anyway. The name has a value, and progress, but again, only for an oVirt GUI user - and when you are driving the process from the cmdline it's just not that important.
Right, but I tend to think that it's better to generalize the process so that the cmdline flow makes an extra step or two, rather than having separate and different flows.
It's not so difficult to do it like that "externally" - just blindly create a complete stub VM in your caller app (wrapper; another integration), and then replace it. We do not need to push that into v2v directly.
It depends on the amount and quality of validations you want to make at the beginning of the process and whether or not ovirt-engine should instruct v2v where to upload the disks to (maybe there should be an option to upload a disk without specifying the target storage domain so it will be selected automatically). v2v will anyway need to trigger a call to ovirt-engine in order to create that context, that wrapper command. If the data we need at the beginning is already available to v2v at that point, providing it to that call shouldn't be that complex, right?
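[Editor's sketch] The "external wrapper" approach under discussion - blindly create a stub VM first, then replace it once conversion finishes - can be modeled with plain dicts standing in for the engine. (With the real oVirt Python SDK this would be a `vms_service().add(...)` followed by a `vm_service().update(...)`; everything below is an illustration, not engine code.)

```python
def create_stub_vm(inventory, name, memory=0, cpus=0):
    """Register a placeholder VM so uploads have a context; the engine
    would mark it locked with status 'importing'."""
    if name in inventory:
        raise ValueError("duplicate VM name: %s" % name)
    inventory[name] = {"name": name, "memory": memory, "cpus": cpus,
                       "status": "importing", "disks": []}
    return inventory[name]

def finish_import(inventory, name, full_config):
    """Replace the stub with the full configuration produced at the end
    of conversion (graphics devices, inspected OS, and so on)."""
    vm = inventory[name]
    vm.update(full_config)
    vm["status"] = "down"
    return vm
```

This also shows where the name validation naturally happens: creating the stub fails up front if the name is taken, before any disk is uploaded.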
BTW, that's how import from VMware/Xen currently works - we add a VM entity based on the domain XML we get from vCenter and at the last phase add the missing parts when getting the generated OVF from virt-v2v. So for instance, the VM would have no graphics device until that last phase.
Rich.

Hi, this sounds like a good idea in general. A few interconnected questions though...
On Wed, 7 Mar 2018 10:42:31 +0200 Arik Hadas <ahadas@redhat.com> wrote:
IMHO, the process should be comprised of: 1. virt-v2v calls an API with the (probably partial since the OS and other things are unknown at that point) OVF configuration 2. virt-v2v uploads the disks 3. virt-v2v provides the up-to-date configuration
Step #1 will enable ovirt-engine: 1. Most importantly, to cleanup uploaded disks in case of an error during the import process. Otherwise, we require the client to clean them up, which can be challenging (e.g., if the virt-v2v process crashes).
Who will handle the removal in case of problems? Engine, after a timeout? Or is the only benefit that the administrator can remove all disks in one step by removing the VM? Note that the uploads do not time out at the moment, however strange that might be. So I assume removing the disks/VM will be impossible anyway because of locking.
2. To inform the user that the process has started - so he won't get surprised by seeing disks being uploaded suddenly. That will give a context to these upload operations.
The uploaded disks will still remain unattached though. Or do you plan for Engine to create and attach the disks?
3. To inform the user about the progress of the import process, much like we do today when importing VMs from vSphere to RHV.
How will this be handled? Will Engine report the progress in the Virtual Machines view and compute something based on the upload progress? Tomas
4. To perform validations on the (partial) VM configuration, e.g., verifying that no VM with the same name exists/verifying there is enough space (optionally mapping different disks to different storage devices) and so on, before uploading the disks.
The gaps I see: 1. We don't have a command for step #1 yet but that's something we can provide relatively quickly. We need it also to support uploading an OVA via oVirt's webadmin. 2. We have a command for step #3, but it is not exposed via the API.
-- Tomáš Golembiovský <tgolembi@redhat.com>

On Wed, Mar 7, 2018 at 1:01 PM, Tomáš Golembiovský <tgolembi@redhat.com> wrote:
Hi,
this sounds like a good idea in general. A few interconnected questions though...
On Wed, 7 Mar 2018 10:42:31 +0200 Arik Hadas <ahadas@redhat.com> wrote:
IMHO, the process should be comprised of: 1. virt-v2v calls an API with the (probably partial since the OS and other things are unknown at that point) OVF configuration 2. virt-v2v uploads the disks 3. virt-v2v provides the up-to-date configuration
Step #1 will enable ovirt-engine: 1. Most importantly, to cleanup uploaded disks in case of an error during the import process. Otherwise, we require the client to clean them up, which can be challenging (e.g., if the virt-v2v process crashes).
Who will handle the removal in case of problems? Engine after timeout? Or is the only benefit that administrator can remove all disks in one step by removing the VM?
So I was thinking that the wrapper command would hold the number of disks that should be uploaded for the VM. If for some predefined period of time no new disk is added, or an upload doesn't make any progress (assuming the uploads are done sequentially), the import operation would fail and roll back the resources (disks, VMs) that were created as part of the import process.
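[Editor's sketch] That watchdog decision could look roughly like this. The phase names and the 600-second default are assumptions for illustration, not actual engine behavior:

```python
import time

def should_fail_import(expected_disks, transfers, last_activity, now=None, timeout=600):
    """Decide whether the engine-side wrapper command should roll the
    import back: True when not all expected disks have finished and
    nothing has happened for `timeout` seconds. `last_activity` is the
    timestamp of the last byte transferred or disk added."""
    now = time.time() if now is None else now
    done = sum(1 for t in transfers if t["phase"] == "finished")
    if done >= expected_disks:
        return False  # everything arrived; nothing to roll back
    return (now - last_activity) > timeout
```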
Note that the uploads do not time out at the moment, however strange that might be. So I assume removing the disks/VM will be impossible anyway because of locking.
Yeah, I think no timeout was defined because uploading from the browser is relatively fragile; we didn't want the upload to fail, and the partial disks to be removed, due to browser issues, but rather to be able to resume the upload. But different logic can be implemented in the wrapping command. As for locking, we don't have to call RemoveDisk; instead, we can terminate the upload, which will eventually remove the disks.
2. To inform the user that the process has started - so he won't get surprised by seeing disks being uploaded suddenly. That will give a context to these upload operations.
The uploaded disks will still remain unattached though. Or do you plan for Engine to create and attach the disks?
Right, so when we have that "context" and a VM entity in the database, we can attach the disks to the VM when they are created.
3. To inform the user about the progress of the import process, much like we do today when importing VMs from vSphere to RHV.
How will this be handled? Will Engine report the progress in the Virtual Machines view and compute something based on the upload progress?
Yes, I don't see why not show that in the status field (in the VMs tab), as we do today for VMs that are being imported. The engine would need to know the estimated actual sizes of the disks in order to compute it, though.
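[Editor's sketch] Once the estimated actual sizes are known, the computation itself is simple - weight each upload by the disk's estimated actual (not virtual) size. A sketch, where `transferred` holds the bytes moved so far per disk:

```python
def import_progress(transferred, estimated_sizes):
    """Overall import progress (0-100), weighting each disk's upload
    by its estimated actual size."""
    total = sum(estimated_sizes)
    if total == 0:
        return 0
    # Cap each disk at its estimate in case compression estimates were low.
    moved = sum(min(t, s) for t, s in zip(transferred, estimated_sizes))
    return 100 * moved // total
```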
Tomas
4. To perform validations on the (partial) VM configuration, e.g., verifying that no VM with the same name exists/verifying there is enough space (optionally mapping different disks to different storage devices) and so on, before uploading the disks.
The gaps I see: 1. We don't have a command for step #1 yet but that's something we can provide relatively quickly. We need it also to support uploading an OVA via oVirt's webadmin. 2. We have a command for step #3, but it is not exposed via the API.
-- Tomáš Golembiovský <tgolembi@redhat.com>

On Wed, Mar 07, 2018 at 03:12:58PM +0200, Arik Hadas wrote:
If for some predefined period of time no new disk is added, or an upload doesn't make any progress (assuming the uploads are done sequentially), the import operation would fail and roll back the resources (disks, VMs) that were created as part of the import process.
At the moment we're actually trying to remove the disk on failure. However, the disk_service.remove() API does nothing (it doesn't even fail). Perhaps because the transfer isn't finalized on the error path? Anyway, the code - which needs review - is: https://www.redhat.com/archives/libguestfs/2018-March/msg00024.html
Rich.

On Wed, Mar 7, 2018 at 3:37 PM, Richard W.M. Jones <rjones@redhat.com> wrote:
On Wed, Mar 07, 2018 at 03:12:58PM +0200, Arik Hadas wrote:
If for some predefined period of time no new disk is added, or an upload doesn't make any progress (assuming the uploads are done sequentially), the import operation would fail and roll back the resources (disks, VMs) that were created as part of the import process.
At the moment we're actually trying to remove the disk on failure. However, the disk_service.remove() API does nothing (it doesn't even fail). Perhaps because the transfer isn't finalized on the error path?
That's weird - it should fail to acquire the disk's lock. In any case, I think that removing the disk this way would be wrong, as we also store an entity in the image_transfer table that should be removed (otherwise, another attempt to upload the same disk would probably fail). Daniel/Nir, I can't find a way to cancel an ongoing upload-image operation through the API (as we have in the webadmin) - am I missing it, or is it missing?
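[Editor's sketch] A toy model of the locking semantics being described - not engine code, just an illustration: `remove()` is refused while the transfer holds the disk's lock, whereas terminating the upload releases the lock, removes the partial disk, and clears the image_transfer record:

```python
class UploadedDisk:
    """Models the behavior discussed above: the upload locks the disk,
    so a plain remove cannot proceed; terminating the upload is the
    path that cleans everything up."""
    def __init__(self):
        self.locked = True           # locked for the duration of the upload
        self.exists = True
        self.transfer_record = True  # row in the image_transfer table

    def remove(self):
        if self.locked:
            return False             # refused: lock held by the transfer
        self.exists = False
        return True

    def terminate_upload(self):
        self.locked = False
        self.exists = False          # engine eventually removes the partial disk
        self.transfer_record = False # image_transfer entity cleaned up too
```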
Anyway the code - which needs review - is:
https://www.redhat.com/archives/libguestfs/2018-March/msg00024.html
Rich.

On Wed, 7 Mar 2018 15:12:58 +0200 Arik Hadas <ahadas@redhat.com> wrote:
On Wed, Mar 7, 2018 at 1:01 PM, Tomáš Golembiovský <tgolembi@redhat.com> wrote:
Hi,
this sounds like a good idea in general. A few interconnected questions though...
On Wed, 7 Mar 2018 10:42:31 +0200 Arik Hadas <ahadas@redhat.com> wrote:
IMHO, the process should be comprised of: 1. virt-v2v calls an API with the (probably partial since the OS and other things are unknown at that point) OVF configuration 2. virt-v2v uploads the disks 3. virt-v2v provides the up-to-date configuration
Step #1 will enable ovirt-engine: 1. Most importantly, to cleanup uploaded disks in case of an error during the import process. Otherwise, we require the client to clean them up, which can be challenging (e.g., if the virt-v2v process crashes).
Who will handle the removal in case of problems? Engine after timeout? Or is the only benefit that administrator can remove all disks in one step by removing the VM?
So I was thinking that the wrapper command would hold the number of disks that should be uploaded for the VM. If for some predefined period of time no new disk is added, or an upload doesn't make any progress (assuming the uploads are done sequentially), the import operation would fail and roll back the resources (disks, VMs) that were created as part of the import process.
Note that the uploads do not time out at the moment, however strange that might be. So I assume removing the disks/VM will be impossible anyway because of locking.
Yeah, I think no timeout was defined because uploading from the browser is relatively fragile; we didn't want the upload to fail, and the partial disks to be removed, due to browser issues, but rather to be able to resume the upload. But different logic can be implemented in the wrapping command.
Would it make sense to add a timeout parameter to the API? That way the client can choose whether it wants one and how long. No timeout could still be the default. Tomas
As for locking, we don't have to call RemoveDisk but instead, to terminate the upload which will eventually remove the disks.
2. To inform the user that the process has started - so he won't get surprised by seeing disks being uploaded suddenly. That will give a context to these upload operations.
The uploaded disks will still remain unattached though. Or do you plan for Engine to create and attach the disks?
Right, so when we have that "context" and a VM entity in the database, we can attach the disks to the VM when they are created.
3. To inform the user about the progress of the import process, much like we do today when importing VMs from vSphere to RHV.
How will this be handled? Will Engine report the progress in the Virtual Machines view and compute something based on the upload progress?
Yes, I don't see why not show that in the status field (in the VMs tab), as we do today for VMs that are being imported. The engine would need to know the estimated actual sizes of the disks in order to compute it, though.
Tomas
4. To perform validations on the (partial) VM configuration, e.g., verifying that no VM with the same name exists/verifying there is enough space (optionally mapping different disks to different storage devices) and so on, before uploading the disks.
The gaps I see: 1. We don't have a command for step #1 yet but that's something we can provide relatively quickly. We need it also to support uploading an OVA via oVirt's webadmin. 2. We have a command for step #3, but it is not exposed via the API.
-- Tomáš Golembiovský <tgolembi@redhat.com>
-- Tomáš Golembiovský <tgolembi@redhat.com>
participants (4):
- Arik Hadas
- Michal Skrivanek
- Richard W.M. Jones
- Tomáš Golembiovský