Combining a virtual machine image with multiple attached disks

I have a few VMs in a Red Hat Virtualization (RHV) environment (RHV-M 4.1) managed by a third party. I am now migrating those VMs to my own cloud, which runs OpenStack Ussuri with the KVM hypervisor and Glance image storage.

The third party shuts down each VM and hands over the VM image together with its attached volume disks. There are three folders containing the images for each VM. Each folder contains the base OS image plus the attached LVM disk images (from time to time they added hard disks and used LVM for storing data).

Is there a way to export all these images from RHV-M itself as a single image file instead of multiple image files? Is this possible? If so, how can I combine all these disk images into a single image that can be uploaded to our cloud's Glance storage as one image? Or am I asking a question that is theoretically not possible?

Thanks in advance for sharing your thoughts,
Kris.

On Mon, Aug 2, 2021 at 12:22 PM <kkchn.in@gmail.com> wrote:
Is there a way to export all these images from RHV-M itself as a single image file instead of multiple image files? If possible, how can I combine all these disk images into a single image that can be uploaded to our cloud's Glance storage as one image?
It is not clear what VM you are trying to export. If you share the libvirt XML of this VM it will be clearer; you can get it with "sudo virsh -r dumpxml vm-name".

RHV supports downloading disks as one image per disk, which you can then move to another system. We also have export to OVA, which creates one tar file with all the exported disks, if that helps.

Nir
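As an illustration of the OVA option (a sketch, not from the original thread; vm1.ova and the disk file name are placeholders): the exported OVA is a tar archive holding the OVF descriptor plus one image file per disk, so it gives you a single file to move around while each disk stays a separate image inside it:

$ tar tvf vm1.ova                      # list the OVF descriptor and the per-disk images in the archive
$ mkdir vm1 && tar xf vm1.ova -C vm1   # unpack the archive
$ qemu-img info vm1/<disk-image>       # inspect the format and size of each extracted disk image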

I asked our VM maintainer to run, as superuser:

# virsh -r dumpxml vm-name_blah

but there was no output: "No matching domains found" was the TTY output on that RHV-M node when I executed the command.

Then I tried "# virsh list", and it doesn't list any VMs either. How come? Does the RHV-M node need a license key or something to enable the CLI so it can list VMs or dump their XML with virsh? Anyway, I want to know what I should ask the maintainer to provide so that I have a working CLI that can do these tasks with command-line utilities in RHV-M.

I have one more question: which command can I execute on an RHV-M node to manually export a VM (not through the GUI portal) to a required format? For example:
1. I need to get one VM and the disks attached to it as raw images. Is this possible, and how?
2. Another VM and its attached disks as OVA (or another suitable format) for uploading to Glance?

Each VM is around 200 to 300 GB including its disk volumes, so where should the images be exported to, and which path should I specify? To the host node? If the host doesn't have enough space, can I use an NFS mount, and how do I specify the target location where the VM image gets stored in that case?

Thanks in advance.

On Tue, Aug 3, 2021 at 7:29 PM KK CHN <kkchn.in@gmail.com> wrote:
I asked our VM maintainer to run "# virsh -r dumpxml vm-name_blah" as superuser, but there was no output: "No matching domains found" was the TTY output on that RHV-M node.
Then I tried "# virsh list"; it doesn't list any VMs either. How come? Does the RHV-M node need a license key or something to enable the CLI to list VMs or dump their XML with virsh?
RHV undefines the VMs when they are not running.
Anyway, I want to know what I should ask the maintainer to provide so that I have a working CLI that can do these tasks with command-line utilities in RHV-M.
If the VM is not running you can get the VM configuration from oVirt using the API: GET /api/vms/{vm-id}. You may need more API calls to get info about the disks; follow the <links> in the returned XML.
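A minimal sketch of that API call with curl (engine-dev, the credentials, and <vm-id> are placeholders taken from the configuration discussed later in this thread; on 4.x engines the API lives under /ovirt-engine/api):

$ curl -k -u 'admin@internal:mypassword' -H 'Accept: application/xml' 'https://engine-dev/ovirt-engine/api/vms/<vm-id>'                   # VM configuration XML
$ curl -k -u 'admin@internal:mypassword' -H 'Accept: application/xml' 'https://engine-dev/ovirt-engine/api/vms/<vm-id>/diskattachments'   # follow-up call for the attached disks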
I have one more question: which command can I execute on an RHV-M node to manually export a VM (not through the GUI portal) to a required format?
For example: 1. I need to get one VM and its attached disks as raw images. Is this possible, and how?
2. Another VM and its attached disks as OVA (or another suitable format) for uploading to Glance?
Arik can add more info on exporting.
Each VM is around 200 to 300 GB including its disk volumes, so where should the images be exported to, and which path should I specify? To the host node? If the host doesn't have enough space, can I use an NFS mount, and how do I specify the target location in that case?
You have 2 options:
- Download the disks using the SDK
- Export the VM to OVA

When exporting to OVA, you will always get qcow2 images, which you can later convert to raw using "qemu-img convert".

When downloading the disks, you control the image format. For example, this will download the disk in any format, collapsing all snapshots, to the raw format:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw

This requires an ovirt.conf file:

$ cat ~/.config/ovirt.conf
[engine-dev]
engine_url = https://engine-dev
username = admin@internal
password = mypassword
cafile = /etc/pki/vdsm/certs/cacert.pem

Nir
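Since each VM in this thread has several attached disks, the download would be repeated once per disk UUID; a sketch with placeholder UUIDs, using the script's -f/--format option (shown in its help later in this thread) to get raw output directly:

$ for disk in <os-disk-uuid> <data-disk-1-uuid> <data-disk-2-uuid>; do
      python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py \
          -c engine-dev -f raw "$disk" /var/tmp/"$disk".raw
  done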

On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer <nsoffer@redhat.com> wrote:
When downloading the disks, you control the image format. For example, this will download the disk in any format, collapsing all snapshots, to the raw format:
$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw

To perform this, which modules/packages need to be installed on the RHV-M host node? Do the RHV-M hosts come with python3 installed by default, or do I need to install python3 on the RHV-M node and then use pip3 to install download_disk.py? What is the module name to install for this SDK, and are there any dependencies to install before the SDK, for example does Java need to be installed on the RHV-M node?

One doubt: I came across virt-v2v while searching. Can virt-v2v be used on an RHV-M node to export VMs to images, or does virt-v2v only support imports from other hypervisors into RHV-M?

"This requires an ovirt.conf file" -- does the ovirt.conf file need to be created, or is it already there on any RHV-M node?

On Wednesday, 4 August 2021 03:54:36 CEST KK CHN wrote:
To perform this, which modules/packages need to be installed on the RHV-M host node? Do the RHV-M hosts come with python3 installed by default, or do I need to install python3 on the RHV-M node?
You don't have to install anything on oVirt hosts. The SDK has to be installed on the machine from which you run the script. See https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/README.adoc for more details on how to install and use it.
Then, using pip3 to install download_disk.py, what is the module name to install for this SDK? Are there any dependencies before installing this SDK, for example does Java need to be installed on the RHV-M node?
One doubt: I came across virt-v2v while searching. Can virt-v2v be used on an RHV-M node to export VMs to images, or does it only support imports from other hypervisors into RHV-M?
"This requires an ovirt.conf file" -- does the ovirt.conf file need to be created, or is it already there on any RHV-M node?
Again, this has to be on the machine from which you run the script.

I appreciate everyone sharing this valuable information.

1. I am downloading CentOS 8, since the Python oVirt SDK installation notes say it works on CentOS 8, and I need to set up a VM with this OS and install the oVirt Python SDK on it. The requirement is that this CentOS 8 VM should be able to communicate with the RHV-M 4.1 host node where the oVirt shell ("Rhevm Shell [connected]#") is available, right?

2. So the checks are: pinging the host that has the "Rhevm Shell [connected]#" prompt, and being able to ssh to it from the CentOS 8 VM where python3 and the oVirt SDK are installed and where the script (with the ovirt configuration file) will be executed. Are these two connectivity checks enough for executing the script, or do any other protocols need to be enabled in the firewall between these two machines?

3. While googling I saw a post, https://users.ovirt.narkive.com/CeEW3lcj/ovirt-users-clone-and-export-vm-by-... :
action vm myvm export --storage_domain-name myexport
Will this command export, and in which format will it export to the export domain? Is there any option for this command to specify a supported format for the exported VM image? This needs to be executed from the "Rhevm Shell [connected]#" TTY, right?
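A quick way to sanity-check the connectivity in point 2 (a sketch; engine-dev stands for the real engine FQDN) is to confirm from the CentOS 8 VM that the engine API answers over HTTPS, since the SDK script talks to the engine URL on port 443 rather than needing ssh access to the host:

$ ping -c 3 engine-dev                                                                # name resolution and basic reachability
$ curl -k -s -o /dev/null -w '%{http_code}\n' https://engine-dev/ovirt-engine/api     # an authentication error such as 401 still proves HTTPS connectivity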

Hi all,

I have installed ovirt-engine-sdk-python using pip3 in a python3 virtual environment on my personal laptop:

(base) kris@my-ThinkPad-X270:~$ pip install ovirt-engine-sdk-python
Collecting ovirt-engine-sdk-python
  Downloading ovirt-engine-sdk-python-4.4.14.tar.gz (335 kB)
Collecting pycurl>=7.19.0
  Downloading pycurl-7.43.0.6.tar.gz (222 kB)
Requirement already satisfied: six in ./training/lib/python3.7/site-packages (from ovirt-engine-sdk-python) (1.15.0)
Building wheels for collected packages: ovirt-engine-sdk-python, pycurl
  Building wheel for ovirt-engine-sdk-python (setup.py) ... done
  Building wheel for pycurl (setup.py) ... done
Successfully built ovirt-engine-sdk-python pycurl
Installing collected packages: pycurl, ovirt-engine-sdk-python
Successfully installed ovirt-engine-sdk-python-4.4.14 pycurl-7.43.0.6

(base) kris@my-ThinkPad-X270:~$ pip --version
pip 21.0.1 from /home/kris/training/lib/python3.7/site-packages/pip (python 3.7)

I also created this file in the home directory of user kris on the same laptop // Is what I am doing right?

(base) kris@my-ThinkPad-X270:~$ cat ~/.config/ovirt.conf
[engine-dev]
engine_url=https://engine-dev            // what is this engine URL? Is it the RHV-M oVirt URL that our service provider should give us?
username=admin@internal
password=mypassword
cafile=/etc/pki/vdsm/certs/cacert.pem    // I don't have any cacert.pem file on my laptop; there is no /etc/pki/vdsm/certs/ folder at all

But I couldn't find any examples folder containing download_disk.py, so I downloaded ovirt-engine-sdk-python-4.1.3.tar.gz and untarred it. There I can find download_disk.py among the example scripts (add_vm.py, list_vms.py, upload_disk.py, download_disk.py, and so on) in:

(base) kris@my-ThinkPad-X270:~/OVIRT_PYTON_SDK_SOURCE_FILES/ovirt-engine-sdk-python-4.1.3/examples$

Can I now execute the following from my laptop, so that it connects to the RHV-M host node and downloads the disks?

(base) kris@my-ThinkPad-X270:$ python3 download_disk.py -c engine-dev MY_vm_blah_Id /var/tmp/disk1.raw    // is this correct?

My laptop doesn't have space to accommodate 300 GB, so can I attach a USB hard disk and specify its mount point? Or do you have any other suggestions or corrections? Because it is a live host, I can't do trial and error on the service maintainer's RHV-M machines. Kindly correct me if anything is wrong in my steps; I have to run this script from my laptop against the RHV-M host machines without breaking anything. Kindly guide me.

Kris

On Thu, Aug 5, 2021 at 5:12 PM KK CHN <kkchn.in@gmail.com> wrote:
I have installed ovirt-engine-sdk-python using pip3 in a python3 virtual environment on my personal laptop
I'm not sure this is the right version. Use the RPMs provided by oVirt instead. ...
I also created this file in the home directory of user kris on the same laptop // Is what I am doing right?
(base) kris@my-ThinkPad-X270:~$ cat ~/.config/ovirt.conf
[engine-dev]
This can be any name you like for this setup.
engine_url=https://engine-dev // what is this engine URL? Is it the RHV-M oVirt URL that our service provider should give us?
This is your engine URL, the same URL you use to access the engine UI.
username=admin@internal
password=mypassword
cafile=/etc/pki/vdsm/certs/cacert.pem // I don't have any cacert.pem file on my laptop; there is no /etc/pki/vdsm/certs/ folder at all
This path works on an oVirt host. You can download the engine cafile from your engine using:

$ curl -k 'https://engine-dev/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' > engine-dev.pem

and use the path to that cafile:

cafile=/home/kris/engine-dev.pem
...
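As an optional check (not from the original thread), the downloaded file can be inspected with openssl to confirm it really is the engine CA certificate and has not expired:

$ openssl x509 -in engine-dev.pem -noout -subject -enddate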
But I couldn't find any examples folder containing download_disk.py, so I downloaded ovirt-engine-sdk-python-4.1.3.tar.gz
and untarred it; there I am able to find download_disk.py
You need to use the oVirt SDK from 4.4; the 4.1 SDK is too old. Also, if you try to run this on another host, you need to install more packages.

1. Install the oVirt release rpm:

dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm

2. Install the required packages:

dnf install python3-ovirt-engine-sdk4 ovirt-imageio-client

$ rpm -q python3-ovirt-engine-sdk4 ovirt-imageio-client
python3-ovirt-engine-sdk4-4.4.13-1.el8.x86_64
ovirt-imageio-client-2.2.0-1.el8.x86_64

$ find /usr/share/ -name download_disk.py
/usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py

Now you can use download_disk.py to download images from the oVirt setup.
...
Can I now execute the following from my laptop, so that it will connect to the RHV-M host node and download the disks?
Yes
(base) kris@my-ThinkPad-X270:$ python3 download_disk.py -c engine-dev MY_vm_blah_Id /var/tmp/disk1.raw //is this correct ?
Almost, see the help:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py -h
usage: download_disk.py [-h] -c CONFIG [--debug] [--logfile LOGFILE]
                        [-f {raw,qcow2}] [--use-proxy]
                        [--max-workers MAX_WORKERS]
                        [--buffer-size BUFFER_SIZE]
                        [--timeout-policy {legacy,pause,cancel}]
                        disk_uuid filename

Download disk

positional arguments:
  disk_uuid             Disk UUID to download.
  filename              Path to write downloaded image.

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        Use engine connection details from [CONFIG] section in ~/.config/ovirt.conf.
  --debug               Log debug level messages to logfile.
  --logfile LOGFILE     Log file name (default example.log).
  -f {raw,qcow2}, --format {raw,qcow2}
                        Downloaded file format. For best compatibility, use qcow2 (default qcow2).
  --use-proxy           Download via proxy on the engine host (less efficient).
  --max-workers MAX_WORKERS
                        Maximum number of workers to use for download. The default (4) improves performance when downloading a single disk. You may want to use a lower number if you download many disks at the same time.
  --buffer-size BUFFER_SIZE
                        Buffer size per worker. The default (4194304) gives good performance with the default number of workers. If you use a smaller number of workers you may want to use a larger value.
  --timeout-policy {legacy,pause,cancel}
                        The action to be taken for a timed-out transfer.

Example command, downloading disk id 3649d84b-6f35-4314-900a-5e8024e3905c from engine configuration myengine to file disk.img, converting the format to raw:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py -c myengine --format raw 3649d84b-6f35-4314-900a-5e8024e3905c disk.img
[   0.0 ] Connecting...
[   0.5 ] Creating image transfer...
[   2.8 ] Transfer ID: 62c99f08-e58c-4cc2-8c72-9aa9be835d0f
[   2.8 ] Transfer host name: host4
[   2.8 ] Downloading image...
[ 100.00% ] 6.00 GiB, 11.62 seconds, 528.83 MiB/s
[  14.4 ] Finalizing image transfer...

You can check the image with qemu-img info:

$ qemu-img info disk.img
image: disk.img
file format: raw
virtual size: 6 GiB (6442450944 bytes)
disk size: 1.69 GiB
My laptop doesn't have space to accommodate 300 GB, so can I attach a USB hard disk and specify its mount point?
This will work, or you can mount an NFS server and download to the mountpoint, e.g.:

$ mount | grep storage
storage:/export/02 on /tmp/export type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.52,local_lock=none,addr=192.168.122.32)

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py -c myengine --format raw 3649d84b-6f35-4314-900a-5e8024e3905c /tmp/export/disk.img
[   0.0 ] Connecting...
[   0.2 ] Creating image transfer...
[   2.4 ] Transfer ID: cb9d5a0f-1c3c-4d1f-aa49-85a1aedd0108
[   2.4 ] Transfer host name: host4
[   2.4 ] Downloading image...
[ 100.00% ] 6.00 GiB, 11.19 seconds, 549.27 MiB/s
[  13.6 ] Finalizing image transfer...

Or do you have any other suggestions or corrections? Because it is a live host, I can't do trial and error on the service maintainer's RHV-M machines.

It makes sense to do all this on a VM, but downloading the image on the actual host can be much faster, since we avoid the network.

However, there is one big issue: I noticed that you mentioned oVirt 4.1 in your original mail. This version is too old; you cannot use download_disk.py from oVirt 4.4 to download images from oVirt 4.1.

In 4.1 you can download images from the UI, but only if the VM has no snapshots, and only in the actual format of the disk (e.g. qcow2). If you need a raw image you will have to convert the image later using:

qemu-img convert -f qcow2 -O raw disk.qcow2 disk.raw

You can upgrade this system, or at least part of it (the engine and one host), to 4.4, and then you can download all the disks using the SDK, or use export-to-OVA if OVA works better for your use case.

Nir
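To close the loop with the original goal of getting these disks into Glance on the Ussuri cloud, a sketch of the final upload step with the standard openstack client (file and image names are placeholders; use --disk-format qcow2 instead if you keep the qcow2 file):

$ qemu-img convert -f qcow2 -O raw vm1-os-disk.qcow2 vm1-os-disk.raw                                    # only needed when a raw image is wanted
$ openstack image create --disk-format raw --container-format bare --file vm1-os-disk.raw vm1-os-disk   # one Glance image per downloaded disk, mirroring the one-image-per-disk export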