
How can we migrate virtual servers between data centers that do not share the same SAN? There are limitations, including connection technology, depending on the storage model used. For example, Dell PowerStore uses only one iSCSI network, while Dell EMC Compellent uses two separate iSCSI networks. Connecting these storage networks is not possible, if only for security and redundancy reasons. Export domains are marked as deprecated in 4.4.x. Are there plans for live storage migration to be implemented in 4.4.x or 4.5.x?

On Tue, Mar 16, 2021 at 3:27 PM Daniel Gurgel <danieldemoraisgurgel@gmail.com> wrote:
How can we migrate virtual servers between data centers that do not share the same SAN? There are limitations, including connection technology, depending on the storage model used.
For example, Dell PowerStore uses only one iSCSI network, while Dell EMC Compellent uses two separate iSCSI networks. Connecting these storage networks is not possible, if only for security and redundancy reasons.
Export domains are marked as deprecated in 4.4.x.
Export domain is deprecated, but it is still available.
Are there plans for live storage migration to be implemented in 4.4.x or 4.5.x?
I don't know about plans for live migration, but offline migration has been available since 3.6: detach the storage domain from one DC and attach it to another DC (or to a completely separate setup). I'm not sure why networking should be an issue. This is the most efficient way because no data is copied, but it works only if you want to move the entire storage domain from one DC to another.

If you want to move only a few VMs, you can export each VM to OVA and import the OVA on the other DC.

Another way, which may require more work but can be faster, is to download the VM disks and upload them to the other storage.

Example - downloading a disk from one DC/system (replace <disk-uuid> with the ID of the disk to download):

# python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py \
    -c engine <disk-uuid> disk1.qcow2

And uploading to another DC/system, using a different format:

# python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
    -c engine --disk-format=raw --disk-sparse --sd-name nfs_0 disk1.qcow2

This works for a powered-off VM. If you want to move a running VM, it is possible to use the backup_vm.py example to download all the disks of a running VM. Then you can upload them to the other system and create a VM from the disks.

Here is an example backup downloading all 10 disks of a running VM:

# python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py \
    -c engine full --backup-dir /dev/shm/my-backup \
    20c83016-4c95-42d1-8aec-f75bdf62f471
[   0.0 ] Starting full backup for VM 20c83016-4c95-42d1-8aec-f75bdf62f471
[   1.9 ] Waiting until backup b9125cb5-c2eb-4f1e-953e-c2b8d5065996 is ready
[  34.4 ] Created checkpoint '222a88f7-14f2-46e5-a3aa-b618c79a6445' (to use in --from-checkpoint-uuid for the next incremental backup)
[  34.5 ] Creating image transfer for disk d6e3cca3-ca73-4169-b76e-f9928e727f9b
[  35.7 ] Image transfer d360c489-2fdb-4f4c-9d32-4bdb7d83a948 is ready
[ 100.00% ] 3.00 GiB, 0.07 seconds, 43.54 GiB/s
[  35.7 ] Finalizing image transfer
[  37.8 ] Creating image transfer for disk f7fec022-dd6d-4170-b0f1-cab3bfa59e52
[  39.0 ] Image transfer ba3fc564-2d88-4cc6-a0e7-220b132082ed is ready
[ 100.00% ] 1.00 GiB, 1.41 seconds, 726.56 MiB/s
[  40.4 ] Finalizing image transfer
[  44.5 ] Creating image transfer for disk 6797a952-6219-4baf-9b37-c7905b3e8431
[  45.7 ] Image transfer 3b8e5bf6-bc95-4d3e-bc76-a169f44259df is ready
[ 100.00% ] 2.00 GiB, 0.06 seconds, 34.33 GiB/s
[  45.7 ] Finalizing image transfer
[  47.8 ] Creating image transfer for disk e57e7263-3683-4e7e-8f91-1b3d2784ba7f
[  49.0 ] Image transfer 9dca0ce2-bcae-45f9-8372-5c3444392e06 is ready
[ 100.00% ] 1.00 GiB, 1.49 seconds, 686.21 MiB/s
[  50.5 ] Finalizing image transfer
[  51.5 ] Creating image transfer for disk def1bf0b-f4bb-4c55-acad-3407b4fa51a3
[  52.7 ] Image transfer 9420754f-26ae-463e-9b4b-8d16bbc639bf is ready
[ 100.00% ] 1.00 GiB, 1.49 seconds, 689.46 MiB/s
[  54.2 ] Finalizing image transfer
[  58.3 ] Creating image transfer for disk 505a9363-d7b0-4a31-ab24-a02157d5da0a
[  59.4 ] Image transfer 82cb3e00-cedb-486e-b0a7-938b7ad85ddf is ready
[ 100.00% ] 3.00 GiB, 0.06 seconds, 49.46 GiB/s
[  59.5 ] Finalizing image transfer
[  61.5 ] Creating image transfer for disk 29943aa5-2ffc-408e-950b-a2f6ded72f89
[  62.7 ] Image transfer fa9bc0dc-5167-4b3c-adc5-394cfb3c3b76 is ready
[ 100.00% ] 3.00 GiB, 0.06 seconds, 51.15 GiB/s
[  62.7 ] Finalizing image transfer
[  64.8 ] Creating image transfer for disk 524cff40-3f54-4d16-bd94-9902e6619190
[  65.9 ] Image transfer 7d053497-3830-4548-a9fb-d75f7b816dde is ready
[ 100.00% ] 10.00 GiB, 3.36 seconds, 2.98 GiB/s
[  69.3 ] Finalizing image transfer
[  72.4 ] Creating image transfer for disk ff71c7d3-5af3-4943-b922-b5baeb0be67c
[  73.5 ] Image transfer 5cb9efb9-537c-4e16-b9b3-44ef44ab59dd is ready
[ 100.00% ] 2.00 GiB, 0.06 seconds, 33.31 GiB/s
[  73.5 ] Finalizing image transfer
[  74.6 ] Creating image transfer for disk b49df79e-7b26-402b-bf11-328fddeca1ec
[  75.7 ] Image transfer 3ccc44f2-ba6d-4545-bc96-1b924b2c1dd8 is ready
[ 100.00% ] 2.00 GiB, 0.10 seconds, 20.20 GiB/s
[  75.8 ] Finalizing image transfer
[  77.9 ] Finalizing backup
[  78.0 ] Waiting until backup is finalized
[  78.2 ] Full backup completed successfully

# ls -lhs /dev/shm/my-backup/
total 5.4G
196K -rw-r--r--. 1 root root 193K Mar 16 22:58 29943aa5-2ffc-408e-950b-a2f6ded72f89.202103162258.full.qcow2
196K -rw-r--r--. 1 root root 193K Mar 16 22:58 505a9363-d7b0-4a31-ab24-a02157d5da0a.202103162258.full.qcow2
2.4G -rw-r--r--. 1 root root 2.4G Mar 16 22:59 524cff40-3f54-4d16-bd94-9902e6619190.202103162258.full.qcow2
196K -rw-r--r--. 1 root root 193K Mar 16 22:58 6797a952-6219-4baf-9b37-c7905b3e8431.202103162258.full.qcow2
196K -rw-r--r--. 1 root root 193K Mar 16 22:59 b49df79e-7b26-402b-bf11-328fddeca1ec.202103162258.full.qcow2
196K -rw-r--r--. 1 root root 193K Mar 16 22:58 d6e3cca3-ca73-4169-b76e-f9928e727f9b.202103162258.full.qcow2
1.1G -rw-r--r--. 1 root root 1.1G Mar 16 22:58 def1bf0b-f4bb-4c55-acad-3407b4fa51a3.202103162258.full.qcow2
1.1G -rw-r--r--. 1 root root 1.1G Mar 16 22:58 e57e7263-3683-4e7e-8f91-1b3d2784ba7f.202103162258.full.qcow2
1.1G -rw-r--r--. 1 root root 1.1G Mar 16 22:58 f7fec022-dd6d-4170-b0f1-cab3bfa59e52.202103162258.full.qcow2
196K -rw-r--r--. 1 root root 193K Mar 16 22:59 ff71c7d3-5af3-4943-b922-b5baeb0be67c.202103162258.full.qcow2

With some more work, one can build a semi-live migration using backup and upload.

Nir
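
To make the last step concrete - creating a VM from the uploaded disks - here is a minimal, untested sketch using the oVirt Python SDK (ovirtsdk4). The engine URL, credentials, cluster name, VM name, and disk UUID below are placeholders, not values from this thread:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details - adjust for your target engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()

# Create an empty VM in the target cluster (names are placeholders).
vm = vms_service.add(
    types.Vm(
        name='restored-vm',
        cluster=types.Cluster(name='target-cluster'),
        template=types.Template(name='Blank'),
    )
)

# Attach a previously uploaded disk to the new VM and activate it.
vm_service = vms_service.vm_service(vm.id)
vm_service.disk_attachments_service().add(
    types.DiskAttachment(
        disk=types.Disk(id='REPLACE-WITH-UPLOADED-DISK-UUID'),
        interface=types.DiskInterface.VIRTIO_SCSI,
        bootable=True,
        active=True,
    )
)

connection.close()

Repeat the attachment step for each uploaded disk, marking only the boot disk as bootable.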

Nir, thank you for your help and practical examples. I'll try to simulate it. In fact, most companies with multiple offices do not share the same SAN/storage infrastructure; physical limitations, network details, and time constraints make such procedures impossible. As I said, storage systems that work differently are also a limiting factor. We can count on the export domain today, but in six months we don't know if it will still be there. Most hypervisors support live/offline migration between "data centers". It would be great to have this feature in oVirt 4.5.x.

On Wed, Mar 17, 2021 at 12:01 AM Daniel Gurgel <danieldemoraisgurgel@gmail.com> wrote:
Nir, thank you for your help and practical examples. I'll try to simulate it.
In fact, most companies with multiple offices do not share the same SAN/storage infrastructure; physical limitations, network details, and time constraints make such procedures impossible. As I said, storage systems that work differently are also a limiting factor.
We can count on the export domain today, but in six months we don't know if it will still be there.
Even if export domains are still there, you'd probably want to switch to an alternative that is better tested these days and that complies with recent/future changes. Let me be more concrete here: we're adding TPM [1] and have already added NVRAM, and there's no plan to preserve them when exporting and importing from an export domain.

Nir touched on the trade-off between the alternatives that were described, and I'll add to that:

1. Detach+attach of a data domain is indeed nice in the sense that you don't copy data, but if you intend to use it as an alternative to export domains, be aware that all the disks of the VM/template must be on the storage domain(s) you detach, and that when detaching a storage domain, the entities (VMs/templates) are "unregistered" (i.e., removed) from the original data center.

2. Export/import of OVAs is the closest replacement for export domains. If you want the ability to export a VM/template from one data center and import it elsewhere, while ensuring the VM/template includes all its resources (disks, TPM, NVRAM, etc.) at the target data center and without affecting the original data center, that's the mechanism I'd go with. We made significant fixes for that recently, so I'd suggest updating to 4.4.4. A sketch of this flow follows the quoted text below.

[1] https://www.ovirt.org/develop/release-management/features/virt/tpm-device.ht...
Most hypervisors support live/offline migration between "data centers". It would be great to have this feature in oVirt 4.5.x.
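
As a concrete illustration of option 2 above (export/import via OVA), here is a minimal, untested sketch using the oVirt Python SDK; the engine URL, credentials, VM name, host name, and directory are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details - adjust for your source engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()

# Find the VM to export (name is a placeholder).
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

# Export the VM as an OVA file to a directory on one of the hosts.
vm_service.export_to_path_on_host(
    host=types.Host(name='myhost'),
    directory='/tmp',
    filename='myvm.ova',
)

connection.close()

The resulting OVA can then be copied to the other data center and imported there.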

On Wed, Mar 17, 2021 at 12:02 AM Daniel Gurgel <danieldemoraisgurgel@gmail.com> wrote:
Nir, thank you for your help and practical examples. I'll try to simulate it.
In fact, most companies with multiple offices do not share the same SAN/storage infrastructure; physical limitations, network details, and time constraints make such procedures impossible. As I said, storage systems that work differently are also a limiting factor.
We can count on the export domain today, but in six months we don't know if it will still be there.
Export domain is likely to remain in all 4.4.z releases, but I don't recommend using it, because it will be impossible to support if you have issues with it.
Most hypervisors support Live Migrate/Off line migrate between "data centers". It would be great to have this feature up to oVirt 4.5.x
I guess that by "data center" you mean a different physical data center, not the oVirt "data center" logical entity. In that case you may not be able to access storage connected to one oVirt setup from the other oVirt setup. Is this the use case you are interested in? Please try to describe the use case in more detail.

Nir

Nir, exactly! A different (physical) data center - as I said, in addition to the limitations of network connections and latency, there are different subnets/VLANs and different storage technologies.

In this case, one suggestion would be to allow live storage migration through the management interface (ovirtmgmt), assuming there is routing/connectivity between the data centers managed by the same Engine/Manager. The storage domain layer would be abstracted at that point, but it would be possible to copy a disk between storage domains in different physical/logical data centers. At the end of the process, the VM could be paused briefly so the operation can complete without data interruption/corruption.

I don't know if I'm being clear. This is only a suggestion, but XenServer/XCP and VMware are currently able to perform this type of migration. It is an important feature, and one that many users miss when they migrate to oVirt.
participants (3)
- Arik Hadas
- Daniel Gurgel
- Nir Soffer