Manually moving disks from FC to iSCSI

Hello,
I have a source oVirt environment with a storage domain on FC and a destination oVirt environment with a storage domain on iSCSI. The two environments can communicate only via the network of their respective hypervisors. The source environment, in particular, is almost isolated and I cannot attach an export domain to it or anything similar. So I am planning a direct move of the disks of some VMs through dd.

The workflow would be:
- On the destination, create a new VM with the same config and the same number of disks, each the same size as the corresponding source disk, and I think also the same allocation policy (thin provisioned vs preallocated).
- Using lvs -o+lv_tags I can detect the names of my source and destination LVs corresponding to the disks.
- When a VM is powered down, the LV that maps the disk will not be open, so I have to force its activation (both on source and on destination):

lvchange --config 'global {use_lvmetad=0}' -ay vgname/lvname

- Copy the source disk with dd over the network (I use gzip basically to limit network usage). On src_host:

dd if=/dev/src_vg/src_lv bs=1024k | gzip | ssh dest_host "gunzip | dd bs=1024k of=/dev/dest_vg/dest_lv"

- Deactivate the LVs on source and destination:

lvchange --config 'global {use_lvmetad=0}' -an vgname/lvname

- Try to power on the VM on the destination.

Some questions:
- about the overall workflow
- about dd flags, in particular if the source disks are thin vs preallocated

Thanks, Gianluca
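As a sketch only (not tested end to end here), the same pipeline can carry an optional progress indicator, an fsync on the destination and a post-copy integrity check. The VG/LV names and dest_host are the placeholders from the workflow above; status=progress needs a reasonably recent GNU coreutils dd, so drop it if your dd does not accept it:

# Activate the LVs on both sides, bypassing lvmetad as above.
lvchange --config 'global {use_lvmetad=0}' -ay src_vg/src_lv
ssh dest_host "lvchange --config 'global {use_lvmetad=0}' -ay dest_vg/dest_lv"

# Stream the disk, compressed; conv=fsync flushes the destination before dd exits.
dd if=/dev/src_vg/src_lv bs=1M status=progress \
  | gzip \
  | ssh dest_host "gunzip | dd of=/dev/dest_vg/dest_lv bs=1M conv=fsync"

# Optional integrity check: checksum the source LV and the same number of bytes
# on the destination (the destination LV may be slightly larger than the source).
SRC_BYTES=$(blockdev --getsize64 /dev/src_vg/src_lv)
md5sum /dev/src_vg/src_lv
ssh dest_host "head -c $SRC_BYTES /dev/dest_vg/dest_lv | md5sum"

# Deactivate when done.
lvchange --config 'global {use_lvmetad=0}' -an src_vg/src_lv
ssh dest_host "lvchange --config 'global {use_lvmetad=0}' -an dest_vg/dest_lv"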

On Tue, Jul 11, 2017 at 2:59 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
[...]
Some further comments:
- It is probably better/safer to use the SPM hosts for the lvchange commands, both on source and on target, as this implies metadata manipulation, correct?
- When disks are preallocated there is no problem, but when they are thin provisioned I can end up in this situation: the source disk is defined as 90 GB and over time it has grown up to 50 GB, while the destination disk just after creation will normally be only a few GB (e.g. 4 GB), so the dd command will fail when it fills up... Does this mean it would be better to create the destination disk as preallocated anyway, or is it safe to run lvextend -L+50G dest_vg/dest_lv from the command line? Will oVirt recognize its actual size or not?
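As a sketch of the size check before copying (same placeholder names as above; whether the engine then picks up an out-of-band lvextend is exactly the open question here, so treat the extend step as an assumption to verify):

# Compare the actual LV sizes in bytes on both sides.
lvs --units b --nosuffix -o lv_name,lv_size src_vg/src_lv
ssh dest_host "lvs --units b --nosuffix -o lv_name,lv_size dest_vg/dest_lv"

# If the destination LV is smaller than the source, it can be grown from the
# command line before the dd, e.g. on dest_host:
lvextend --config 'global {use_lvmetad=0}' -L +50G dest_vg/dest_lv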

Hi,
I suggest another solution for you (which wasn't tested as a whole but is much simpler in my opinion).

On the source environment:
- Create a local DC with a single local storage domain (let's call it sd1).
- Deactivate and detach the FC domain from the existing shared DC and attach it to the local DC (from oVirt 4.1, attaching a shared storage domain to a local data center is possible).
- Register the FC domain VMs (from the 'VM import' sub-tab under the FC domain).
- Move the disks from the FC domain to the local domain.
- Deactivate, detach and remove *without format* the local domain from the source environment.
- Put the host in the local data center into maintenance (let's call it host1).

On the destination environment:
- Add host1 to the destination environment and create a local data center with this host and a new local domain.
- Import the existing local domain (sd1) into the local data center and register its VMs.
- Deactivate and detach the iSCSI domain from the existing data center and attach it to the local data center.
- Move all the disks from the local domain to the iSCSI domain.
- Deactivate and detach the iSCSI domain from the local data center, attach and activate it in the shared data center and register its VMs.

Thanks,
Elad Ben Aharon
Senior Quality Engineer, Red Hat Israel <ebenahar@redhat.com>
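For the "register VMs" steps, the same listing the 'VM import' sub-tab shows can also be fetched via the REST API. The URL syntax below (the ;unregistered matrix parameter) is from memory, so treat it as an assumption and check the API documentation for your engine version; SD_UUID, the engine FQDN and the credentials are placeholders:

# List the VMs that could be registered from an attached storage domain
# (assumed oVirt v4 REST syntax; verify against your engine's API docs).
curl -k -u 'admin@internal:password' \
  "https://engine.example.com/ovirt-engine/api/storagedomains/SD_UUID/vms;unregistered"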

On Tue, Jul 11, 2017 at 3:33 PM, Elad Ben Aharon <ebenahar@redhat.com> wrote:
[...]
Unfortunately the source environment is located at a different site and I cannot connect a "fast" local domain to it: there are no internal disks and it is an old blade system, so no fast USB channel either. Also, the destination iSCSI domain has many VMs running and I cannot put it down. I have to transfer 2 VMs for a total of about 500 GB of storage.
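One small aside on the transfer itself: gzip is single-threaded, so for roughly 500 GB it may become the bottleneck of the pipe. If pigz (parallel gzip) happens to be installed on both hosts, the same pipeline can use it; names are placeholders as before:

# Same dd-over-ssh copy, with parallel compression/decompression via pigz.
dd if=/dev/src_vg/src_lv bs=1M \
  | pigz \
  | ssh dest_host "pigz -d | dd of=/dev/dest_vg/dest_lv bs=1M conv=fsync"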

On Tue, Jul 11, 2017 at 3:14 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
[...]
So I have done this, both for thin provisioned disks (without snapshots on them, see below) and for preallocated disks, and it seems to work: at least an OS boot disk copied over this way boots fine.

I have one further doubt. For one VM I have a disk defined as thin provisioned and 90 GB in size. Some weeks ago I created a snapshot for it. Now, before copying over the LV, I have deleted this snapshot. But I see that at the end of the process the size of the LV backing the VM disk is actually 92 GB, so I presume that my dd over the network will fail... What could I do to cover this scenario?

What would be the command at OS level if I chose "move disk" in the web admin GUI to move a disk from one SD to another?

Thanks, Gianluca
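A quick way to see what is going on (placeholders as above): as far as I know, a thin provisioned disk on a block storage domain is a qcow2 image inside the LV, so its virtual size (90 GB) and the physical LV size (now 92 GB after the snapshot and its merge) are different things, and the destination LV must be at least as large as the source LV for a raw dd copy to fit:

# Show the image format and virtual size of what is stored on the source LV.
qemu-img info /dev/src_vg/src_lv

# Compare the physical sizes of source and destination LVs in bytes.
blockdev --getsize64 /dev/src_vg/src_lv
ssh dest_host "blockdev --getsize64 /dev/dest_vg/dest_lv"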

The move command will first create the disk structure on the destination according to the snapshots, and then a 'qemu-img convert' will be performed for each snapshot.
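For reference, this is roughly the kind of per-volume command that implies; the -f/-O formats depend on the disk (raw for preallocated, qcow2 for thin provisioned), -p only prints progress, and the paths are placeholders rather than what vdsm literally runs:

qemu-img convert -p -f qcow2 -O qcow2 /dev/src_vg/src_lv /dev/dest_vg/dest_lv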

Participants (3):
- Elad Ben Aharon
- Fred Rolland
- Gianluca Cecchi