[ovirt-users] Disaster Recovery Testing

Nir Soffer nsoffer at redhat.com
Wed Feb 15 20:15:14 UTC 2017


On Wed, Feb 15, 2017 at 9:30 PM, Gary Lloyd <g.lloyd at keele.ac.uk> wrote:
> Hi Nir thanks for the guidance
>
> We started to use oVirt a good few years ago now (version 3.2).
>
> At the time iSCSI multipath wasn't supported, so we made our own
> modifications to vdsm, and this worked well with direct lun.
> We decided to go with direct lun in case things didn't work out with oVirt,
> so that we could go back to using vanilla KVM / virt-manager.
>
> At the time I don't believe that you could import iSCSI data domains that
> had already been configured into a different installation, so we replicated
> each raw VM volume using the SAN to another server room for DR purposes.
> We use Dell EqualLogic and there is a documented limitation of 1024 iSCSI
> connections and 256 volume replications. This isn't a problem at the moment,
> but the more VMs we have, the more conscious I am of us reaching those
> limits (we have around 300 VMs at the moment and a vdsm hook that closes
> off iSCSI connections when a vm is migrated / powered off).
>
> Moving to storage domains keeps the number of iSCSI connections / replicated
> volumes down, and we won't need to make custom changes to vdsm when we
> upgrade.
> We can then use the SAN to replicate the storage domains to another data
> centre and bring them online with a different install of oVirt (we will have
> to use these arrays for at least the next 3 years).
>
> I didn't realise that each storage domain contained the configuration
> details/metadata for the VMs.
> This to me is an extra win as we can recover VMs faster than we can now if
> we have to move them to a different data centre in the event of a disaster.
>
>
> Are there any maximum size / VM limits or recommendations for each storage
> domain?

The recommended limit in RHEL 6 was 350 lvs per storage domain. We believe this
limit does not apply to RHEL 7 and recent oVirt versions. We are currently
testing 1000 lvs per storage domain, but the testing is not finished yet, so I
cannot say what the recommended limit is yet.

A preallocated disk uses one lv; with a thin disk you have one lv per
snapshot.

There is no practical limit to the size of a storage domain.
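
For example, you can count how many lvs a block storage domain uses today
from any connected host - the vg name is the storage domain uuid, so replace
<sd-uuid> with your domain's uuid (just a quick check, not an official tool):

    lvs --noheadings -o lv_name <sd-uuid> | wc -l

Note that a few of these lvs are vdsm internal volumes (metadata, ids,
leases, inbox, outbox, master), not vm disks.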

> Does oVirt support moving VMs between different storage domain types, e.g.
> iSCSI to Gluster?

Sure, you can move vm disks from any storage domain to any storage domain
(except ceph).
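
For reference, here is a rough sketch of doing the move through the REST API
with curl. I am assuming the v4 API exposes the disk move action at
/disks/<disk-uuid>/move; the engine address, credentials and uuids below are
placeholders:

    # move a disk to another storage domain (all values are placeholders)
    curl -k -u 'admin@internal:password' \
        -H 'Content-Type: application/xml' \
        -d '<action><storage_domain id="<target-sd-uuid>"/></action>' \
        'https://engine.example.com/ovirt-engine/api/disks/<disk-uuid>/move'

The same move is also available from the Disks tab in the webadmin UI.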

>
>
> Many Thanks
>
> Gary Lloyd
> ________________________________________________
> I.T. Systems:Keele University
> Finance & IT Directorate
> Keele:Staffs:IC1 Building:ST5 5NB:UK
> +44 1782 733063
> ________________________________________________
>
> On 15 February 2017 at 18:56, Nir Soffer <nsoffer at redhat.com> wrote:
>>
>> On Wed, Feb 15, 2017 at 2:32 PM, Gary Lloyd <g.lloyd at keele.ac.uk> wrote:
>> > Hi
>> >
>> > We currently use direct lun for our virtual machines and I would like to
>> > move away from doing this and move onto storage domains.
>> >
>> > At the moment we are using an iSCSI SAN and we rely on replicas created
>> > on the SAN for disaster recovery.
>> >
>> > As a test I thought I would replicate an existing storage domain's volume
>> > (via the SAN) and try to mount it again as a separate storage domain
>> > (this is with oVirt 4.0.6, cluster compatibility level 3.6).
>>
>> Why do you want to replicate a storage domain and connect to it?
>>
>> > I can log into the iSCSI disk but then nothing gets listed under Storage
>> > Name / Storage ID (VG Name).
>> >
>> >
>> > Should this be possible, or will it not work due to the UUIDs being
>> > identical?
>>
>> Connecting 2 storage domains with the same UUID will not work. You can
>> use either the old or the new, but not both at the same time.
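>>
>> A block storage domain is an lvm vg whose name is the storage domain uuid,
>> and a SAN-level replica copies the lvm metadata as well, so the copy carries
>> the same vg name, uuid and tags as the original. A quick way to see this
>> from a host that is logged into both luns (just an illustration):
>>
>>     vgs -o vg_name,vg_uuid,vg_tags
>>
>> Since the domain is identified by that uuid, the replica cannot be attached
>> while the original is attached.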
>>
>> Can you explain how replicating the storage domain volume is related to
>> moving from direct luns to storage domains?
>>
>> If you want to move from direct lun to storage domain, you need to create
>> a new disk on the storage domain, and copy the direct lun data to the new
>> disk.
>>
>> We don't support this yet, but you can copy manually like this:
>>
>> 1. Find the lv of the new disk
>>
>>     lvs -o name --select "lv_tags = {IU_<new-disk-uuid>}" vg-name
>>
>> 2. Activate the lv
>>
>>     lvchange -ay vg-name/lv-name
>>
>> 3. Copy the data from the lun
>>
>>     qemu-img convert -p -f raw -O raw -t none -T none \
>>         /dev/mapper/xxxyyy /dev/vg-name/lv-name
>>
>> 4. Deactivate the lv
>>
>>     lvchange -an vg-name/lv-name
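>>
>> If you want to verify the copy, you can run something like this between
>> steps 3 and 4, while the lv is still active (this needs a qemu-img version
>> that has the compare subcommand; the paths are the same placeholders as
>> above):
>>
>>     qemu-img compare -f raw -F raw /dev/mapper/xxxyyy /dev/vg-name/lv-name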
>>
>> Nir
>
>

