[ovirt-users] Convert local storage domain to shared

Gianluca Cecchi gianluca.cecchi at gmail.com
Fri Dec 1 00:25:52 UTC 2017


On Wed, Nov 29, 2017 at 6:49 PM, Demeter Tibor <tdemeter at itsmart.hu> wrote:

> Hi,
>
> Yes, I understand what you're talking about. It isn't too safe.. :(
> We have terabytes under that VM.
> I could take a downtime of at most eight hours (maybe), but meanwhile I
> have to copy 3 TB of vdisks. First I would need to export (over a gigabit
> NIC) to the export domain, and then import back over a 10GbE NIC.
> I don't know whether this will be enough.
>
> Thanks
>
> Tibor


Hi Tibor,
I'm short of time these days, but I have to admit your problem was so
intriguing that I couldn't resist, and I decided to try to reproduce it.
Everything happened on my laptop running Fedora 26 (time to upgrade? not
enough time... ;-)

So this is the test environment, with all VMs running inside virt-manager:

1) Create 3.5.6 environment

- CentOS 6.6 VM (this was the ISO I had at hand...) with hostname
c6engine35, where I installed oVirt 3.5.6 as engine
- CentOS 6.6 VM with hostname c6rhv35 (sorry for the rhv in the name, but
these weeks I'm also working on it so it came out quite naturally...) where
I installed the hypervisor packages from the 3.5.6 repo

I created a local DC on top of a directory of the hypervisor (/ldomain).
I created a CentOS 6.6 VM in this storage domain with a 4 GB disk.

2) Detach the local domain from DC

HERE YOUR THEORETICAL DOWNTIME BEGINS

To do so I powered off the test VM and created a further, fake local domain
based on another directory of c6rhv35.
Then I put the local domain to be imported into 4.1 into maintenance.
The fake local domain becomes the master.
Then I detached the local domain.


3) Create 4.1.7 environment (in your case it is already there..)
- CentOS 7.4 VM with hostname c7engine41, where I installed oVirt 4.1.7 as
engine
- CentOS 7.4 VM with hostname c7rhv41, where I installed the hypervisor
packages from the 4.1.7 repo

I created a shared DC named NFSDC with a cluster named NFSCL.
To speed things up, I exported a directory from the engine itself and used
it to create an NFS storage domain (DATANFS) for the 4.1 host, then
activated it.
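
For reference, the export on the engine side could look more or less like
this (the /nfs/datanfs path and the export options are my own assumptions,
not necessarily what you have; 36:36 is the vdsm:kvm ownership that oVirt
expects on NFS domains):

# on c7engine41
mkdir -p /nfs/datanfs
chown 36:36 /nfs/datanfs
echo '/nfs/datanfs c7rhv41.localdomain.local(rw,anonuid=36,anongid=36)' >> /etc/exports
systemctl enable nfs-server
systemctl start nfs-server
exportfs -rv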

4) Shut down the 3.5 environment and start/configure the 3.5 hypervisor to
export its previously local storage domain directory

Start c6rhv35 in single user mode and run
chkconfig service_name off

for each of these services:
ebtables ip6tables iptables libvirt-guests libvirtd momd numad sanlock
supervdsmd vdsmd wdmd
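
For reference, the same thing in one shot (same service names as above;
adapt the list if your host runs a different set of services):

for svc in ebtables ip6tables iptables libvirt-guests libvirtd momd numad \
           sanlock supervdsmd vdsmd wdmd; do
    chkconfig "$svc" off
done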

reboot
create an entry in /etc/exports

/ldomain c7rhv41.localdomain.local(rw)

service nfs start
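
At this point a quick check that the export is actually visible can save
some head scratching; something like:

# on c6rhv35
exportfs -v

# from the 4.1 host
showmount -e c6rhv35.localdomain.local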

Set up the /etc/hosts files of the servers involved accordingly, so that
they can all resolve each other...
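
For example, something like this on each host (the IP addresses below are
made up for illustration; use your own):

192.168.122.10  c6engine35.localdomain.local  c6engine35
192.168.122.11  c6rhv35.localdomain.local     c6rhv35
192.168.122.20  c7engine41.localdomain.local  c7engine41
192.168.122.21  c7rhv41.localdomain.local     c7rhv41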

5) Import the domain in 4.1
Select Storage -> Import Domain and enter

c6rhv35.localdomain.local:/ldomain

You will get a warning about it being already part of another DC:
https://drive.google.com/file/d/1HjFZhW6fCkasPak0jQH5k49Bdsg1NLSN/view?usp=sharing

Approve the operation and you arrive here:
https://drive.google.com/file/d/10d1ea0TbPCZhoaAf7br5IVqnvZx0LzSu/view?usp=sharing

Activate the domain and you arrive here:
https://drive.google.com/file/d/1-4sMfVVj5WyaglPI8zhWUsdJqkVkxzAT/view?usp=sharing

Now you can proceed with importing your VMs; in my case only the test VM.
Select the imported storage domain and then the "VM Import" tab; select the
VM and click "Import":

https://drive.google.com/file/d/18yjPvoHjTw6mOhUrlHJ2RpsdPph4qBxL/view?usp=sharing

https://drive.google.com/file/d/1CrCzVUYC3vI4aQ2ly83b3uAQ3QQh1xhm/view?usp=sharing

Note that this is an immediate operation, not dependent on the size of the
disks of the VM itself.
At the end your VM is imported; details here:

https://drive.google.com/file/d/1W00TpIKAQ7cWUit_tLIQkm30wj5j56AN/view?usp=sharing
https://drive.google.com/file/d/1qq7sZV2vwapRRdjbi21Z43OOBM0m2NuY/view?usp=sharing

While you import, you can gradually start your VMs, so that your downtime
becomes partial rather than total:
https://drive.google.com/file/d/1kwrzSJBXISC0wBTtZIpdh3yJdA0g3G0A/view?usp=sharing

When you have started all your imported VMs, YOUR THEORETICAL DOWNTIME ENDS

Your VMs are now running on your old local storage, exported from your old
3.5 host to your new 4.1 hosts via NFS.

You can now execute live storage migration of your disks one by one to the
desired 4.1 storage domain:
https://drive.google.com/file/d/1p6OOgDBbOFFGgy3uuWT-V8VnCMaxk4iP/view?usp=sharing

and at the end of the move
https://drive.google.com/file/d/1dUuKQQxI0r4Bhz-N0TmRcsnjwQl1bwU6/view?usp=sharing
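
If you have many disks to move, the same operation should also be
scriptable through the REST API; a rough sketch, with placeholder UUIDs,
password and certificate handling (-k), which I have not tested in this
scenario:

curl -k -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' \
     -d '<action><storage_domain id="TARGET_SD_UUID"/></action>' \
     'https://c7engine41.localdomain.local/ovirt-engine/api/disks/DISK_UUID/move'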

Obviously there are many caveats in a real environment such as:

- the actual oVirt source and target versions could differ from mine, and
the behavior could be different
- network visibility between the two oVirt environments
- layout of the logical networks of the two oVirt environments: when you
import you may need to change the logical network and may have conflicting
MACs; in my test scenario everything was on ovirtmgmt with the same MAC
range
- live storage migration of terabytes of disks... not tested yet (by me at
least)...
- other things that don't come to mind right now...

HIH,
Gianluca

