[ovirt-users] Convert local storage domain to shared

Demeter Tibor tdemeter at itsmart.hu
Wed Nov 29 17:49:26 UTC 2017


Hi,

Yes, I understand what you're talking about. It isn't very safe... :(
We have terabytes under that VM.
I could arrange a downtime of at most eight hours (maybe), but within that window I would have to copy 3 TB of vdisks. First I would need to export to an export domain (over a gigabit NIC), and then import it back over a 10 GbE NIC.
I don't know whether that will be enough time.
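
Just to sanity-check the timing for myself, here is a rough back-of-envelope estimate in Python. The ~70% effective-throughput figure and the hours() helper are only my assumptions, not measured values:

# Rough transfer-time estimate for moving 3 TB of vdisks:
# export over a 1 Gbit/s NIC, then import back over a 10 GbE NIC.
# The 0.7 efficiency factor is an assumption, not a measurement.

DATA_BITS = 3 * 1e12 * 8                  # 3 TB expressed in bits

def hours(link_gbps, efficiency=0.7):
    """Hours needed to push DATA_BITS over the given link."""
    return DATA_BITS / (link_gbps * 1e9 * efficiency) / 3600

export_h = hours(1)     # old DC -> export domain, 1 Gbit/s
import_h = hours(10)    # export domain -> new DC, 10 Gbit/s
print(f"export ~{export_h:.1f} h, import ~{import_h:.1f} h, total ~{export_h + import_h:.1f} h")

If the effective throughput really is around 70%, the gigabit export leg alone comes out to roughly nine and a half hours, which already does not fit in an eight-hour window; the 10 GbE import adds about one more hour on top of that.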

Thanks

Tibor
----- On Nov 29, 2017, at 18:26, Christopher Cox ccox at endlessnow.com wrote:

> On 11/29/2017 09:39 AM, Demeter Tibor wrote:
>>
>> Dear Users,
>>
>> We have an old oVirt 3.5 install with a local and a shared cluster. Meanwhile we
>> have created a new data center, based on 4.1, that uses only shared
>> infrastructure.
>> I would like to migrate a big VM from the old local data center to the new one, but
>> I don't have enough downtime.
>>
>> Is it possible to convert the old local storage to shared (by sharing it via NFS) and
>> attach it as a new storage domain to the new cluster?
>> I just want to import the VM and copy it (while running) with the live storage
>> migration function.
>>
>> I know the official way to move VMs between oVirt clusters is the export
>> domain, but this VM has very big disks.
>>
>> What can I do?
> 
> Just my opinion, but if you don't figure out a way to have occasional downtime,
> you'll probably pay the price with unplanned downtime eventually (and it could
> be painful).
> 
> Define "large disks"?  Terabytes?
> 
> I know for a fact that if you don't have good network segmentation that live
> migrations of large disks can be very problematic.  And I'm not talking about
> what you're wanting to do.  I'm just talking about storage migration.
> 
> We successfully migrated hundreds of VMs from a 3.4 to a 3.6 (on new blades and
> storage) last year over time using the NFS export domain method.
> 
> If storage is the same across DCs, you might be able to shortcut this with
> minimal downtime, but I'm pretty sure there will be some downtime.
> 
> I've seen large storage migrations render entire nodes offline (not nice) due to
> non-isolated network paths or QoS issues.
> 
> 
> 
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

