On 29 Nov 2017 18:49, Demeter Tibor <tdemeter@itsmart.hu> wrote:
> Hi,
>
> Yes, I understand what you're talking about. It isn't too safe... :(
> We have terabytes under that VM.
> I could take a downtime of at most eight hours (maybe), but within that window I have to
> copy 3 TB of vdisks: first export (over a gigabit NIC) to the export domain, then import
> back over a 10 GbE NIC.
> I don't know whether that will be enough.

Well, just counting the numbers, let's start with the optimistic approach and say that you can move 100 MB/s for 8 hours:

100*60*60*8 = 2880000 MB

And then just divide that by 1024*1024 to get to tera:

2880000/(1024^2) = 2.74658203125 TB

So roughly 2.7 TB in 8 hours, and that's very optimistic! If you're more pessimistic, adjust the number of MB you think (or better yet, have tested) that you'll be able to send per second to get a more accurate answer. (There is a rough sketch of this calculation at the very bottom of this mail.)

The question is how much you can do without any downtime. I don't know myself, but the devs should:

@devs
Is it possible to do live exports? I mean to keep exporting and just sync the delta? If not, that would be an awesome RFE, since it would drastically reduce the downtime for these kinds of operations.

/K

> Thanks
>
> Tibor
>
> ----- On 29 Nov 2017, 18:26, Christopher Cox <ccox@endlessnow.com> wrote:
>
> > On 11/29/2017 09:39 AM, Demeter Tibor wrote:
> >>
> >> Dear Users,
> >>
> >> We have an old oVirt 3.5 install with a local and a shared cluster. Meanwhile we
> >> created a new data center that is based on 4.1 and uses only shared
> >> infrastructure.
> >> I would like to migrate a big VM from the old local datacenter to the new one, but
> >> I don't have enough downtime.
> >>
> >> Is it possible to convert the old local storage to shared (by sharing it via NFS) and
> >> attach that as a new storage domain to the new cluster?
> >> I just want to import the VM and copy it (while running) with the live storage
> >> migration function.
> >>
> >> I know the official way to move VMs between oVirt clusters is the export
> >> domain, but this VM has very big disks.
> >>
> >> What can I do?
> >
> > Just my opinion, but if you don't figure out a way to have occasional downtime,
> > you'll probably pay the price with unplanned downtime eventually (and it could
> > be painful).
> >
> > Define "large disks"? Terabytes?
> >
> > I know for a fact that if you don't have good network segmentation, live
> > migrations of large disks can be very problematic. And I'm not talking about
> > what you're wanting to do. I'm just talking about storage migration.
> >
> > We successfully migrated hundreds of VMs from a 3.4 to a 3.6 (on new blades and
> > storage) last year over time using the NFS export domain method.
> >
> > If storage is the same across DCs, you might be able to shortcut this with
> > minimal downtime, but I'm pretty sure there will be some downtime.
> >
> > I've seen large storage migrations render entire nodes offline (not nice) due to
> > non-isolated paths or QoS.
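P.S. Here is a rough sketch of the back-of-the-envelope math above, in case you want to plug in your own numbers. It's purely illustrative and not oVirt-specific; the 3 TB figure and the rates are just the assumptions from this thread, the pessimistic 60 MB/s is my own guess at a real-world gigabit rate, and "TB" means the binary 1024^2 MB used in the calculation above.

#!/usr/bin/env python3
# Back-of-the-envelope transfer estimates; nothing oVirt-specific here.

def tb_movable(mb_per_s: float, hours: float) -> float:
    """How many (binary) TB you can copy at a sustained rate within a window."""
    return mb_per_s * 3600 * hours / (1024 ** 2)

def hours_needed(tb: float, mb_per_s: float) -> float:
    """How long copying a given amount of data takes at a sustained rate."""
    return tb * (1024 ** 2) / mb_per_s / 3600

print(f"100 MB/s for 8 h -> {tb_movable(100, 8):.2f} TB")   # ~2.75 TB, the optimistic case above
print(f"3 TB at 100 MB/s -> {hours_needed(3, 100):.1f} h")  # ~8.7 h, already over an 8 h window
print(f"3 TB at  60 MB/s -> {hours_needed(3, 60):.1f} h")   # ~14.6 h with a pessimistic gigabit rate (assumed)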