On Thu, Jan 17, 2019 at 6:57 PM Arik Hadas <ahadas(a)redhat.com> wrote:
On Thu, Jan 17, 2019 at 7:54 PM Arik Hadas <ahadas(a)redhat.com> wrote:
>
>
> On Thu, Jan 17, 2019 at 6:54 PM Gianluca Cecchi <
> gianluca.cecchi(a)gmail.com> wrote:
>
>> On Thu, Jan 17, 2019 at 5:42 PM Gianluca Cecchi <
>> gianluca.cecchi(a)gmail.com> wrote:
>>
>>> On Thu, Jan 17, 2019 at 4:47 PM Gianluca Cecchi <
>>> gianluca.cecchi(a)gmail.com> wrote:
>>>
>>>> On Thu, Jan 17, 2019 at 4:24 PM Arik Hadas <ahadas(a)redhat.com>
wrote:
>>>>
>>>>>
>>>>>
>>>>> On Thu, Jan 17, 2019 at 4:53 PM Gianluca Cecchi <
>>>>> gianluca.cecchi(a)gmail.com> wrote:
>>>>>
>>>>>> Hello,
>>>>>> I have two different oVirt 4.2 environments and I want to migrate
>>>>>> some big VMs from one to another.
>>>>>> I'm not able to detach and attach the block-based domain where the
>>>>>> source VMs' disks are.
>>>>>> And I cannot use the export domain functionality.
>>>>>>
>>>>>
>>>>> you can export them to ova on some device that can later be mounted
>>>>> to the destination environment.
>>>>> this is similar to the export domain functionality - but you didn't
>>>>> specify why the export domain functionality is not applicable for you.
>>>>>
>>>>
>>>>
I tried, but I got an error.
The VM from which I try to create the OVA is composed of 3 disks: 15 + 60 +
440 GB.
This is the sequence of events seen in the engine:
Starting to export Vm dbatest5 as a Virtual Appliance 1/17/19 5:33:35 PM
VDSM ov200 command TeardownImageVDS failed: Cannot deactivate Logical
Volume: ('General Storage Exception: ("5 [] [\' Logical volume
fa33df49-b09d-4f86-9719-ede649542c21/08abaac5-ef82-4755-adc5-7341ce1cde33
in
use.\']\\nfa33df49-b09d-4f86-9719-ede649542c21/[\'08abaac5-ef82-4755-adc5-7341ce1cde33\']",)',)
1/17/19 9:48:02 PM
Failed to export Vm dbatest5 as a Virtual Appliance to path
/export/ovirt/dbatest5.ova on Host ov200 1/17/19 9:48:03 PM
Disk dbatest5_Disk1 was successfully removed from domain ovsd3750 (User
admin@internal-authz). 1/17/19 9:48:04 PM
Disk dbatest5_Disk2 was successfully removed from domain ovsd3750 (User
admin@internal-authz). 1/17/19 9:48:05 PM
Disk dbatest5_Disk3 was successfully removed from domain ovsd3750 (User
admin@internal-authz). 1/17/19 9:48:05 PM
And this left behind this file:
[root@ov200 ~]# ll /export/ovirt/dbatest5.ova.tmp
-rw-r--r--. 1 root root 552574404608 Jan 17 22:47
/export/ovirt/dbatest5.ova.tmp
[root@ov200 ~]#
The ".tmp" extension worried me about possibly not completed ova... is this
the case... anyway I tried then to import it, see below
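For what it's worth, since the exported OVA should basically be a tar
archive, I suppose a quick way to check whether the .tmp file is complete
is to list its members and see if the OVF descriptor and all three disks
are there (just a sketch, I haven't run it yet):

# an oVirt OVA is a tar file: list its contents;
# a complete one should show the .ovf descriptor plus one entry per disk
tar tvf /export/ovirt/dbatest5.ova.tmp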
I have not understood which LV it tries to deactivate...
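If it helps the investigation, I guess I can also check on the host which
LVs of that VG are still open (the VG/LV names below are the ones from the
error above); something along these lines:

# the 6th character of lv_attr is 'o' when the LV is open (in use)
lvs --noheadings -o lv_name,lv_attr fa33df49-b09d-4f86-9719-ede649542c21
# and try to see which process keeps the device open
fuser -v /dev/fa33df49-b09d-4f86-9719-ede649542c21/08abaac5-ef82-4755-adc5-7341ce1cde33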
Ah, ok, thanks.
>>>> I think you are referring to this feature page and I see in my 4.2.7
>>>> env I can do it for a powered off VM:
>>>>
>>>>
https://ovirt.org/develop/release-management/features/virt/enhance-import...
>>>>
>>>
> Right
>
On the destination host I get this, but I don't know if it depends on the OVA
not being fully complete; from the "Broken pipe" error I suspect so:
./upload_ova_as_vm.py /export/ovirt/dbatest5.ova.tmp RHVDBA rhvsd3720
Uploaded 69.46%
Uploaded 69.70%
Uploaded 69.95%
Uploaded 70.21%
Uploaded 70.45%
Uploaded 70.71%
Uploaded 70.72%
Traceback (most recent call last):
  File "./upload_ova_as_vm.py", line 227, in <module>
    proxy_connection.send(chunk)
  File "/usr/lib64/python2.7/httplib.py", line 857, in send
    self.sock.sendall(data)
  File "/usr/lib64/python2.7/ssl.py", line 744, in sendall
    v = self.send(data[count:])
  File "/usr/lib64/python2.7/ssl.py", line 710, in send
    v = self._sslobj.write(data)
socket.error: [Errno 32] Broken pipe
[root@rhvh200 ~]#
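Is there anything I should look at on the imageio side? I was thinking of
checking the proxy on the engine and the daemon on the host, something like
this (just my guess, not sure it's the right place to look):

# on the engine
systemctl status ovirt-imageio-proxy
journalctl -u ovirt-imageio-proxy --since "1 hour ago"
# on the host handling the transfer
systemctl status ovirt-imageio-daemon
journalctl -u ovirt-imageio-daemon --since "1 hour ago"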
>
>>
>>>> I will try
>>>> Are the two described TBD features:
>>>> - export as ova also a running VM
>>>> - stream to export to ovirt-imageio daemon
>>>> supposed to be included in 4.3, or is there already a planned target
>>>> release for them?
>>>>
>>>
> The first one is included in 4.3 already (in general, the ova handling is
> better in 4.3 compared to 4.2 in terms of speed).
>
I meant to say: in general, the ova handling is better in 4.3 compared to
4.2.
I have verified on a 4.3rc2 env that I can indeed execute "export as ova"
for a running VM too.
I have a CentOS Atomic 7 VM, and when you export it as OVA, a snapshot is
taken and the OVA file seems to be generated directly:
[root@hcinode1 vdsm]# ll /export/
total 1141632
-rw-------. 1 root root 1401305088 Jan 18 11:10 c7atomic1.ova.tmp
[root@hcinode1 vdsm]# ll /export/
total 1356700
-rw-------. 1 root root 1401305088 Jan 18 11:10 c7atomic1.ova
[root@hcinode1 vdsm]#
And at the end the snapshot has been correctly removed.
Vm c7atomic1 was exported successfully as a Virtual Appliance to path
/export/c7atomic1.ova on Host hcinode1 1/18/19 11:10:23 AM
Starting to export Vm c7atomic1 as a Virtual Appliance 1/18/19 11:08:47 AM
> The second is unlikely to happen as we found there's no real need for it.
>
OK.
>
>>
>>>> Gianluca
>>>>
>>>
>>> BTW: would it be possible to export as ova on a path on host that is an
>>> NFS share, bypassing export domain setup, or does it require local storage?
>>>
>>
> Yes, but note that you need to mount the NFS share in a way that the root
> user on host can write to the specified path.
>
OK, I verified that without no_root_squash I was not able to start the
export phase.
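For reference, on the NFS server I ended up with an export line more or
less like this (hostname and path here are just placeholders for my real
ones):

# /etc/exports on the NFS server
/export/ovirt   ov200.example.com(rw,sync,no_root_squash)
# re-export after editing
exportfs -r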
>
>>
>>> Gianluca
>>>
>>
>>
>> Strange: I tried from the GUI with a VM with 3 disks, specifying a
>> directory, but it seems that it started 3 processes of type "qemu-img
>> convert" whose destination is on the storage domain itself???
>> Does it need this step before writing to the destination I chose? I hope
>> not...
>>
>
> Unfortunately, it is indeed done that way in 4.2 :/
> Let me explain:
> We wanted disks within the ova to be of type qcow (for several reasons,
> like thin-provisioning, compression).
> In 4.2 we create temporary disks on the storage that are then copied into
> the OVA (and then removed).
>
That way, I think you should do at least some kind of space estimation before
starting, otherwise you can compromise the primary storage of the
infrastructure...
In my case I have to export about 500 GB of disks and I risked filling up
the storage domain ;-(
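A rough check I could have done on the host before starting, to compare the
free space of the storage domain's VG with the size of the disks to export
(the VG name is the storage domain uuid from the error above), is something
like:

# free space in the storage domain VG
vgs -o vg_name,vg_size,vg_free fa33df49-b09d-4f86-9719-ede649542c21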
> In 4.3 qemu-img converts the images directly into the place they should be
> within the OVA. That's the reason for the export process being faster in
> 4.3.
>
I verified this in 4.3 RC2.
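During the export, something like this on the host shows, if I read it
correctly, qemu-img writing straight into the .ova.tmp path instead of into
a new image on the storage domain:

# run on the host while the export is in progress
ps -ef | grep 'qemu-img convert'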
>
>>
>> vdsm 14380 14269 2 17:33 ? 00:00:14 /usr/bin/qemu-img
>> convert -p -t none -T none -f raw
>>
>> /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/8cf79957-1b89-42c6-a7af-8031b22f3771/8846e584-190d-403e-a06a-792cebe3b1b1
>> -O qcow2 -o compat=1.1
>>
>> /rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/8ab058e1-19fd-440a-aa91-c137a79a870e/335c5fdb-025b-4e3e-82a8-06ec9d808d60
>>
>>
>> Gianluca
>>
>>
>>
Wouldn't it be simpler to do it the vSphere way, where IIRC you cannot export
as ova (but you can import from ova), while you can export to ovf? That way
you could directly export the disks and the VM definition to the target
directory.
Anyway, if 4.3 solves this, it could be OK.
Do you have any hint about the errors on 4.2, or is there any information I
could provide?
Gianluca