Is it possible to export a VM bigger than 5 TB?
by miguel.garcia@toshibagcs.com
We have a VM with many virtual drives and need to back it up as an OVA file. Since this demands a lot of space, I mounted an NFS directory on the host, but I get the following message after trying to export the OVA:
Error while executing action: Cannot export VM. Invalid target folder: /mnt/shared2 on Host. You may refer to the engine.log file for further details.
Looking at the engine.log file, I got this message:
2020-07-03 11:42:37,268-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1929) [e357a397-3dc3-4566-900f-6e0e0cb39030] Lock Acquired to object 'EngineLock:{exclusiveLocks='[0b65c67d-98ae-435a-9c0a-2d9f0856a98b=VM]', sharedLocks=''}'
2020-07-03 11:42:37,275-04 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-1929) [e357a397-3dc3-4566-900f-6e0e0cb39030] Executing Ansible command: /usr/bin/ansible-playbook --ssh-common-args=-F /var/lib/ovirt-engine/.ssh/config -v --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory2275650137225503626 --extra-vars=target_directory="/nt/shared2" --extra-vars=validate_only="True" /usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml [Logfile: /var/log/ovirt-engine/ova/ovirt-export-ova-validate-ansible-20200703114237-172.16.99.13-e357a397-3dc3-4566-900f-6e0e0cb39030.log]
2020-07-03 11:42:39,554-04 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-1929) [e357a397-3dc3-4566-900f-6e0e0cb39030] Ansible playbook command has exited with value: 2
2020-07-03 11:42:39,555-04 WARN [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1929) [e357a397-3dc3-4566-900f-6e0e0cb39030] Validation of action 'ExportVmToOva' failed for user Miguel.Garcia. Reasons: VAR__ACTION__EXPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_INVALID_OVA_DESTINATION_FOLDER,$vdsName hyp11.infra,$directory /nt/shared2
2020-07-03 11:42:39,556-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1929) [e357a397-3dc3-4566-900f-6e0e0cb39030] Lock freed to object 'EngineLock:{exclusiveLocks='[0b65c67d-98ae-435a-9c0a-2d9f0856a98b=VM]', sharedLocks=''}'
2020-07-03 11:44:53,021-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1937) [ba070d4b-5f76-4fd7-adc0-53e0f96e6635] Lock Acquired to object 'EngineLock:{exclusiveLocks='[0b65c67d-98ae-435a-9c0a-2d9f0856a98b=VM]', sharedLocks=''}'
2020-07-03 11:44:53,027-04 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-1937) [ba070d4b-5f76-4fd7-adc0-53e0f96e6635] Executing Ansible command: /usr/bin/ansible-playbook --ssh-common-args=-F /var/lib/ovirt-engine/.ssh/config -v --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory3629946116235444875 --extra-vars=target_directory="/mnt/shared2" --extra-vars=validate_only="True" /usr/share/ovirt-engine/playbooks/ovirt-ova-export.yml [Logfile: /var/log/ovirt-engine/ova/ovirt-export-ova-validate-ansible-20200703114453-172.16.99.13-ba070d4b-5f76-4fd7-adc0-53e0f96e6635.log]
2020-07-03 11:44:55,538-04 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-1937) [ba070d4b-5f76-4fd7-adc0-53e0f96e6635] Ansible playbook command has exited with value: 2
2020-07-03 11:44:55,538-04 WARN [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1937) [ba070d4b-5f76-4fd7-adc0-53e0f96e6635] Validation of action 'ExportVmToOva' failed for user Miguel.Garcia. Reasons: VAR__ACTION__EXPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_INVALID_OVA_DESTINATION_FOLDER,$vdsName hyp11.infra,$directory /mnt/shared2
2020-07-03 11:44:55,539-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (default task-1937) [ba070d4b-5f76-4fd7-adc0-53e0f96e6635] Lock freed to object 'EngineLock:{exclusiveLocks='[0b65c67d-98ae-435a-9c0a-2d9f0856a98b=VM]', sharedLocks=''}'
2020-07-03 11:45:17,391-04 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-11) [6d510f0c] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f762fb4b-57c1-40d0-bf3f-be4c83f16f44=PROVIDER]', sharedLocks=''}'
I also tried adding the NFS partition as a data-type Storage Domain, but that didn't help either.
The mount permissions are as follows:
drwxr-xr-x. 3 vdsm kvm 50 Jul 3 11:47 /mnt/shared2
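A quick way to double-check that the vdsm user can actually write to the mount, since the export itself runs as vdsm on the host (a sketch; the exact checks the validation playbook performs are an assumption on my part):

    sudo -u vdsm touch /mnt/shared2/.ova_write_test && echo "vdsm can write"
    sudo -u vdsm rm -f /mnt/shared2/.ova_write_test
    # Besides directory permissions, the NFS export options matter too, e.g.
    # anonuid=36,anongid=36 or no_root_squash; these option names are the
    # usual oVirt conventions, not taken from this setup.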
Any idea how I can export this VM?
OVA export creates empty and unusable images
by thomas@hoberg.net
I've tested this now in three distinct farms running CentOS 7.8 and the latest oVirt 4.3 release: OVA export files only contain an XML header and then lots of zeros where the disk images should be.
Where 'ls -l <vmname>.ova' shows a file about the size of the disk, 'du -h <vmname>.ova' shows mere kilobytes, and 'strings <vmname>.ova' dumps the XML and then nothing but repeating zeros until the end of the file.
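Condensed into commands, the symptom looks like this (file names are placeholders):

    ls -l myvm.ova          # apparent size: roughly the size of the exported disk
    du -h myvm.ova          # allocated size: only a few KB, the payload is missing
    tar tvf myvm.ova        # an OVA is a tar archive: lists the .ovf and disk entries
    strings myvm.ova | head # only the OVF XML appears, then nothing useful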
Exporting existing VMs from a CentOS/RHEL 7 farm and importing them after a rebuild on CentOS/RHEL 8 would seem like a safe migration strategy, except when OVA export isn't working.
Please treat with priority!
oVirt Hyperconverged question
by Benedetto Vassallo
Hi all,
I am planning to build a 3-node hyperconverged system with oVirt, but I have a question.
After the system is up and running with 3 nodes (compute and storage), if I need some extra compute power, can I add additional "compute"-only nodes (with no storage) as glusterfs clients, to enhance the total compute power of the cluster while using the existing storage?
Thank you and Best Regards.
--
Benedetto Vassallo
Responsabile U.O. Sviluppo e manutenzione dei sistemi
Sistema Informativo di Ateneo
Università degli studi di Palermo
Phone: +3909123860056
Fax: +3909123860880
Thin Provisioned to Preallocated
by Jorge Visentini
Hi oVirt land.
Can I convert disks from Thin Provisioned to Preallocated?
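(For context: outside the engine I would expect an offline conversion along these lines, but I am asking for the supported way to do it from oVirt; a sketch with placeholder file names, VM shut down first:)

    # Convert a thin (sparse qcow2) image into a fully allocated raw image.
    qemu-img convert -p -f qcow2 -O raw -o preallocation=full thin_disk.qcow2 prealloc_disk.raw
    qemu-img info prealloc_disk.raw    # verify format, virtual size and disk size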
Best Regards.
--
Att,
Jorge Visentini
+55 55 98432-9868
Do distinct export domains share a name space? Can't export a VM, because it already exists in an unattached export domain...
by thomas@hoberg.net
After OVA export/import was a) recommended against and b) not working with the current 4.3 on CentOS 7, I am trying to make sure I keep working copies of critical VMs before I test whether the OVA export now works properly, with the Red Hat fix from 4.4 applied to 4.3.
Long story short, I have an export domain "export", primarily attached to a 3 node HCI gluster-cluster and another domain "exportMono", primarily attached to a single node HCI gluster-cluster.
Yes I use an export domain for backup, because, ...well there is no easy and working alternative out of the box, or did I overlook something?
But of course, I also use an export domain for shipping between farms, so evidently I swap export domains like good old PDP-11 disk cartridges.... or at least I'd like to.
I started by exporting VM "tdc" from the 1nHCI to exportMono, then reattached that to 3nHCI for an import. The import worked fine and the transfer succeeded. So I detached exportMono, which belongs to the 1nHCI cluster.
Next I do the OVA export on 1nHCI, but I need to get the working and reconfigured VM "tdc" out of the way on 3nHCI, so I dump it into the "export" export domain belonging to 3nHCI, because I understand I can't run two copies of the same VM on a single cluster.
Turns out I can't export it: even though the export domain is now a different one and definitely doesn't contain "tdc" at all, oVirt complains that the volume ID belonging to "tdc" already exists in the export domain...
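For what it's worth, this is how I went looking for the allegedly duplicate image on the export domain's NFS share (a sketch; paths are what I see on my hosts, not documented behaviour):

    # An export domain is a plain NFS tree: <mount>/<sd-uuid>/images/<image-uuid>/
    mount | grep -i export                               # where is the domain mounted?
    ls /rhev/data-center/mnt/*/*/images/                 # one directory per disk image
    grep -rl tdc /rhev/data-center/mnt/*/*/master/vms/   # OVF files naming the VM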
So what's the theory here behind export domains? And what's the state of their support in oVirt 4.4?
I understand that distinct farms can't share an export domain, because they have no way of coordinating properly. Of course I tried to use one single NFS mount for both farms, but the second farm properly detected the presence of another and required a distinct path.
But from the evidence before me, oVirt doesn't support or like the existence of more than one export domain either: something that deserves a note or explanation.
I understand they are deprecated in 4.3 already, but since they are also the only way of moving valuable VM images around that currently works from the GUI on oVirt 4.3, it would be nice to have these questions answered.
4.4 HCI Install Failure - Missing /etc/pki/CA/cacert.pem
by Stephen Panicho
Hi all! I'm using Cockpit to perform an HCI install, and it fails at the
hosted engine deploy. Libvirtd can't restart because of a missing
/etc/pki/CA/cacert.pem file.
The log (tasks seemingly from
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/initial_clean.yml):
[ INFO ] TASK [ovirt.hosted_engine_setup : Stop libvirt service]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Drop vdsm config statements]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial abrt config files]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restart abrtd service]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Drop libvirt sasl2 configuration by vdsm]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Stop and disable services]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial libvirt default network configuration]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Start libvirt]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable to start service libvirtd: Job for libvirtd.service failed because the control process exited with error code.\nSee \"systemctl status libvirtd.service\" and \"journalctl -xe\" for details.\n"}
journalctl -u libvirtd:
May 22 04:33:25 node1 libvirtd[26392]: libvirt version: 5.6.0, package: 10.el8 (CBS <cbs(a)centos.org>, 2020-02-27-01:09:46, )
May 22 04:33:25 node1 libvirtd[26392]: hostname: node1
May 22 04:33:25 node1 libvirtd[26392]: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory
May 22 04:33:25 node1 systemd[1]: libvirtd.service: Main process exited, code=exited, status=6/NOTCONFIGURED
May 22 04:33:25 node1 systemd[1]: libvirtd.service: Failed with result 'exit-code'.
May 22 04:33:25 node1 systemd[1]: Failed to start Virtualization daemon.
From a fresh CentOS 8.1 minimal install, I've installed the following:
- The 4.4 repo
- cockpit
- ovirt-cockpit-dashboard
- vdsm-gluster (providing glusterfs-server and allowing the Gluster Wizard to complete)
- gluster-ansible-roles (only on the bootstrap host)
I'm not exactly sure what that initial bit of the playbook does. Comparing the bootstrap node with another that has yet to be touched, /etc/libvirt/libvirtd.conf and /etc/sysconfig/libvirtd are identical on both hosts. Yet the bootstrap host can no longer start libvirtd while the other host can. Neither host has the /etc/pki/CA/cacert.pem file.
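In case it narrows things down, this is how I compared the TLS configuration the daemon actually sees; the idea that libvirt falls back to its compiled-in default CA path /etc/pki/CA/cacert.pem when no ca_file is set is my assumption, based on the journal above:

    # Which CA file is libvirtd configured to load, if any?
    grep -E '^[[:space:]]*ca_file' /etc/libvirt/libvirtd.conf || echo "ca_file unset, default applies"
    # Is TLS even enabled? With listen_tls = 0 the CA should not be needed.
    grep -E '^[[:space:]]*listen_tls' /etc/libvirt/libvirtd.conf
    # vdsm normally points libvirt at its own CA under /etc/pki/vdsm/ (assumption):
    ls -l /etc/pki/vdsm/certs/ 2>/dev/null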
Please let me know if I can provide any more information. Thanks!
Why OVA imports failed (the other reason...)
by thomas@hoberg.net
Empty disks from exports that went wrong didn't help. But that's fixed now, even if I can't fully validate the OVA exports on VMware and VirtualBox.
The export/import target for the *.ova files is an SSD-hosted xfs file system on a pure-compute Xeon D oVirt node, exported and automounted to the 3nHCI cluster, which is also all SSD but on J5005 Atoms.
When importing the OVA file, I chose the Xeon D as the import node and the *local path* on that host for the import. Cockpit checks the OVA file, detects the machine inside, lets me select and choose it for import, potentially overriding some parameters, lets me choose the target storage volume, sets up the job... and then fails, rather silently and with very little in the way of error reporting ("connection closed" is the best I got).
Now that same process worked just fine on a single node HCI cluster (also a J5005 Atom), which had me a bit stunned at first, but it gave a hint as to the cause: part of the import job, most likely a qemu-img job, is not run on the machine you selected in the first step, and unless the path is global (e.g. external NFS), it fails.
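If that theory holds, the obvious workaround is to make the import path global, e.g. by exporting it over NFS so every host resolves it (a sketch; the path is a placeholder, and the 36:36 ownership is the usual oVirt/vdsm convention, not something I verified is required here):

    # On the host holding the *.ova files:
    chown -R 36:36 /exports/ova            # vdsm:kvm, the ids oVirt expects on storage
    echo '/exports/ova *(rw,sync,no_subtree_check,anonuid=36,anongid=36)' >> /etc/exports
    exportfs -r                            # then point the import at host:/exports/ova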
If someone from the oVirt team could check and validate or disprove this theory, that could be documented and/or added as a check to avoid people falling into the same trap.
While I was testing this using a global automount path, my cluster failed me (creating and deleting VMs a bit too quickly?) and I had to struggle for a while to have it recover.
While those transient failures are truly frightening, oVirt's ability to recover from these scenarios is quite simply awesome.
I guess it's mostly miscommunication rather than real failures, and oVirt has lots of logic to rectify that.
Has anyone ever had success importing an OVA VM exported from oVirt to VirtualBox or VMware? On Windows?
by thomas@hoberg.net
After applying the OVA export patch, which ensured disk content was actually written into the OVA image, I have been able to transfer *.ova VMs between two oVirt clusters. There are still problems that I will report once I have fully tested what's going on, but in the meantime, for all those who want to be able to move VMs not only in the direction of oVirt (which seems well tested) but also the other way (e.g. for local desktop use, or even a vSphere farm), an OVA export could be a life saver, if it worked.
I have tried importing oVirt exported VMs (which correctly imported and ran on a second oVirt host) both on VMware workstation 15.5.6 and VirtualBox 6.1.12 running on a Windows 2019 host without success.
I've also tried untarring the *.ova into its component *.ovf + image file, but the issue seems to be with the XML description, which makes both imports fail pretty much instantly, before they even read the disk image file.
From what I can tell, the XML created by qemu-img looks just fine, the error messages emitted by both products are very sparse and seem misleading.
E.g. VirtualBox complains "Empty element ResourceType under Item element, line 1.", while I see all <Item> well constructed with a ResourceType in every one of them.
VMware Workstation is completely useless in terms of log file diagnostics: it reports an invalid value for the "populatedSize" of the "Disk", but doesn't log that to ovftool.log. I am pretty sure the error lies in an inconsistent interpretation of this section, where AllocationUnits, capacity and populatedSize leave some room for misinterpretation:
<DiskSection>
<Info>List of Virtual Disks</Info>
<Disk ovf:cinder_volume_type="" ovf:disk_storage_type="IMAGE" ovf:description="Auto-generated for Export To OVA" ovf:wipe-after-delete="false" ovf:disk-description="" ovf:disk-alias="oeight_Disk1" ovf:pass-discard="true" ovf:boot="true" ovf:disk-interface="VirtIO_SCSI" ovf:volume-type="Sparse" ovf:volume-format="RAW" ovf:format="http://www.vmware.com/specifications/vmdk.html#sparse" ovf:fileRef="974ad04f-d9bb-4881-aad2-c1d5404200ef" ovf:parentRef="" ovf:populatedSize="536953094144" ovf:capacityAllocationUnits="byte * 2^30" ovf:capacity="500" ovf:diskId="02b8d976-7e42-44e7-8e24-06f974f1c3ea"/>
</DiskSection>
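Doing the arithmetic on the values above (my own calculation, not taken from either vendor's tooling): capacity = 500 * 2^30 bytes = 536,870,912,000 bytes, while populatedSize = 536,953,094,144 bytes. The populatedSize thus exceeds the declared capacity by 82,182,144 bytes (about 78 MiB), which a strict OVF parser could legitimately reject as an invalid value.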
I am trying to import the *.ova files on a Windows host, and the sparseness typically gets lost in the scp file transfer, too.
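For the transfer itself, tools that preserve holes keep the file small on the wire and on disk (a sketch; file names and hosts are placeholders, and whether the Windows-side tooling keeps the result sparse is a separate question):

    rsync --sparse myvm.ova user@target:/path/                  # recreate holes on the receiver
    tar -cSf - myvm.ova | ssh user@target 'tar -xf - -C /path'  # GNU tar -S handles sparse files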
Yes, this most likely isn't an oVirt bug; qemu-img is maintained elsewhere, and the problem looks to me to be more on the receiver side.
But have you tried it? Can you imagine it being important, too?
Re: PATCH method not allowed in imageio
by Nir Soffer
On Fri, Aug 7, 2020, 15:52 Łukasz Kołaciński <l.kolacinski(a)storware.eu> wrote:
> Hello,
> Thank you for previous answers. I don't have problems with checkpoints
> anymore.
>
> I am trying to send a PATCH request to imageio, but it seems like I don't
> have write access. In the documentation I saw that it must be in RAW format.
> I think I am missing something else.
>
> OPTIONS Request:
> {
> "features": [
> "extents"
> ],
> "max_readers": 8,
> "max_writers": 8
> }
> Allow: OPTIONS,GET
>
> PATCH Request:
> *You are not allowed to access this resource: Ticket
> 485493df-b07a-495c-8aa3-824aad45b4ab forbids write*
>
> I created transfer using java sdk:
> ImageTransfer imageTransfer = connection.getImageTransfersSvc().addForDisk().imageTransfer(
>     imageTransfer()
>         .direction(direction)

Is this a backup? You cannot write to disks during backup.

>         .disk(disk)
>         .backup(backup)
>         .inactivityTimeout(120)
>         .format(DiskFormat.RAW))
>     .send().imageTransfer();
>
Can you explain what you are trying to do?
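For reference, you can see what a ticket allows with a plain OPTIONS request against the imageio daemon (host and ticket UUID below are examples):

    curl -k -X OPTIONS https://myhost:54322/images/<ticket-uuid>
    # A ticket backing a backup (download) transfer answers "Allow: OPTIONS,GET"
    # as yours does; PATCH and PUT require a transfer created for writing
    # (direction=upload).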
> It's similar to the python examples.
>
> Best Regards
>
> Łukasz Kołaciński
>
> Junior Java Developer
>
> e-mail: l.kolacinski(a)storware.eu
>
> Storware, ul. Leszno 8/44, 01-192 Warszawa, www.storware.eu