Something broke & took down multiple VMs for ~20 minutes
by David White
As the subject suggests, something in oVirt HCI broke. I have no idea what, and it recovered on its own after about 20 minutes or so.
I believe that the issue was limited to a single host (although I don't know that for sure), as we had two VMs go completely unresponsive while a 3rd VM remained operational. For a while during the outage, I was able to log into the oVirt admin web portal, and I noticed that at least 1-2 of my 3 hosts were reporting the affected VMs as problematic inside of oVirt.
Reviewing the oVirt Events, I see that this basically started right when the ETL Service started. There were no events before that point since yesterday, but as soon as the ETL Service started, it seems like all hell broke loose.
oVirt detected "No faulty multipaths" on any of the hosts, but then very quickly started indicating that hosts, VMs, and storage targets were unavailable. See my screenshot below.
Around 30 - 35 minutes later, it appears that the Hosted Engine terminated due to a storage issue, and auto recovered on a different host. There's a 2nd screenshot beneath the first.
Everything came back up shortly before 9am, and has been stable since.
In fact, the Volume replication issues that I saw in my environment after I performed maintenance on 1 of my hosts on Friday are no longer present. It appears that the Hosted Engine sees the storage as being perfectly healthy.
How do I even begin to figure out what happened, and try to prevent it from happening again?
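For reference, a hedged starting point for correlating the timeline across the stack, assuming a standard oVirt HCI layout (log paths can vary slightly between versions, and the time window below is only an assumption based on the screenshot dates and the "before 9am" recovery):
# On the engine VM: the engine log and the DWH/ETL daemon log.
less /var/log/ovirt-engine/engine.log
less /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
# On each host: VDSM, hosted-engine HA and Gluster logs, plus the journal.
less /var/log/vdsm/vdsm.log
less /var/log/ovirt-hosted-engine-ha/agent.log
less /var/log/ovirt-hosted-engine-ha/broker.log
less /var/log/glusterfs/glusterd.log
journalctl --since "2021-04-26 08:00" --until "2021-04-26 09:30"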
[Screenshot from 2021-04-26 16-36-47.png]
[Screenshot from 2021-04-26 16-44-08.png]
pool list vm assign user
by Dominique D
Is there a way to see which user each VM in a pool is assigned to?
On the portal I can see the VMs with a "logged-in user", but for the other VMs I don't know to whom they are assigned.
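For what it's worth, one hedged way to check this outside the portal is via the REST API (the engine FQDN, credentials and VM id below are placeholders):
# List the permissions on a given pool VM; a user-level permission on the VM
# normally corresponds to the user it has been allocated to.
curl -k -s -u 'admin@internal:PASSWORD' \
  -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/vms/VM_ID/permissions'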
[ANN] Async oVirt Node release for oVirt 4.4.6
by Sandro Bonazzola
On May 10th 2021 the oVirt project released an async update of oVirt Node
(4.4.6.1)
Changes:
- Updated Advanced Virtualization packages
- Updated ovn2.11 and openvswitch2.11
- Updated ansible 2.9.21 (https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v...)
- Updated libssh (fixes CVE-2020-16135)
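As a hedged reminder (please verify against the upgrade documentation for your setup), existing oVirt Node hosts can pick up the async image either through the engine UI upgrade flow or from the node's shell, for example:
# On the node itself, assuming the standard ovirt-release44 repositories are enabled.
dnf update ovirt-node-ng-image-update
# A reboot into the new image layer is required afterwards.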
Full diff list:
--- ovirt-node-ng-image-4.4.6.manifest-rpm 2021-05-04 17:02:01.839123874
+0200
+++ ovirt-node-ng-image-4.4.6.1.manifest-rpm 2021-05-11 08:39:44.714649170
+0200
@@ -24 +24 @@
-ansible-2.9.20-2.el8.noarch
+ansible-2.9.21-2.el8.noarch
@@ -89 +89 @@
-corosynclib-3.1.0-4.el8.0.1.x86_64
+corosynclib-3.1.0-5.el8.x86_64
@@ -100 +100 @@
-cups-libs-2.2.6-38.el8.x86_64
+cups-libs-2.2.6-39.el8.x86_64
@@ -130,5 +130,5 @@
-dracut-049-135.git20210121.el8.x86_64
-dracut-config-generic-049-135.git20210121.el8.x86_64
-dracut-live-049-135.git20210121.el8.x86_64
-dracut-network-049-135.git20210121.el8.x86_64
-dracut-squash-049-135.git20210121.el8.x86_64
+dracut-049-136.git20210426.el8.x86_64
+dracut-config-generic-049-136.git20210426.el8.x86_64
+dracut-live-049-136.git20210426.el8.x86_64
+dracut-network-049-136.git20210426.el8.x86_64
+dracut-squash-049-136.git20210426.el8.x86_64
@@ -148,36 +148,36 @@
-fence-agents-all-4.2.1-67.el8.x86_64
-fence-agents-amt-ws-4.2.1-67.el8.noarch
-fence-agents-apc-4.2.1-67.el8.noarch
-fence-agents-apc-snmp-4.2.1-67.el8.noarch
-fence-agents-bladecenter-4.2.1-67.el8.noarch
-fence-agents-brocade-4.2.1-67.el8.noarch
-fence-agents-cisco-mds-4.2.1-67.el8.noarch
-fence-agents-cisco-ucs-4.2.1-67.el8.noarch
-fence-agents-common-4.2.1-67.el8.noarch
-fence-agents-compute-4.2.1-67.el8.noarch
-fence-agents-drac5-4.2.1-67.el8.noarch
-fence-agents-eaton-snmp-4.2.1-67.el8.noarch
-fence-agents-emerson-4.2.1-67.el8.noarch
-fence-agents-eps-4.2.1-67.el8.noarch
-fence-agents-heuristics-ping-4.2.1-67.el8.noarch
-fence-agents-hpblade-4.2.1-67.el8.noarch
-fence-agents-ibmblade-4.2.1-67.el8.noarch
-fence-agents-ifmib-4.2.1-67.el8.noarch
-fence-agents-ilo-moonshot-4.2.1-67.el8.noarch
-fence-agents-ilo-mp-4.2.1-67.el8.noarch
-fence-agents-ilo-ssh-4.2.1-67.el8.noarch
-fence-agents-ilo2-4.2.1-67.el8.noarch
-fence-agents-intelmodular-4.2.1-67.el8.noarch
-fence-agents-ipdu-4.2.1-67.el8.noarch
-fence-agents-ipmilan-4.2.1-67.el8.noarch
-fence-agents-kdump-4.2.1-67.el8.x86_64
-fence-agents-mpath-4.2.1-67.el8.noarch
-fence-agents-redfish-4.2.1-67.el8.x86_64
-fence-agents-rhevm-4.2.1-67.el8.noarch
-fence-agents-rsa-4.2.1-67.el8.noarch
-fence-agents-rsb-4.2.1-67.el8.noarch
-fence-agents-sbd-4.2.1-67.el8.noarch
-fence-agents-scsi-4.2.1-67.el8.noarch
-fence-agents-vmware-rest-4.2.1-67.el8.noarch
-fence-agents-vmware-soap-4.2.1-67.el8.noarch
-fence-agents-wti-4.2.1-67.el8.noarch
+fence-agents-all-4.2.1-68.el8.x86_64
+fence-agents-amt-ws-4.2.1-68.el8.noarch
+fence-agents-apc-4.2.1-68.el8.noarch
+fence-agents-apc-snmp-4.2.1-68.el8.noarch
+fence-agents-bladecenter-4.2.1-68.el8.noarch
+fence-agents-brocade-4.2.1-68.el8.noarch
+fence-agents-cisco-mds-4.2.1-68.el8.noarch
+fence-agents-cisco-ucs-4.2.1-68.el8.noarch
+fence-agents-common-4.2.1-68.el8.noarch
+fence-agents-compute-4.2.1-68.el8.noarch
+fence-agents-drac5-4.2.1-68.el8.noarch
+fence-agents-eaton-snmp-4.2.1-68.el8.noarch
+fence-agents-emerson-4.2.1-68.el8.noarch
+fence-agents-eps-4.2.1-68.el8.noarch
+fence-agents-heuristics-ping-4.2.1-68.el8.noarch
+fence-agents-hpblade-4.2.1-68.el8.noarch
+fence-agents-ibmblade-4.2.1-68.el8.noarch
+fence-agents-ifmib-4.2.1-68.el8.noarch
+fence-agents-ilo-moonshot-4.2.1-68.el8.noarch
+fence-agents-ilo-mp-4.2.1-68.el8.noarch
+fence-agents-ilo-ssh-4.2.1-68.el8.noarch
+fence-agents-ilo2-4.2.1-68.el8.noarch
+fence-agents-intelmodular-4.2.1-68.el8.noarch
+fence-agents-ipdu-4.2.1-68.el8.noarch
+fence-agents-ipmilan-4.2.1-68.el8.noarch
+fence-agents-kdump-4.2.1-68.el8.x86_64
+fence-agents-mpath-4.2.1-68.el8.noarch
+fence-agents-redfish-4.2.1-68.el8.x86_64
+fence-agents-rhevm-4.2.1-68.el8.noarch
+fence-agents-rsa-4.2.1-68.el8.noarch
+fence-agents-rsb-4.2.1-68.el8.noarch
+fence-agents-sbd-4.2.1-68.el8.noarch
+fence-agents-scsi-4.2.1-68.el8.noarch
+fence-agents-vmware-rest-4.2.1-68.el8.noarch
+fence-agents-vmware-soap-4.2.1-68.el8.noarch
+fence-agents-wti-4.2.1-68.el8.noarch
@@ -187 +187 @@
-filesystem-3.8-4.el8.x86_64
+filesystem-3.8-3.el8.x86_64
@@ -193 +193 @@
-freetype-2.9.1-5.el8.x86_64
+freetype-2.9.1-4.el8_3.1.x86_64
@@ -199 +199 @@
-fwupd-1.5.5-3.el8.x86_64
+fwupd-1.5.9-1.el8.x86_64
@@ -210 +210 @@
-glib2-2.56.4-10.el8.x86_64
+glib2-2.56.4-11.el8.x86_64
@@ -269,2 +269,2 @@
-iproute-5.9.0-4.el8.x86_64
-iproute-tc-5.9.0-4.el8.x86_64
+iproute-5.12.0-0.el8.x86_64
+iproute-tc-5.12.0-0.el8.x86_64
@@ -317,2 +317,2 @@
-krb5-libs-1.18.2-9.el8.x86_64
-krb5-workstation-1.18.2-9.el8.x86_64
+krb5-libs-1.18.2-10.el8.x86_64
+krb5-workstation-1.18.2-10.el8.x86_64
@@ -381 +381 @@
-libgcc-8.4.1-1.el8.x86_64
+libgcc-8.4.1-2.1.el8.x86_64
@@ -393 +393 @@
-libgomp-8.4.1-1.el8.x86_64
+libgomp-8.4.1-2.1.el8.x86_64
@@ -409 +409 @@
-libkadm5-1.18.2-9.el8.x86_64
+libkadm5-1.18.2-10.el8.x86_64
@@ -469,2 +469,2 @@
-libssh-0.9.4-2.el8.x86_64
-libssh-config-0.9.4-2.el8.noarch
+libssh-0.9.4-3.el8.x86_64
+libssh-config-0.9.4-3.el8.noarch
@@ -476 +476 @@
-libstdc++-8.4.1-1.el8.x86_64
+libstdc++-8.4.1-2.1.el8.x86_64
@@ -498,26 +498,26 @@
-libvirt-7.0.0-9.el8s.x86_64
-libvirt-admin-7.0.0-9.el8s.x86_64
-libvirt-bash-completion-7.0.0-9.el8s.x86_64
-libvirt-client-7.0.0-9.el8s.x86_64
-libvirt-daemon-7.0.0-9.el8s.x86_64
-libvirt-daemon-config-network-7.0.0-9.el8s.x86_64
-libvirt-daemon-config-nwfilter-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-interface-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-network-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-nodedev-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-nwfilter-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-qemu-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-secret-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-core-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-disk-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-gluster-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-iscsi-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-iscsi-direct-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-logical-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-mpath-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-rbd-7.0.0-9.el8s.x86_64
-libvirt-daemon-driver-storage-scsi-7.0.0-9.el8s.x86_64
-libvirt-daemon-kvm-7.0.0-9.el8s.x86_64
-libvirt-libs-7.0.0-9.el8s.x86_64
-libvirt-lock-sanlock-7.0.0-9.el8s.x86_64
+libvirt-7.0.0-14.el8s.x86_64
+libvirt-admin-7.0.0-14.el8s.x86_64
+libvirt-bash-completion-7.0.0-14.el8s.x86_64
+libvirt-client-7.0.0-14.el8s.x86_64
+libvirt-daemon-7.0.0-14.el8s.x86_64
+libvirt-daemon-config-network-7.0.0-14.el8s.x86_64
+libvirt-daemon-config-nwfilter-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-interface-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-network-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-nodedev-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-nwfilter-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-qemu-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-secret-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-core-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-disk-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-gluster-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-iscsi-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-iscsi-direct-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-logical-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-mpath-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-rbd-7.0.0-14.el8s.x86_64
+libvirt-daemon-driver-storage-scsi-7.0.0-14.el8s.x86_64
+libvirt-daemon-kvm-7.0.0-14.el8s.x86_64
+libvirt-libs-7.0.0-14.el8s.x86_64
+libvirt-lock-sanlock-7.0.0-14.el8s.x86_64
@@ -533 +533,2 @@
-libxcrypt-4.1.1-5.el8.x86_64
+libxcrypt-4.1.1-6.el8.x86_64
+libxkbcommon-0.9.1-1.el8.x86_64
@@ -618,2 +619,2 @@
-openscap-1.3.4-5.el8.x86_64
-openscap-scanner-1.3.4-5.el8.x86_64
+openscap-1.3.5-2.el8.x86_64
+openscap-scanner-1.3.5-2.el8.x86_64
@@ -626 +627 @@
-openvswitch2.11-2.11.0-50.el8.x86_64
+openvswitch2.11-2.11.3-87.el8s.x86_64
@@ -642 +643 @@
-ovirt-node-ng-image-update-placeholder-4.4.6-1.el8.noarch
+ovirt-node-ng-image-update-placeholder-4.4.6.1-1.el8.noarch
@@ -650,2 +651,2 @@
-ovirt-release-host-node-4.4.6-1.el8.noarch
-ovirt-release44-4.4.6-1.el8.noarch
+ovirt-release-host-node-4.4.6.1-1.el8.noarch
+ovirt-release44-4.4.6.1-1.el8.noarch
@@ -654,2 +655,2 @@
-ovn2.11-2.11.1-39.el8.x86_64
-ovn2.11-host-2.11.1-39.el8.x86_64
+ovn2.11-2.11.1-57.el8s.x86_64
+ovn2.11-host-2.11.1-57.el8s.x86_64
@@ -788 +789 @@
-python3-openvswitch2.11-2.11.0-50.el8.x86_64
+python3-openvswitch2.11-2.11.3-87.el8s.x86_64
@@ -828 +829 @@
-python3-subscription-manager-rhsm-1.28.13-2.el8.x86_64
+python3-subscription-manager-rhsm-1.28.16-1.el8.x86_64
@@ -830 +831 @@
-python3-syspurpose-1.28.13-2.el8.x86_64
+python3-syspurpose-1.28.16-1.el8.x86_64
@@ -836,12 +837,14 @@
-qemu-guest-agent-5.2.0-11.el8s.x86_64
-qemu-img-5.2.0-11.el8s.x86_64
-qemu-kvm-5.2.0-11.el8s.x86_64
-qemu-kvm-block-curl-5.2.0-11.el8s.x86_64
-qemu-kvm-block-gluster-5.2.0-11.el8s.x86_64
-qemu-kvm-block-iscsi-5.2.0-11.el8s.x86_64
-qemu-kvm-block-rbd-5.2.0-11.el8s.x86_64
-qemu-kvm-block-ssh-5.2.0-11.el8s.x86_64
-qemu-kvm-common-5.2.0-11.el8s.x86_64
-qemu-kvm-core-5.2.0-11.el8s.x86_64
-quota-4.04-13.el8.x86_64
-quota-nls-4.04-13.el8.noarch
+qemu-guest-agent-5.2.0-16.el8s.x86_64
+qemu-img-5.2.0-16.el8s.x86_64
+qemu-kvm-5.2.0-16.el8s.x86_64
+qemu-kvm-block-curl-5.2.0-16.el8s.x86_64
+qemu-kvm-block-gluster-5.2.0-16.el8s.x86_64
+qemu-kvm-block-iscsi-5.2.0-16.el8s.x86_64
+qemu-kvm-block-rbd-5.2.0-16.el8s.x86_64
+qemu-kvm-block-ssh-5.2.0-16.el8s.x86_64
+qemu-kvm-common-5.2.0-16.el8s.x86_64
+qemu-kvm-core-5.2.0-16.el8s.x86_64
+qemu-kvm-ui-opengl-5.2.0-16.el8s.x86_64
+qemu-kvm-ui-spice-5.2.0-16.el8s.x86_64
+quota-4.04-14.el8.x86_64
+quota-nls-4.04-14.el8.noarch
@@ -889 +892 @@
-sos-4.0-11.el8.noarch
+sos-4.1-1.el8.noarch
@@ -903 +906 @@
-subscription-manager-rhsm-certificates-1.28.13-2.el8.x86_64
+subscription-manager-rhsm-certificates-1.28.16-1.el8.x86_64
@@ -960,0 +964 @@
+xkeyboard-config-2.28-1.el8.noarch
@@ -963,0 +968,2 @@
+xmlsec1-1.2.25-4.el8.x86_64
+xmlsec1-openssl-1.2.25-4.el8.x86_64
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
Re: Changing the ovirtmgmt IP address
by Matthew.Stier@fujitsu.com
Found the answer:
Update /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt and reboot.
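A hedged sketch of that procedure; the JSON key names for the address fields are an assumption from memory, so verify them against your own file before editing:
# Back up the persisted VDSM network config first.
cp /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt /root/ovirtmgmt.netconf.bak
# Edit the JSON in place; the address-related keys are assumed to be
# "ipaddr", "netmask" and "gateway" (they may differ per VDSM version).
vi /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
# Reboot so VDSM restores the edited persisted configuration.
reboot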
From: Matthew.Stier(a)fujitsu.com <Matthew.Stier(a)fujitsu.com>
Sent: Monday, May 10, 2021 3:18 PM
To: users(a)ovirt.org
Subject: [ovirt-users] Changing the ovirtmgmt IP address
Version: 4.3.10
I'm attempting to change the IP address, netmask and gateway of the ovirtmgmt NIC of a host, but every time I reboot the host, the old address/netmask/gateway re-assert themselves.
Where do I need to make the changes, so they will be permanent?
I've modified /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt and route-ovirtmgmt, but they don't stick through a reboot.
Changing the ovirtmgmt IP address
by Matthew.Stier@fujitsu.com
Version: 4.3.10
I'm attempting to change the IP address, netmask and gateway of the ovirtmgmt NIC of a host, but every time I reboot the host, the old address/netmask/gateway re-assert themselves.
Where do I need to make the changes, so they will be permanent?
I've modified /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt and route-ovirtmgmt, but they don't stick through a reboot.
Changing 1 node Gluster Distributed to replica
by Ernest Clyde Chua
Good day,
Currently we have a single-node host that also runs Gluster in single-node distributed mode, and we recently decided to upgrade to a 3-node setup, also running Gluster, with a replica count of 3.
Can someone help me safely change the volume type to replicated?
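A hedged sketch of the usual approach, assuming a single-brick volume named "data" and placeholder brick paths on the two new hosts (take backups and test outside production first):
# Raise the replica count while adding one brick per new host.
gluster volume add-brick data replica 3 host2:/gluster_bricks/data/brick host3:/gluster_bricks/data/brick
# Trigger and monitor the self-heal that populates the new bricks.
gluster volume heal data full
gluster volume heal data info
gluster volume info data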
Sharding decision for oVirt
by levin@mydream.com.hk
Description of problem:
Intermittent VM pauses and qcow2 image corruption after adding new bricks.
I have suffered image corruption on oVirt 4.3 caused by the default gluster oVirt profile, along with intermittent VM pauses. The problem is similar to glusterfs issues #2246 and #2254 and to the VM pause issue reported in the oVirt users group. The gluster volume had no pending heal objects, the volume appeared to be in good shape, XFS was healthy, and there was no hardware issue. Sadly, a few VMs ended up with mysterious corruption after the new bricks were added.
Afterwards, I tried to simulate the problem with and without "cluster.lookup-optimize off" a few times, but the problem is not 100% reproducible with lookup-optimize on; only 1 of 3 attempts reproduced it. It really depends on the workload, the cache status at that moment, and the number of objects after rebalance as well.
I also tried disabling all sharding features: it ran very solidly, write performance increased by far, and there was no corruption and no VM pause while the gluster volume was under stress.
So, here is the decision question: to shard or not to shard.
IMO, even though the recommendation documents say sharding breaks large files into smaller chunks that allow healing to complete faster and let a large file spread over multiple bricks, in this case there are issues that do not appear with whole large files. I'd like to dig deeper into the reasons why sharding is recommended as the default for oVirt. From a reliability and performance perspective, sharding seems to be losing on these fronts for oVirt/KVM workloads. Would it be more appropriate to simply tell oVirt users to ensure that the underlying bricks are large enough to hold the largest disk image instead? Besides that, is there anything I have overlooked in the shard settings? After this disaster, I really have doubts about enabling sharding on the volume.
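For anyone comparing setups, a hedged way to inspect the options discussed above on an existing volume (the volume name is a placeholder); note that toggling features.shard on a volume that already holds sharded images is widely reported to be unsafe, so this only reads the current values:
gluster volume get myvol features.shard
gluster volume get myvol features.shard-block-size
gluster volume get myvol cluster.lookup-optimize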
Re: oVirt deploy new HE Host problem
by Marko Vrgotic
Hi Yedidyah and Strahil,
Just to double check whether you received the issue report and the log files.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e: m.vrgotic(a)activevideo.com<mailto:m.vrgotic@activevideo.com>
w: www.activevideo.com<http://www.activevideo.com>
From: Marko Vrgotic <M.Vrgotic(a)activevideo.com>
Date: Thursday, 6 May 2021 at 11:43
To: Yedidyah Bar David <didi(a)redhat.com>, Strahil Nikolov <hunter86_bg(a)yahoo.com>, users(a)ovirt.org <users(a)ovirt.org>
Subject: Re: oVirt deploy new HE Host problem
It might come handy, here is the complete hosted-engine.conf file
[root@ovirt-sj-03 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
fqdn=ovirt-engine.ictv.com
vm_disk_id=b019c5fa-8fb5-4bfc-8339-f5b7f590a051
sdUUID=054c43fc-1924-4106-9f80-0f2ac62b9886
console=vnc
vmid=66b6d489-ceb8-486a-951a-355e21f13627
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
iqn=
conf_image_UUID=910f445e-31c0-4441-9c82-720901f7f19b
port=
network_test=dns
vm_disk_vol_id=f1ce8ba6-2d3b-4309-bca0-e6a00ce74c75
storage=10.210.13.64:/hosted_engine
gateway=10.210.11.254
ca_subject="C=EN, L=Test, O=Test, CN=Test"
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
connectionUUID=e29cf818-5ee5-46e1-85c1-8aeefa33e95d
nfs_version=auto
bridge=ovirtmgmt
metadata_image_UUID=16b3e5ac-e70b-46e3-bf81-322954fe0b44
mnt_options=
domainType=nfs
password=
vdsm_use_ssl=true
tcp_t_port=
user=
host_id=3
metadata_volume_UUID=b6326e48-a7d2-4cba-af91-441db9f353c2
spUUID=00000000-0000-0000-0000-000000000000
conf_volume_UUID=c518f937-60fe-4fed-a54c-db11328bb507
portal=
lockspace_image_UUID=e08188be-f733-4d5c-9222-a4b4e2228955
lockspace_volume_UUID=081f81c5-b2b2-46d5-9f82-9d9041ccc108
tcp_t_address=
From: Marko Vrgotic <M.Vrgotic(a)activevideo.com>
Date: Thursday, 6 May 2021 at 11:20
To: Yedidyah Bar David <didi(a)redhat.com>, Strahil Nikolov <hunter86_bg(a)yahoo.com>, users(a)ovirt.org <users(a)ovirt.org>
Subject: oVirt deploy new HE Host problem
Hi Strahil and Yedidyah,
As agreed, short summary: Deploy new HE host fails
Pre Deploy state:
* Host1 and Host3 are current HE HA pool
* Host1 and Host3 are unaware of Host2 (check below)
* I am trying to add Host2 to HE HA pool
* Host2 is fully reinstalled – clean OS
* Host2 is added to oVirt as regular Host
* Host2 is currently in Maintenance mode (waiting for Reinstall with HE Deploy)
[root@ovirt-sj-03 ~]# hosted-engine --vm-status
--== Host ovirt-sj-01.ictv.com (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt-sj-01.ictv.com
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : d15bb877
local_conf_timestamp : 3103909
Host timestamp : 3103909
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3103909 (Thu May 6 01:42:06 2021)
host-id=1
score=3400
vm_conf_refresh_time=3103909 (Thu May 6 01:42:06 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host ovirt-sj-03.ictv.com (id: 3) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt-sj-03.ictv.com
Host ID : 3
Engine status : {"health": "good", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 15801717
local_conf_timestamp : 3106395
Host timestamp : 3106395
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3106395 (Thu May 6 01:42:13 2021)
host-id=3
score=3400
vm_conf_refresh_time=3106395 (Thu May 6 01:42:13 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
Deployment:
I have attached logs from Host2 and Engine – if anything is missing, please let me know.
Kindly awaiting your reply.
You might notice a slight time shift in the Host2 logs; after re-provisioning I did not set the correct timezone.
Importing VM fails with "No space left on device"
by j.velasco@outlook.com
Hello List,
I am facing the following issue when I try to import a VM from a KVM host into my oVirt environment (4.4.5.11-1.el8).
The import was done through the GUI using the KVM provider option.
-- Log1:
# cat /var/log/vdsm/import-57f84423-56cb-4187-86e2-f4208348e1f5-20210507T124121.log
[ 0.0] preparing for copy
[ 0.0] Copying disk 1/1 to /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/cb63ffc9-07ee-4323-9e8a-378be31ae3f7/e7e69cbc-47bf-4557-ae02-ca1c53c8423f
Traceback (most recent call last):
File "/usr/libexec/vdsm/kvm2ovirt", line 23, in <module>
kvm2ovirt.main()
File "/usr/lib/python3.6/site-packages/vdsm/kvm2ovirt.py", line 277, in main
handle_volume(con, diskno, src, dst, options)
File "/usr/lib/python3.6/site-packages/vdsm/kvm2ovirt.py", line 228, in handle_volume
download_disk(sr, estimated_size, None, dst, options.bufsize)
File "/usr/lib/python3.6/site-packages/vdsm/kvm2ovirt.py", line 169, in download_disk
op.run()
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 57, in run
res = self._run()
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 163, in _run
self._write_chunk(count)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 188, in _write_chunk
n = self._dst.write(v)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/file.py", line 88, in write
return util.uninterruptible(self._fio.write, buf)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/util.py", line 20, in uninterruptible
return func(*args)
OSError: [Errno 28] No space left on device
-- Log2:
# cat /var/log/vdsm/vdsm.log
2021-05-07 10:29:49,813-0500 DEBUG (v2v/57f84423) [root] START thread <Thread(v2v/57f84423, started daemon 140273162123008)> (func=<bound method ImportVm._run of <vdsm.v2v.ImportVm object at 0x7f946051c5c0>>, args=(), kwargs={}) (concurrent:258)
2021-05-07 10:29:49,813-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' starting import (v2v:880)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') moving from state preparing -> state preparing (task:624)
2021-05-07 10:29:49,814-0500 INFO (v2v/57f84423) [vdsm.api] START prepareImage(sdUUID='cc9fae8e-b714-44cf-9dac-3a83a15b0455', spUUID='24d9d2fa-98f9-11eb-aea7-00163e09cc71', imgUUID='226cc137-1992-4246-9484-80a1bfb5e9f7', leafUUID='847bc460-1b54-4756-8ced-4b969c399900', allowIllegal=False) from=internal, task_id=58a7bdc0-0f7e-4307-92ba-040f1a272721 (api:48)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Trying to register resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' for lock type 'shared' (resourceManager:474)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free. Now locking as 'shared' (1 active user) (resourceManager:531)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Request] (ResName='00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', ReqID='b2ebef3c-8b3d-4429-b9da-b6f3af2c9ac4') Granted request (resourceManager:221)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') _resourcesAcquired: 00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455 (shared) (task:856)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') ref 1 aborting False (task:1008)
2021-05-07 10:29:49,815-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cc9fae8e-b714-44cf-9dac-3a83a15b0455 (cwd None) (commands:153)
2021-05-07 10:29:49,900-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:29:49,902-0500 DEBUG (v2v/57f84423) [storage.LVM] lvs reloaded (lvm:759)
2021-05-07 10:29:49,904-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=2096 bs=512 if=/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/metadata count=1 (cwd None) (commands:211)
2021-05-07 10:29:49,916-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] SUCCESS: <err> = b'1+0 records in\n1+0 records out\n512 bytes copied, 0.000228452 s, 2.2 MB/s\n'; <rc> = 0 (commands:224)
2021-05-07 10:29:49,916-0500 DEBUG (v2v/57f84423) [storage.Misc] err: [b'1+0 records in', b'1+0 records out', b'512 bytes copied, 0.000228452 s, 2.2 MB/s'], size: 512 (misc:114)
2021-05-07 10:29:49,917-0500 INFO (v2v/57f84423) [storage.LVM] Activating lvs: vg=cc9fae8e-b714-44cf-9dac-3a83a15b0455 lvs=['847bc460-1b54-4756-8ced-4b969c399900'] (lvm:1738)
2021-05-07 10:29:49,917-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvchange --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --autobackup n --available y cc9fae8e-b714-44cf-9dac-3a83a15b0455/847bc460-1b54-4756-8ced-4b969c399900 (cwd None) (commands:153)
2021-05-07 10:29:50,034-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:29:50,035-0500 INFO (v2v/57f84423) [storage.StorageDomain] Creating image run directory '/run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7' (blockSD:1362)
2021-05-07 10:29:50,035-0500 INFO (v2v/57f84423) [storage.fileUtils] Creating directory: /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7 mode: None (fileUtils:201)
2021-05-07 10:29:50,036-0500 INFO (v2v/57f84423) [storage.StorageDomain] Creating symlink from /dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/847bc460-1b54-4756-8ced-4b969c399900 to /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900 (blockSD:1367)
2021-05-07 10:29:50,037-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cc9fae8e-b714-44cf-9dac-3a83a15b0455 (cwd None) (commands:153)
2021-05-07 10:29:50,119-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:29:50,121-0500 DEBUG (v2v/57f84423) [storage.LVM] lvs reloaded (lvm:759)
2021-05-07 10:29:50,122-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=2096 bs=512 if=/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/metadata count=1 (cwd None) (commands:211)
2021-05-07 10:29:50,135-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] SUCCESS: <err> = b'1+0 records in\n1+0 records out\n512 bytes copied, 0.000212608 s, 2.4 MB/s\n'; <rc> = 0 (commands:224)
2021-05-07 10:29:50,135-0500 DEBUG (v2v/57f84423) [storage.Misc] err: [b'1+0 records in', b'1+0 records out', b'512 bytes copied, 0.000212608 s, 2.4 MB/s'], size: 512 (misc:114)
2021-05-07 10:29:50,136-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=2096 bs=512 if=/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/metadata count=1 (cwd None) (commands:211)
2021-05-07 10:29:50,143-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] SUCCESS: <err> = b'1+0 records in\n1+0 records out\n512 bytes copied, 0.000248951 s, 2.1 MB/s\n'; <rc> = 0 (commands:224)
2021-05-07 10:29:50,143-0500 DEBUG (v2v/57f84423) [storage.Misc] err: [b'1+0 records in', b'1+0 records out', b'512 bytes copied, 0.000248951 s, 2.1 MB/s'], size: 512 (misc:114)
2021-05-07 10:29:50,143-0500 DEBUG (v2v/57f84423) [root] /usr/bin/taskset --cpu-list 0-23 /usr/bin/qemu-img info --output json -U /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900 (cwd None) (commands:211)
2021-05-07 10:29:50,161-0500 DEBUG (v2v/57f84423) [root] SUCCESS: <err> = b''; <rc> = 0 (commands:224)
2021-05-07 10:29:50,162-0500 INFO (v2v/57f84423) [storage.StorageDomain] Creating symlink from /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7 to /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7 (blockSD:1332)
2021-05-07 10:29:50,162-0500 DEBUG (v2v/57f84423) [storage.StorageDomain] path to image directory already exists: /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7 (blockSD:1338)
2021-05-07 10:29:50,163-0500 INFO (v2v/57f84423) [vdsm.api] FINISH prepareImage return={'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'info': {'type': 'block', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900'}, 'imgVolumesInfo': [{'domainID': 'cc9fae8e-b714-44cf-9dac-3a83a15b0455', 'imageID': '226cc137-1992-4246-9484-80a1bfb5e9f7', 'volumeID': '847bc460-1b54-4756-8ced-4b969c399900', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'leasePath': '/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/leases', 'leaseOffset': 108003328}]} from=internal, task_id=58a7bdc0-0f7e-4307-92ba-040f1a272721 (api:54)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') finished: {'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'info': {'type': 'block', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900'}, 'imgVolumesInfo': [{'domainID': 'cc9fae8e-b714-44cf-9dac-3a83a15b0455', 'imageID': '226cc137-1992-4246-9484-80a1bfb5e9f7', 'volumeID': '847bc460-1b54-4756-8ced-4b969c399900', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'leasePath': '/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/leases', 'leaseOffset': 108003328}]} (task:1210)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') moving from state finished -> state finished (task:624)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Owner] Owner.releaseAll resources %s (resourceManager:742)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Trying to release resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (resourceManager:546)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Released resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (0 active users) (resourceManager:564)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free, finding out if anyone is waiting for it. (resourceManager:570)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] No one is waiting for resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', Clearing records. (resourceManager:578)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') ref 0 aborting False (task:1008)
2021-05-07 10:29:50,164-0500 INFO (v2v/57f84423) [root] Storing import log at: '/var/log/vdsm/import/import-57f84423-56cb-4187-86e2-f4208348e1f5-20210507T102950.log' (v2v:436)
2021-05-07 10:29:50,170-0500 DEBUG (v2v/57f84423) [root] /usr/bin/taskset --cpu-list 0-23 /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/libexec/vdsm/kvm2ovirt --uri qemu+tcp://root@172.16.0.61/system --bufsize 1048576 --source /var/lib/libvirt/images/vm_powervp-si.qcow2 --dest /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900 --storage-type volume --vm-name vm_powervp-si --allocation sparse (cwd None) (v2v:1511)
2021-05-07 10:29:50,175-0500 DEBUG (v2v/57f84423) [root] /usr/bin/taskset --cpu-list 0-23 /usr/bin/nice -n 19 /usr/bin/ionice -c 3 tee /var/log/vdsm/import/import-57f84423-56cb-4187-86e2-f4208348e1f5-20210507T102950.log (cwd None) (v2v:1511)
2021-05-07 10:29:50,274-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copying disk 1/1 (v2v:912)
2021-05-07 10:29:50,277-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 0/100 (v2v:921)
2021-05-07 10:29:51,277-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 0/100 (v2v:921)
2021-05-07 10:29:52,277-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 0/100 (v2v:921)
2021-05-07 10:30:14,283-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 10/100 (v2v:921)
2021-05-07 10:30:15,283-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 10/100 (v2v:921)
2021-05-07 10:30:16,283-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 10/100 (v2v:921)
2021-05-07 10:30:39,288-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 20/100 (v2v:921)
2021-05-07 10:30:40,288-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 20/100 (v2v:921)
2021-05-07 10:30:46,281-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 100/100 (v2v:921)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') moving from state preparing -> state preparing (task:624)
2021-05-07 10:30:46,407-0500 INFO (v2v/57f84423) [vdsm.api] START teardownImage(sdUUID='cc9fae8e-b714-44cf-9dac-3a83a15b0455', spUUID='24d9d2fa-98f9-11eb-aea7-00163e09cc71', imgUUID='226cc137-1992-4246-9484-80a1bfb5e9f7', volUUID=None) from=internal, task_id=ecbcd983-2f33-45a4-b962-7fc4b9342822 (api:48)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Trying to register resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' for lock type 'shared' (resourceManager:474)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free. Now locking as 'shared' (1 active user) (resourceManager:531)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Request] (ResName='00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', ReqID='1744a1a5-d543-4528-be79-c752bce08263') Granted request (resourceManager:221)
2021-05-07 10:30:46,408-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') _resourcesAcquired: 00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455 (shared) (task:856)
2021-05-07 10:30:46,408-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') ref 1 aborting False (task:1008)
2021-05-07 10:30:46,408-0500 INFO (v2v/57f84423) [storage.StorageDomain] Removing image run directory '/run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7' (blockSD:1386)
2021-05-07 10:30:46,408-0500 INFO (v2v/57f84423) [storage.fileUtils] Removing directory: /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7 (fileUtils:182)
2021-05-07 10:30:46,408-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cc9fae8e-b714-44cf-9dac-3a83a15b0455 (cwd None) (commands:153)
2021-05-07 10:30:46,510-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:30:46,511-0500 DEBUG (v2v/57f84423) [storage.LVM] lvs reloaded (lvm:759)
2021-05-07 10:30:46,512-0500 INFO (v2v/57f84423) [storage.LVM] Deactivating lvs: vg=cc9fae8e-b714-44cf-9dac-3a83a15b0455 lvs=['847bc460-1b54-4756-8ced-4b969c399900'] (lvm:1746)
2021-05-07 10:30:46,512-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvchange --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --autobackup n --available n cc9fae8e-b714-44cf-9dac-3a83a15b0455/847bc460-1b54-4756-8ced-4b969c399900 (cwd None) (commands:153)
2021-05-07 10:30:46,629-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:30:46,630-0500 INFO (v2v/57f84423) [vdsm.api] FINISH teardownImage return=None from=internal, task_id=ecbcd983-2f33-45a4-b962-7fc4b9342822 (api:54)
2021-05-07 10:30:46,630-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') finished: None (task:1210)
2021-05-07 10:30:46,630-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') moving from state finished -> state finished (task:624)
2021-05-07 10:30:46,630-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Owner] Owner.releaseAll resources %s (resourceManager:742)
2021-05-07 10:30:46,630-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Trying to release resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (resourceManager:546)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Released resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (0 active users) (resourceManager:564)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free, finding out if anyone is waiting for it. (resourceManager:570)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] No one is waiting for resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', Clearing records. (resourceManager:578)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') ref 0 aborting False (task:1008)
2021-05-07 10:30:46,631-0500 ERROR (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' failed (v2v:869)
2021-05-07 10:30:46,635-0500 DEBUG (v2v/57f84423) [root] FINISH thread <Thread(v2v/57f84423, stopped daemon 140273162123008)> (concurrent:261)
-- Details of the environment:
# df -Ph
Filesystem Size Used Avail Use% Mounted on
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 4.0K 32G 1% /dev/shm
tmpfs 32G 26M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.5.1--0.20210323.0+1 584G 11G 573G 2% /
/dev/mapper/onn-home 1014M 40M 975M 4% /home
/dev/mapper/onn-tmp 1014M 40M 975M 4% /tmp
/dev/sda2 1014M 479M 536M 48% /boot
/dev/mapper/onn-var 30G 3.2G 27G 11% /var
/dev/sda1 599M 6.9M 592M 2% /boot/efi
/dev/mapper/onn-var_log 8.0G 498M 7.6G 7% /var/log
/dev/mapper/onn-var_crash 10G 105M 9.9G 2% /var/crash
/dev/mapper/onn-var_log_audit 2.0G 84M 2.0G 5% /var/log/audit
tmpfs 6.3G 0 6.3G 0% /run/user/0
/dev/mapper/da3e3aff--0bfc--42cd--944f--f6145c50134a-master 976M 1.3M 924M 1% /rhev/data-center/mnt/blockSD/da3e3aff-0bfc-42cd-944f-f6145c50134a/master
/dev/mapper/onn-lv_iso 12G 11G 1.6G 88% /rhev/data-center/mnt/_dev_mapper_onn-lv__iso
172.19.1.80:/exportdomain 584G 11G 573G 2% /rhev/data-center/mnt/172.19.1.80:_exportdomain
* Inodes available = 99%.
# qemu-img info /var/lib/libvirt/images/vm_powervp-si.qcow2
image: /var/lib/libvirt/images/vm_powervp-si.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 4.2G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: true
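In case it helps narrow this down, a hedged first check on the host that ran the import: compare the logical volume that was created on the destination block storage domain against the source disk's virtual size (the VG name and image path below are copied from the logs above):
# LVs in the destination storage domain VG, with sizes in GiB.
lvs --units g cc9fae8e-b714-44cf-9dac-3a83a15b0455
# Virtual vs. on-disk size of the source image on the KVM host.
qemu-img info /var/lib/libvirt/images/vm_powervp-si.qcow2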