So I have a few VMs that are locked and unable to start on either
hypervisor. This happened after the hosted engine switched hosts for some
reason. It looks like the disk image is locked, but I'm not sure how to
unlock it. Any advice is appreciated.
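
For reference, this is roughly how I'm checking the disk state through the
REST API, using the python SDK that's already installed
(python-ovirt-engine-sdk4). It's just a sketch of my own; the engine URL
and password are placeholders, and the VM name is the one from the vdsm
log below.

#!/usr/bin/env python
# Sketch: ask the engine what state it thinks the VM and its disks are in.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.lan/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',                                  # placeholder
    insecure=True,
)

vms_service = connection.system_service().vms_service()
disks_service = connection.system_service().disks_service()

# VM name taken from the domain XML in the vdsm log below.
vm = vms_service.list(search='name=idm1-runlevelone-lan')[0]
print('VM status: %s' % vm.status)

for att in vms_service.vm_service(vm.id).disk_attachments_service().list():
    disk = disks_service.disk_service(att.disk.id).get()
    print('disk %s (%s): %s' % (disk.alias, disk.id, disk.status))

connection.close()

I'm mainly trying to confirm whether the engine still sees the disk as
LOCKED or whether the problem is only on the storage side.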
Thanks,
Dan
Version
-------------
glusterfs-3.8.4-54.8.el7rhgs.x86_64
vdsm-4.20.27.2-1.el7ev.x86_64
ovirt-ansible-disaster-recovery-0.4-1.el7ev.noarch
ovirt-engine-extension-aaa-ldap-1.3.7-1.el7ev.noarch
ovirt-vmconsole-proxy-1.0.5-4.el7ev.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.2.3.8-0.1.el7.noarch
ovirt-engine-extensions-api-impl-4.2.3.8-0.1.el7.noarch
ovirt-imageio-proxy-setup-1.3.1.2-0.el7ev.noarch
ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7ev.noarch
ovirt-engine-webadmin-portal-4.2.3.4-0.1.el7.noarch
ovirt-engine-backend-4.2.3.4-0.1.el7.noarch
ovirt-host-deploy-1.7.3-1.el7ev.noarch
ovirt-cockpit-sso-0.0.4-1.el7ev.noarch
ovirt-ansible-infra-1.1.5-1.el7ev.noarch
ovirt-provider-ovn-1.2.10-1.el7ev.noarch
ovirt-engine-setup-4.2.3.8-0.1.el7.noarch
ovirt-setup-lib-1.1.4-1.el7ev.noarch
ovirt-engine-dwh-4.2.2.2-1.el7ev.noarch
ovirt-js-dependencies-1.2.0-3.1.el7ev.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7ev.noarch
ovirt-log-collector-4.2.5-2.el7ev.noarch
ovirt-ansible-v2v-conversion-host-1.1.2-1.el7ev.noarch
ovirt-ansible-cluster-upgrade-1.1.7-1.el7ev.noarch
ovirt-ansible-image-template-1.1.6-2.el7ev.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.2.3.8-0.1.el7.noarch
ovirt-engine-websocket-proxy-4.2.3.8-0.1.el7.noarch
ovirt-engine-tools-backup-4.2.3.4-0.1.el7.noarch
ovirt-engine-restapi-4.2.3.4-0.1.el7.noarch
ovirt-engine-tools-4.2.3.4-0.1.el7.noarch
ovirt-imageio-common-1.3.1.2-0.el7ev.noarch
ovirt-engine-cli-3.6.8.1-1.el7ev.noarch
ovirt-web-ui-1.3.9-1.el7ev.noarch
ovirt-ansible-manageiq-1.1.8-1.el7ev.noarch
ovirt-ansible-roles-1.1.4-2.el7ev.noarch
ovirt-engine-lib-4.2.3.8-0.1.el7.noarch
ovirt-vmconsole-1.0.5-4.el7ev.noarch
ovirt-engine-setup-base-4.2.3.8-0.1.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.2.3.8-0.1.el7.noarch
ovirt-host-deploy-java-1.7.3-1.el7ev.noarch
ovirt-engine-dashboard-1.2.3-2.el7ev.noarch
ovirt-engine-4.2.3.4-0.1.el7.noarch
python-ovirt-engine-sdk4-4.2.6-1.el7ev.x86_64
ovirt-engine-metrics-1.1.4.2-1.el7ev.noarch
ovirt-engine-vmconsole-proxy-helper-4.2.3.8-0.1.el7.noarch
ovirt-imageio-proxy-1.3.1.2-0.el7ev.noarch
ovirt-engine-dwh-setup-4.2.2.2-1.el7ev.noarch
ovirt-guest-agent-common-1.0.14-3.el7ev.noarch
ovirt-ansible-vm-infra-1.1.7-1.el7ev.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.2.3.8-0.1.el7.noarch
ovirt-engine-api-explorer-0.0.1-1.el7ev.noarch
ovirt-engine-dbscripts-4.2.3.4-0.1.el7.noarch
ovirt-iso-uploader-4.2.0-1.el7ev.noarch
---
VDSM log
---------------
2018-06-06 01:07:12,940-0400 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
call Host.getStorageRepoStats succeeded in 0.01 seconds (__init__:573)
2018-06-06 01:07:12,948-0400 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-06-06 01:07:13,068-0400 INFO (periodic/3) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=3e30ead8-20b6-449d-a3d3-684a9d20e2c2 (api:46)
2018-06-06 01:07:13,068-0400 INFO (periodic/3) [vdsm.api] FINISH repoStats
return={u'f7dfffc3-9d69-4d20-83fc-c3d4324430a2': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000482363', 'lastCheck': '2.2', 'valid': True},
u'ca5bf4c5-43d8-4d88-ae64-78f87ce016b1': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00143521', 'lastCheck': '2.2', 'valid': True},
u'f4e26e9a-427b-44f2-9ecf-5d789b56a1be': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000832749', 'lastCheck': '2.2', 'valid': True},
u'a4c70c2d-98f2-4394-a6fc-c087a31b21d3': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000280917', 'lastCheck': '2.1', 'valid': True},
u'30cee3ab-83a3-4bf4-a674-023df575c3da': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00128562', 'lastCheck': '2.1', 'valid': True}}
from=internal, task_id=3e30ead8-20b6-449d-a3d3-684a9d20e2c2 (api:52)
2018-06-06 01:07:13,069-0400 INFO (periodic/3) [vdsm.api] START
multipath_health() from=internal,
task_id=7064b06c-14a2-4bfd-8c31-b650918b7287 (api:46)
2018-06-06 01:07:13,069-0400 INFO (periodic/3) [vdsm.api] FINISH
multipath_health return={} from=internal,
task_id=7064b06c-14a2-4bfd-8c31-b650918b7287 (api:52)
2018-06-06 01:07:13,099-0400 INFO (vm/78754822) [root]
/usr/libexec/vdsm/hooks/before_vm_start/50_hostedengine: rc=0 err=
(hooks:110)
2018-06-06 01:07:13,350-0400 INFO (vm/78754822) [root]
/usr/libexec/vdsm/hooks/before_vm_start/50_vfio_mdev: rc=0 err= (hooks:110)
2018-06-06 01:07:13,578-0400 INFO (vm/78754822) [root]
/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd: rc=0 err= (hooks:110)
2018-06-06 01:07:13,579-0400 INFO (vm/78754822) [virt.vm]
(vmId='78754822-2bd3-4acc-a029-906b7a167c8e') <?xml version="1.0"
encoding="utf-8"?><domain type="kvm"
xmlns:ns0="http://ovirt.org/vm/tune/1.0"
xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<name>idm1-runlevelone-lan</name>
<uuid>78754822-2bd3-4acc-a029-906b7a167c8e</uuid>
<memory>2097152</memory>
<currentMemory>2097152</currentMemory>
<maxMemory slots="16">8388608</maxMemory>
<vcpu current="2">16</vcpu>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">oVirt</entry>
<entry name="product">RHEV Hypervisor</entry>
<entry name="version">7.5-8.el7</entry>
<entry
name="serial">30333436-3638-5355-4532-313631574337</entry>
<entry
name="uuid">78754822-2bd3-4acc-a029-906b7a167c8e</entry>
</system>
</sysinfo>
<clock adjustment="0" offset="variable">
<timer name="rtc" tickpolicy="catchup"/>
<timer name="pit" tickpolicy="delay"/>
<timer name="hpet" present="no"/>
</clock>
<features>
<acpi/>
<vmcoreinfo/>
</features>
<cpu match="exact">
<model>Nehalem</model>
<topology cores="1" sockets="16"
threads="1"/>
<numa>
<cell cpus="0,1" id="0"
memory="2097152"/>
</numa>
</cpu>
<cputune/>
<devices>
<input bus="ps2" type="mouse"/>
<channel type="unix">
<target name="ovirt-guest-agent.0" type="virtio"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/78754822-2bd3-4acc-a029-906b7a167c8e.ovirt-guest-agent.0"/>
</channel>
<channel type="unix">
<target name="org.qemu.guest_agent.0"
type="virtio"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/78754822-2bd3-4acc-a029-906b7a167c8e.org.qemu.guest_agent.0"/>
</channel>
<graphics autoport="yes" passwd="*****"
passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1"
type="spice">
<channel mode="secure" name="main"/>
<channel mode="secure" name="inputs"/>
<channel mode="secure" name="cursor"/>
<channel mode="secure" name="playback"/>
<channel mode="secure" name="record"/>
<channel mode="secure" name="display"/>
<channel mode="secure" name="smartcard"/>
<channel mode="secure" name="usbredir"/>
<listen network="vdsm-ovirtmgmt" type="network"/>
</graphics>
<rng model="virtio">
<backend model="random">/dev/urandom</backend>
<alias name="ua-1b3d2efc-5605-4b5b-afde-7e75369d0191"/>
</rng>
<controller index="0" model="piix3-uhci"
type="usb">
<address bus="0x00" domain="0x0000"
function="0x2" slot="0x01"
type="pci"/>
</controller>
<controller type="ide">
<address bus="0x00" domain="0x0000"
function="0x1" slot="0x01"
type="pci"/>
</controller>
<controller index="0" ports="16"
type="virtio-serial">
<alias name="ua-c27a9db4-39dc-436e-8b21-b2cd12aeb3dc"/>
<address bus="0x00" domain="0x0000"
function="0x0" slot="0x05"
type="pci"/>
</controller>
<memballoon model="virtio">
<stats period="5"/>
<alias name="ua-c82a301f-e476-4107-b954-166bbdd65f03"/>
<address bus="0x00" domain="0x0000"
function="0x0" slot="0x06"
type="pci"/>
</memballoon>
<controller index="0" model="virtio-scsi"
type="scsi">
<alias name="ua-d8d0e95b-80e0-4d7d-91d6-4faf0f266c6e"/>
<address bus="0x00" domain="0x0000"
function="0x0" slot="0x04"
type="pci"/>
</controller>
<video>
<model heads="1" ram="65536" type="qxl"
vgamem="16384"
vram="32768"/>
<alias name="ua-f0c36e10-652c-4fc2-87e8-737271baebca"/>
<address bus="0x00" domain="0x0000"
function="0x0" slot="0x02"
type="pci"/>
</video>
<channel type="spicevmc">
<target name="com.redhat.spice.0" type="virtio"/>
</channel>
<disk device="cdrom" snapshot="no"
type="file">
<driver error_policy="report" name="qemu"
type="raw"/>
<source file="" startupPolicy="optional"/>
<target bus="ide" dev="hdc"/>
<readonly/>
<alias name="ua-74a927f8-31ac-41c1-848e-599078655d77"/>
<address bus="1" controller="0" target="0"
type="drive"
unit="0"/>
<boot order="2"/>
</disk>
<disk device="disk" snapshot="no"
type="file">
<target bus="scsi" dev="sda"/>
<source
file="/rhev/data-center/mnt/glusterSD/deadpool.ib.runlevelone.lan:rhev__vms/30cee3ab-83a3-4bf4-a674-023df575c3da/images/0d38d154-cbd7-491b-ac25-c96fd5fe3830/5c93d0b3-4dfa-4114-a403-09f2e8c67bfc"/>
<driver cache="none" error_policy="stop"
io="threads"
name="qemu" type="raw"/>
<alias name="ua-0d38d154-cbd7-491b-ac25-c96fd5fe3830"/>
<address bus="0" controller="0" target="0"
type="drive"
unit="0"/>
<boot order="1"/>
<serial>0d38d154-cbd7-491b-ac25-c96fd5fe3830</serial>
</disk>
<interface type="bridge">
<model type="virtio"/>
<link state="up"/>
<source bridge="lab"/>
<alias name="ua-db30b82a-c181-48cf-901f-29b568576ec7"/>
<address bus="0x00" domain="0x0000"
function="0x0" slot="0x03"
type="pci"/>
<mac address="00:1a:4a:16:01:63"/>
<filterref filter="vdsm-no-mac-spoofing"/>
<bandwidth/>
</interface>
</devices>
<pm>
<suspend-to-disk enabled="no"/>
<suspend-to-mem enabled="no"/>
</pm>
<os>
<type arch="x86_64"
machine="pc-i440fx-rhel7.5.0">hvm</type>
<smbios mode="sysinfo"/>
</os>
<metadata>
<ns0:qos/>
<ovirt-vm:vm>
<minGuaranteedMemoryMb
type="int">1365</minGuaranteedMemoryMb>
<clusterVersion>4.2</clusterVersion>
<ovirt-vm:custom/>
<ovirt-vm:device mac_address="00:1a:4a:16:01:63">
<ovirt-vm:custom/>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:poolID>946fd87c-6327-11e8-b7d9-00163e751a4c</ovirt-vm:poolID>
<ovirt-vm:volumeID>5c93d0b3-4dfa-4114-a403-09f2e8c67bfc</ovirt-vm:volumeID>
<ovirt-vm:imageID>0d38d154-cbd7-491b-ac25-c96fd5fe3830</ovirt-vm:imageID>
<ovirt-vm:domainID>30cee3ab-83a3-4bf4-a674-023df575c3da</ovirt-vm:domainID>
</ovirt-vm:device>
<launchPaused>false</launchPaused>
<resumeBehavior>auto_resume</resumeBehavior>
</ovirt-vm:vm>
</metadata>
</domain> (vm:2867)
2018-06-06 01:07:14,584-0400 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-06-06 01:07:14,590-0400 INFO (jsonrpc/4) [api.virt] START getStats()
from=::1,60908, vmId=d237b932-35fa-4b98-97e2-cb0afce1b3a8 (api:46)
2018-06-06 01:07:14,590-0400 INFO (jsonrpc/4) [api] FINISH getStats
error=Virtual machine does not exist: {'vmId': u'd237b932-35fa-4b98-97e2-cb0afce1b3a8'} (api:127)
2018-06-06 01:07:14,590-0400 INFO (jsonrpc/4) [api.virt] FINISH getStats
return={'status': {'message': "Virtual machine does not exist: {'vmId': u'd237b932-35fa-4b98-97e2-cb0afce1b3a8'}", 'code': 1}}
from=::1,60908, vmId=d237b932-35fa-4b98-97e2-cb0afce1b3a8 (api:52)
2018-06-06 01:07:14,591-0400 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC
call VM.getStats failed (error 1) in 0.00 seconds (__init__:573)
2018-06-06 01:07:14,675-0400 INFO (jsonrpc/0) [api.host] START
getAllVmStats() from=::1,60914 (api:46)
2018-06-06 01:07:14,677-0400 INFO (jsonrpc/0) [api.host] FINISH
getAllVmStats return={'status': {'message': 'Done', 'code': 0},
'statsList': (suppressed)} from=::1,60914 (api:52)
2018-06-06 01:07:14,678-0400 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573)
2018-06-06 01:07:15,557-0400 ERROR (vm/78754822) [virt.vm]
(vmId='78754822-2bd3-4acc-a029-906b7a167c8e') The vm start process failed
(vm:943)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in
_startUnderlyingVm
self._run()
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in
_run
dom.createWithFlags(flags)
File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
line 130, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92,
in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in
createWithFlags
if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed',
dom=self)
libvirtError: internal error: qemu unexpectedly closed the monitor:
2018-06-06T05:07:14.703253Z qemu-kvm: warning: All CPU(s) up to maxcpus
should be described in NUMA config, ability to start up with partial NUMA
mappings is obsoleted and will be removed in future
2018-06-06T05:07:14.798631Z qemu-kvm: -device
scsi-hd,bus=ua-d8d0e95b-80e0-4d7d-91d6-4faf0f266c6e.0,channel=0,scsi-id=0,lun=0,drive=drive-ua-0d38d154-cbd7-491b-ac25-c96fd5fe3830,id=ua-0d38d154-cbd7-491b-ac25-c96fd5fe3830,bootindex=1:
Failed to get shared "write" lock
Is another process using the image?
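
The last two lines of that qemu error suggest something still has the image
open. This is the check I'm planning to run as root on each hypervisor to
see what (if anything) is holding it; just a quick sketch of mine, with the
image path copied from the log above.

#!/usr/bin/env python
# Sketch: walk /proc and report any process that still has the disk image
# open, which would explain the "Failed to get shared 'write' lock" error.
import glob
import os

IMAGE = ("/rhev/data-center/mnt/glusterSD/deadpool.ib.runlevelone.lan:rhev__vms/"
         "30cee3ab-83a3-4bf4-a674-023df575c3da/images/"
         "0d38d154-cbd7-491b-ac25-c96fd5fe3830/"
         "5c93d0b3-4dfa-4114-a403-09f2e8c67bfc")

for fd in glob.glob('/proc/[0-9]*/fd/*'):
    try:
        if os.readlink(fd) == IMAGE:
            pid = fd.split('/')[2]
            with open('/proc/%s/cmdline' % pid) as cmdline:
                print('%s %s' % (pid, cmdline.read().replace('\0', ' ')))
    except (OSError, IOError):
        pass  # process went away or fd not readable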