Hi,
I've now created a new pool without USB support. After creating the
pool, I restarted ovirt-engine, as I could not start the VMs from the
pool (it indicated a similar request was already running).
Once ovirt-engine was restarted, I logged into the VM portal and was
able to launch a VM from the new pool. Once booted, I powered it down
(from within the VM). When the VM portal UI indicated the VM was down,
I clicked Run again to launch a new instance from the pool. The same
error as before came up, claiming there is no VM available (which is
incorrect, as the pool is larger than one VM and no VMs are running at
that point).
The errors and warnings below are logged (INFO lines stripped). The
engine seems to try to release a lock which does not exist, or
references it with the wrong id. Is there a way to trace which locks
are currently being held? Are they stored persistently somewhere that
may be causing my issues?
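Judging by the class name in the warning below (org.ovirt.engine.core.bll.lock.InMemoryLockManager), the locks appear to live only in engine memory rather than in persistent storage, so engine.log is the practical place to trace them. A minimal filter sketch, assuming the default log location; the "Lock freed to object" phrasing is an assumption for the release message (only "Lock Acquired" and "Trying to release" appear verbatim in this excerpt):

```python
def lock_events(path="/var/log/ovirt-engine/engine.log"):
    """Return engine.log lines that acquire, free, or fail to release locks."""
    markers = (
        "Lock Acquired to object",  # acquire message, as seen in this thread
        "Lock freed to object",     # release message (assumed phrasing)
        "Trying to release",        # failed release, as in the warning below
    )
    with open(path) as f:
        return [line.rstrip("\n") for line in f
                if any(m in line for m in markers)]
```

Restarting the engine, reproducing the failure once, and diffing the acquires against the frees for the USER_VM_POOL lock type should show which key is never released.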
Regards,
Rik
2019-05-21 09:16:39,342+02 ERROR
[org.ovirt.engine.core.bll.GetPermissionsForObjectQuery] (default
task-2) [5e16cd77-01c8-43ed-80d2-c85452732570] Query execution failed
due to insufficient permissions.
2019-05-21 09:16:39,345+02 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
task-2) [] Operation Failed: query execution failed due to insufficient
permissions.
<name>testpool-1</name>
<uuid>6489cc72-8ca5-4943-901c-bbd405bdac68</uuid>
<memory>8388608</memory>
<currentMemory>8388608</currentMemory>
<maxMemory slots="16">33554432</maxMemory>
<vcpu current="4">16</vcpu>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">oVirt</entry>
<entry name="product">OS-NAME:</entry>
<entry name="version">OS-VERSION:</entry>
<entry name="serial">HOST-SERIAL:</entry>
<entry
name="uuid">6489cc72-8ca5-4943-901c-bbd405bdac68</entry>
</system>
</sysinfo>
<clock offset="variable" adjustment="0">
<timer name="rtc" tickpolicy="catchup"/>
<timer name="pit" tickpolicy="delay"/>
<timer name="hpet" present="no"/>
</clock>
<features>
<acpi/>
</features>
<cpu match="exact">
<model>Haswell-noTSX</model>
<feature name="spec-ctrl" policy="require"/>
<feature name="ssbd" policy="require"/>
<topology cores="1" threads="1" sockets="16"/>
<numa>
<cell id="0" cpus="0,1,2,3" memory="8388608"/>
</numa>
</cpu>
<cputune/>
<devices>
<input type="mouse" bus="ps2"/>
<channel type="unix">
<target type="virtio" name="ovirt-guest-agent.0"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/6489cc72-8ca5-4943-901c-bbd405bdac68.ovirt-guest-agent.0"/>
</channel>
<channel type="unix">
<target type="virtio" name="org.qemu.guest_agent.0"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/6489cc72-8ca5-4943-901c-bbd405bdac68.org.qemu.guest_agent.0"/>
</channel>
<memballoon model="virtio">
<stats period="5"/>
<alias name="ua-0a488d46-765e-40d6-965a-5568dcd3fda0"/>
<address bus="0x00" domain="0x0000" function="0x0"
slot="0x08"
type="pci"/>
</memballoon>
<controller type="scsi" model="virtio-scsi"
index="0">
<alias name="ua-2374b7bb-810a-47d9-8be7-97655a4cfb0b"/>
<address bus="0x00" domain="0x0000" function="0x0"
slot="0x05"
type="pci"/>
</controller>
<controller type="virtio-serial" index="0"
ports="16">
<alias name="ua-27074498-9c11-4d97-9711-778ae923b9b0"/>
<address bus="0x00" domain="0x0000" function="0x0"
slot="0x07"
type="pci"/>
</controller>
<sound model="ich6">
<alias name="ua-46ed1231-ff6f-4970-9d3c-2fea8436f38b"/>
<address bus="0x00" domain="0x0000" function="0x0"
slot="0x04"
type="pci"/>
</sound>
<graphics type="vnc" port="-1" autoport="yes"
passwd="*****"
passwdValidTo="1970-01-01T00:00:01" keymap="en-us">
<listen type="network" network="vdsm-ovirtmgmt"/>
</graphics>
<controller type="usb" model="nec-xhci"
index="0">
<address bus="0x00" domain="0x0000" function="0x0"
slot="0x06"
type="pci"/>
</controller>
<controller type="ide">
<address bus="0x00" domain="0x0000" function="0x1"
slot="0x01"
type="pci"/>
</controller>
<video>
<model type="qxl" vram="32768" heads="1"
ram="65536" vgamem="16384"/>
<alias name="ua-beea631e-9e14-46be-aaec-4e24e06bcd25"/>
<address bus="0x00" domain="0x0000" function="0x0"
slot="0x02"
type="pci"/>
</video>
<graphics type="spice" port="-1" autoport="yes"
passwd="*****"
passwdValidTo="1970-01-01T00:00:01" tlsPort="-1">
<channel name="main" mode="secure"/>
<channel name="inputs" mode="secure"/>
<channel name="cursor" mode="secure"/>
<channel name="playback" mode="secure"/>
<channel name="record" mode="secure"/>
<channel name="display" mode="secure"/>
<channel name="smartcard" mode="secure"/>
<channel name="usbredir" mode="secure"/>
<listen type="network" network="vdsm-ovirtmgmt"/>
</graphics>
<rng model="virtio">
<backend model="random">/dev/urandom</backend>
<alias name="ua-c8b3ba43-38d6-40a2-abf0-45cbf645b498"/>
</rng>
<channel type="spicevmc">
<target type="virtio" name="com.redhat.spice.0"/>
</channel>
<interface type="bridge">
<model type="virtio"/>
<link state="up"/>
<source bridge="ea-students"/>
<driver queues="4" name="vhost"/>
<alias name="ua-25b08af8-a164-46d3-9377-5585b7c946de"/>
<address bus="0x00" domain="0x0000" function="0x0"
slot="0x03"
type="pci"/>
<mac address="00:1a:4a:16:01:85"/>
<mtu size="1500"/>
<filterref filter="vdsm-no-mac-spoofing"/>
<bandwidth/>
</interface>
<disk type="file" device="cdrom" snapshot="no">
<driver name="qemu" type="raw"
error_policy="report"/>
<source file="" startupPolicy="optional">
<seclabel model="dac" type="none"
relabel="no"/>
</source>
<target dev="hdc" bus="ide"/>
<readonly/>
<alias name="ua-e8f9d30f-469f-4d1a-966b-720dad429620"/>
<address bus="1" controller="0" unit="0"
type="drive" target="0"/>
</disk>
<disk snapshot="no" type="block" device="disk">
<target dev="sda" bus="scsi"/>
<source
dev="/rhev/data-center/mnt/blockSD/4194c70d-5b7e-441f-af6b-7d8754e89572/images/9651890f-9b0c-4857-abae-77b8b543a897/8690340a-8d4d-4d04-a8ab-b18fe6cbb78b">
<seclabel model="dac" type="none"
relabel="no"/>
</source>
<driver name="qemu" io="native" type="qcow2"
error_policy="stop"
cache="none"/>
<alias name="ua-9651890f-9b0c-4857-abae-77b8b543a897"/>
<address bus="0" controller="0" unit="0"
type="drive" target="0"/>
<boot order="1"/>
<serial>9651890f-9b0c-4857-abae-77b8b543a897</serial>
</disk>
</devices>
<pm>
<suspend-to-disk enabled="no"/>
<suspend-to-mem enabled="no"/>
</pm>
<os>
<type arch="x86_64"
machine="pc-i440fx-rhel7.6.0">hvm</type>
<smbios mode="sysinfo"/>
</os>
<metadata>
<ovirt-tune:qos/>
<ovirt-vm:vm>
<ovirt-vm:minGuaranteedMemoryMb
type="int">4096</ovirt-vm:minGuaranteedMemoryMb>
<ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion>
<ovirt-vm:custom/>
<ovirt-vm:device mac_address="00:1a:4a:16:01:85">
<ovirt-vm:custom/>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:poolID>5982eba0-03b9-0281-0363-00000000037a</ovirt-vm:poolID>
<ovirt-vm:volumeID>8690340a-8d4d-4d04-a8ab-b18fe6cbb78b</ovirt-vm:volumeID>
<ovirt-vm:imageID>9651890f-9b0c-4857-abae-77b8b543a897</ovirt-vm:imageID>
<ovirt-vm:domainID>4194c70d-5b7e-441f-af6b-7d8754e89572</ovirt-vm:domainID>
</ovirt-vm:device>
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
<ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
</ovirt-vm:vm>
</metadata>
</domain>
2019-05-21 09:16:44,399+02 WARN
[org.ovirt.engine.core.bll.lock.InMemoryLockManager] (default task-9)
[8a9f8c3f-e441-4121-aa7f-5b2cf26da6bb] Trying to release exclusive lock
which does not exist, lock key:
'a5bed59c-d2fe-4fe4-bff7-52efe089ebd6USER_VM_POOL'
<name>testpool-1</name>
<uuid>6489cc72-8ca5-4943-901c-bbd405bdac68</uuid>
<metadata>
<ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion>
<ovirt-vm:destroy_on_reboot
type="bool">False</ovirt-vm:destroy_on_reboot>
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
<ovirt-vm:memGuaranteedSize
type="int">4096</ovirt-vm:memGuaranteedSize>
<ovirt-vm:minGuaranteedMemoryMb
type="int">4096</ovirt-vm:minGuaranteedMemoryMb>
<ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
<ovirt-vm:startTime
type="float">1558423004.25</ovirt-vm:startTime>
<ovirt-vm:device mac_address="00:1a:4a:16:01:85">
<ovirt-vm:network>ea-students</ovirt-vm:network>
<ovirt-vm:custom>
<ovirt-vm:queues>4</ovirt-vm:queues>
</ovirt-vm:custom>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:domainID>4194c70d-5b7e-441f-af6b-7d8754e89572</ovirt-vm:domainID>
<ovirt-vm:imageID>9651890f-9b0c-4857-abae-77b8b543a897</ovirt-vm:imageID>
<ovirt-vm:poolID>5982eba0-03b9-0281-0363-00000000037a</ovirt-vm:poolID>
<ovirt-vm:volumeID>8690340a-8d4d-4d04-a8ab-b18fe6cbb78b</ovirt-vm:volumeID>
<ovirt-vm:volumeChain>
<ovirt-vm:volumeChainNode>
<ovirt-vm:domainID>4194c70d-5b7e-441f-af6b-7d8754e89572</ovirt-vm:domainID>
<ovirt-vm:imageID>9651890f-9b0c-4857-abae-77b8b543a897</ovirt-vm:imageID>
<ovirt-vm:leaseOffset
type="int">119537664</ovirt-vm:leaseOffset>
<ovirt-vm:leasePath>/dev/4194c70d-5b7e-441f-af6b-7d8754e89572/leases</ovirt-vm:leasePath>
<ovirt-vm:path>/rhev/data-center/mnt/blockSD/4194c70d-5b7e-441f-af6b-7d8754e89572/images/9651890f-9b0c-4857-abae-77b8b543a897/5f606a1e-3377-45b7-91d0-6398f7694c45</ovirt-vm:path>
<ovirt-vm:volumeID>5f606a1e-3377-45b7-91d0-6398f7694c45</ovirt-vm:volumeID>
</ovirt-vm:volumeChainNode>
<ovirt-vm:volumeChainNode>
<ovirt-vm:domainID>4194c70d-5b7e-441f-af6b-7d8754e89572</ovirt-vm:domainID>
<ovirt-vm:imageID>9651890f-9b0c-4857-abae-77b8b543a897</ovirt-vm:imageID>
<ovirt-vm:leaseOffset
type="int">108003328</ovirt-vm:leaseOffset>
<ovirt-vm:leasePath>/dev/4194c70d-5b7e-441f-af6b-7d8754e89572/leases</ovirt-vm:leasePath>
<ovirt-vm:path>/rhev/data-center/mnt/blockSD/4194c70d-5b7e-441f-af6b-7d8754e89572/images/9651890f-9b0c-4857-abae-77b8b543a897/8690340a-8d4d-4d04-a8ab-b18fe6cbb78b</ovirt-vm:path>
<ovirt-vm:volumeID>8690340a-8d4d-4d04-a8ab-b18fe6cbb78b</ovirt-vm:volumeID>
</ovirt-vm:volumeChainNode>
</ovirt-vm:volumeChain>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="hdc"/>
</ovirt-vm:vm>
</metadata>
<maxMemory slots='16' unit='KiB'>33554432</maxMemory>
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<vcpu placement='static' current='4'>16</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>oVirt</entry>
<entry name='product'>oVirt Node</entry>
<entry name='version'>7-6.1810.2.el7.centos</entry>
<entry
name='serial'>4C4C4544-004D-4710-8046-B8C04F584A32</entry>
<entry name='uuid'>6489cc72-8ca5-4943-901c-bbd405bdac68</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64'
machine='pc-i440fx-rhel7.6.0'>hvm</type>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Haswell-noTSX</model>
<topology sockets='16' cores='1' threads='1'/>
<feature policy='require' name='spec-ctrl'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='vme'/>
<feature policy='require' name='f16c'/>
<feature policy='require' name='rdrand'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='xsaveopt'/>
<feature policy='require' name='abm'/>
<numa>
<cell id='0' cpus='0-3' memory='8388608'
unit='KiB'/>
</numa>
</cpu>
<clock offset='variable' adjustment='0' basis='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' error_policy='report'/>
<source startupPolicy='optional'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ua-e8f9d30f-469f-4d1a-966b-720dad429620'/>
<address type='drive' controller='0' bus='1'
target='0' unit='0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none'
error_policy='stop'
io='native'/>
<source
dev='/rhev/data-center/mnt/blockSD/4194c70d-5b7e-441f-af6b-7d8754e89572/images/9651890f-9b0c-4857-abae-77b8b543a897/8690340a-8d4d-4d04-a8ab-b18fe6cbb78b'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore type='block' index='1'>
<format type='raw'/>
<source
dev='/rhev/data-center/mnt/blockSD/4194c70d-5b7e-441f-af6b-7d8754e89572/images/9651890f-9b0c-4857-abae-77b8b543a897/5f606a1e-3377-45b7-91d0-6398f7694c45'/>
<backingStore/>
</backingStore>
<target dev='sda' bus='scsi'/>
<serial>9651890f-9b0c-4857-abae-77b8b543a897</serial>
<boot order='1'/>
<alias name='ua-9651890f-9b0c-4857-abae-77b8b543a897'/>
<address type='drive' controller='0' bus='0'
target='0' unit='0'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
<alias name='ua-2374b7bb-810a-47d9-8be7-97655a4cfb0b'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x05'
function='0x0'/>
</controller>
<controller type='virtio-serial' index='0' ports='16'>
<alias name='ua-27074498-9c11-4d97-9711-778ae923b9b0'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x07'
function='0x0'/>
</controller>
<controller type='usb' index='0' model='nec-xhci'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x06'
function='0x0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x01'
function='0x1'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<interface type='bridge'>
<mac address='00:1a:4a:16:01:85'/>
<source bridge='ea-students'/>
<target dev='vnet3'/>
<model type='virtio'/>
<driver name='vhost' queues='4'/>
<filterref filter='vdsm-no-mac-spoofing'/>
<link state='up'/>
<mtu size='1500'/>
<alias name='ua-25b08af8-a164-46d3-9377-5585b7c946de'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x03'
function='0x0'/>
</interface>
<channel type='unix'>
<source mode='bind'
path='/var/lib/libvirt/qemu/channels/6489cc72-8ca5-4943-901c-bbd405bdac68.ovirt-guest-agent.0'/>
<target type='virtio' name='ovirt-guest-agent.0'
state='disconnected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0'
port='1'/>
</channel>
<channel type='unix'>
<source mode='bind'
path='/var/lib/libvirt/qemu/channels/6489cc72-8ca5-4943-901c-bbd405bdac68.org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0'
state='disconnected'/>
<alias name='channel1'/>
<address type='virtio-serial' controller='0' bus='0'
port='2'/>
</channel>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'
state='disconnected'/>
<alias name='channel2'/>
<address type='virtio-serial' controller='0' bus='0'
port='3'/>
</channel>
<input type='mouse' bus='ps2'>
<alias name='input0'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input1'/>
</input>
<graphics type='vnc' port='5908' autoport='yes'
listen='10.86.19.1'
keymap='en-us' passwdValidTo='1970-01-01T00:00:01'>
<listen type='network' address='10.86.19.1'
network='vdsm-ovirtmgmt'/>
</graphics>
<graphics type='spice' port='5909' tlsPort='5910'
autoport='yes'
listen='10.86.19.1' passwdValidTo='1970-01-01T00:00:01'>
<listen type='network' address='10.86.19.1'
network='vdsm-ovirtmgmt'/>
<channel name='main' mode='secure'/>
<channel name='display' mode='secure'/>
<channel name='inputs' mode='secure'/>
<channel name='cursor' mode='secure'/>
<channel name='playback' mode='secure'/>
<channel name='record' mode='secure'/>
<channel name='smartcard' mode='secure'/>
<channel name='usbredir' mode='secure'/>
</graphics>
<sound model='ich6'>
<alias name='ua-46ed1231-ff6f-4970-9d3c-2fea8436f38b'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x04'
function='0x0'/>
</sound>
<video>
<model type='qxl' ram='65536' vram='32768'
vgamem='16384'
heads='1' primary='yes'/>
<alias name='ua-beea631e-9e14-46be-aaec-4e24e06bcd25'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x02'
function='0x0'/>
</video>
<memballoon model='virtio'>
<stats period='5'/>
<alias name='ua-0a488d46-765e-40d6-965a-5568dcd3fda0'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x08'
function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<alias name='ua-c8b3ba43-38d6-40a2-abf0-45cbf645b498'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x09'
function='0x0'/>
</rng>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c510,c786</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c510,c786</imagelabel>
</seclabel>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
</domain>
}, log id: 7b96aaf6
<name>testpool-1</name>
<uuid>6489cc72-8ca5-4943-901c-bbd405bdac68</uuid>
<metadata>
<ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion>
<ovirt-vm:destroy_on_reboot
type="bool">False</ovirt-vm:destroy_on_reboot>
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
<ovirt-vm:memGuaranteedSize
type="int">4096</ovirt-vm:memGuaranteedSize>
<ovirt-vm:minGuaranteedMemoryMb
type="int">4096</ovirt-vm:minGuaranteedMemoryMb>
<ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
<ovirt-vm:startTime
type="float">1558423004.25</ovirt-vm:startTime>
<ovirt-vm:device mac_address="00:1a:4a:16:01:85">
<ovirt-vm:network>ea-students</ovirt-vm:network>
<ovirt-vm:custom>
<ovirt-vm:queues>4</ovirt-vm:queues>
</ovirt-vm:custom>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:domainID>4194c70d-5b7e-441f-af6b-7d8754e89572</ovirt-vm:domainID>
<ovirt-vm:guestName>/dev/sda</ovirt-vm:guestName>
<ovirt-vm:imageID>9651890f-9b0c-4857-abae-77b8b543a897</ovirt-vm:imageID>
<ovirt-vm:poolID>5982eba0-03b9-0281-0363-00000000037a</ovirt-vm:poolID>
<ovirt-vm:volumeID>8690340a-8d4d-4d04-a8ab-b18fe6cbb78b</ovirt-vm:volumeID>
<ovirt-vm:volumeChain>
<ovirt-vm:volumeChainNode>
<ovirt-vm:domainID>4194c70d-5b7e-441f-af6b-7d8754e89572</ovirt-vm:domainID>
<ovirt-vm:imageID>9651890f-9b0c-4857-abae-77b8b543a897</ovirt-vm:imageID>
<ovirt-vm:leaseOffset
type="int">119537664</ovirt-vm:leaseOffset>
<ovirt-vm:leasePath>/dev/4194c70d-5b7e-441f-af6b-7d8754e89572/leases</ovirt-vm:leasePath>
<ovirt-vm:path>/rhev/data-center/mnt/blockSD/4194c70d-5b7e-441f-af6b-7d8754e89572/images/9651890f-9b0c-4857-abae-77b8b543a897/5f606a1e-3377-45b7-91d0-6398f7694c45</ovirt-vm:path>
<ovirt-vm:volumeID>5f606a1e-3377-45b7-91d0-6398f7694c45</ovirt-vm:volumeID>
</ovirt-vm:volumeChainNode>
<ovirt-vm:volumeChainNode>
<ovirt-vm:domainID>4194c70d-5b7e-441f-af6b-7d8754e89572</ovirt-vm:domainID>
<ovirt-vm:imageID>9651890f-9b0c-4857-abae-77b8b543a897</ovirt-vm:imageID>
<ovirt-vm:leaseOffset
type="int">108003328</ovirt-vm:leaseOffset>
<ovirt-vm:leasePath>/dev/4194c70d-5b7e-441f-af6b-7d8754e89572/leases</ovirt-vm:leasePath>
<ovirt-vm:path>/rhev/data-center/mnt/blockSD/4194c70d-5b7e-441f-af6b-7d8754e89572/images/9651890f-9b0c-4857-abae-77b8b543a897/8690340a-8d4d-4d04-a8ab-b18fe6cbb78b</ovirt-vm:path>
<ovirt-vm:volumeID>8690340a-8d4d-4d04-a8ab-b18fe6cbb78b</ovirt-vm:volumeID>
</ovirt-vm:volumeChainNode>
</ovirt-vm:volumeChain>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="hdc"/>
</ovirt-vm:vm>
</metadata>
<maxMemory slots='16' unit='KiB'>33554432</maxMemory>
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<vcpu placement='static' current='4'>16</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>oVirt</entry>
<entry name='product'>oVirt Node</entry>
<entry name='version'>7-6.1810.2.el7.centos</entry>
<entry
name='serial'>4C4C4544-004D-4710-8046-B8C04F584A32</entry>
<entry name='uuid'>6489cc72-8ca5-4943-901c-bbd405bdac68</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64'
machine='pc-i440fx-rhel7.6.0'>hvm</type>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Haswell-noTSX</model>
<topology sockets='16' cores='1' threads='1'/>
<feature policy='require' name='spec-ctrl'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='vme'/>
<feature policy='require' name='f16c'/>
<feature policy='require' name='rdrand'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='xsaveopt'/>
<feature policy='require' name='abm'/>
<numa>
<cell id='0' cpus='0-3' memory='8388608'
unit='KiB'/>
</numa>
</cpu>
<clock offset='variable' adjustment='0' basis='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' error_policy='report'/>
<source startupPolicy='optional'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ua-e8f9d30f-469f-4d1a-966b-720dad429620'/>
<address type='drive' controller='0' bus='1'
target='0' unit='0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none'
error_policy='stop'
io='native'/>
<source
dev='/rhev/data-center/mnt/blockSD/4194c70d-5b7e-441f-af6b-7d8754e89572/images/9651890f-9b0c-4857-abae-77b8b543a897/8690340a-8d4d-4d04-a8ab-b18fe6cbb78b'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore type='block' index='1'>
<format type='raw'/>
<source
dev='/rhev/data-center/mnt/blockSD/4194c70d-5b7e-441f-af6b-7d8754e89572/images/9651890f-9b0c-4857-abae-77b8b543a897/5f606a1e-3377-45b7-91d0-6398f7694c45'/>
<backingStore/>
</backingStore>
<target dev='sda' bus='scsi'/>
<serial>9651890f-9b0c-4857-abae-77b8b543a897</serial>
<boot order='1'/>
<alias name='ua-9651890f-9b0c-4857-abae-77b8b543a897'/>
<address type='drive' controller='0' bus='0'
target='0' unit='0'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
<alias name='ua-2374b7bb-810a-47d9-8be7-97655a4cfb0b'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x05'
function='0x0'/>
</controller>
<controller type='virtio-serial' index='0' ports='16'>
<alias name='ua-27074498-9c11-4d97-9711-778ae923b9b0'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x07'
function='0x0'/>
</controller>
<controller type='usb' index='0' model='nec-xhci'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x06'
function='0x0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x01'
function='0x1'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<interface type='bridge'>
<mac address='00:1a:4a:16:01:85'/>
<source bridge='ea-students'/>
<target dev='vnet3'/>
<model type='virtio'/>
<driver name='vhost' queues='4'/>
<filterref filter='vdsm-no-mac-spoofing'/>
<link state='up'/>
<mtu size='1500'/>
<alias name='ua-25b08af8-a164-46d3-9377-5585b7c946de'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x03'
function='0x0'/>
</interface>
<channel type='unix'>
<source mode='bind'
path='/var/lib/libvirt/qemu/channels/6489cc72-8ca5-4943-901c-bbd405bdac68.ovirt-guest-agent.0'/>
<target type='virtio' name='ovirt-guest-agent.0'
state='connected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0'
port='1'/>
</channel>
<channel type='unix'>
<source mode='bind'
path='/var/lib/libvirt/qemu/channels/6489cc72-8ca5-4943-901c-bbd405bdac68.org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0'
state='connected'/>
<alias name='channel1'/>
<address type='virtio-serial' controller='0' bus='0'
port='2'/>
</channel>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'
state='connected'/>
<alias name='channel2'/>
<address type='virtio-serial' controller='0' bus='0'
port='3'/>
</channel>
<input type='mouse' bus='ps2'>
<alias name='input0'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input1'/>
</input>
<graphics type='vnc' port='5908' autoport='yes'
listen='10.86.19.1'
keymap='en-us' passwdValidTo='1970-01-01T00:00:01'>
<listen type='network' address='10.86.19.1'
network='vdsm-ovirtmgmt'/>
</graphics>
<graphics type='spice' port='5909' tlsPort='5910'
autoport='yes'
listen='10.86.19.1' passwdValidTo='2019-05-21T07:19:09'>
<listen type='network' address='10.86.19.1'
network='vdsm-ovirtmgmt'/>
<channel name='main' mode='secure'/>
<channel name='display' mode='secure'/>
<channel name='inputs' mode='secure'/>
<channel name='cursor' mode='secure'/>
<channel name='playback' mode='secure'/>
<channel name='record' mode='secure'/>
<channel name='smartcard' mode='secure'/>
<channel name='usbredir' mode='secure'/>
</graphics>
<sound model='ich6'>
<alias name='ua-46ed1231-ff6f-4970-9d3c-2fea8436f38b'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x04'
function='0x0'/>
</sound>
<video>
<model type='qxl' ram='65536' vram='32768'
vgamem='16384'
heads='1' primary='yes'/>
<alias name='ua-beea631e-9e14-46be-aaec-4e24e06bcd25'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x02'
function='0x0'/>
</video>
<memballoon model='virtio'>
<stats period='5'/>
<alias name='ua-0a488d46-765e-40d6-965a-5568dcd3fda0'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x08'
function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<alias name='ua-c8b3ba43-38d6-40a2-abf0-45cbf645b498'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x09'
function='0x0'/>
</rng>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c510,c786</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c510,c786</imagelabel>
</seclabel>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
</domain>
}, log id: 143dc893
2019-05-21 09:17:42,799+02 WARN
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default
task-9) [20af568a-9128-41f6-ac2c-0b636b790655] Validation of action
'AttachUserToVmFromPoolAndRun' failed for user
u0045469(a)esat.kuleuven.be-authz. Reasons:
VAR__ACTION__ALLOCATE_AND_RUN,VAR__TYPE__VM_FROM_VM_POOL,ACTION_TYPE_FAILED_NO_AVAILABLE_POOL_VMS
2019-05-21 09:17:42,809+02 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
task-9) [] Operation Failed: [Cannot allocate and run VM from VM-Pool.
There are no available VMs in the VM-Pool.]
On 5/17/19 2:52 PM, Rik Theys wrote:
Hi,
Things are going from bad to worse, it seems.
I've created a new VM to be used as a template and installed it with
CentOS 7. I've created a template of this VM and created a new pool
based on this template.
When I try to boot one of the VMs from the pool, it fails and logs
the following error:
2019-05-17 14:48:01,709+0200 ERROR (vm/f7da02e4) [virt.vm]
(vmId='f7da02e4-725c-4c6c-bdd4-9f2cae8b10e4') The vm start process
failed (vm:937)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866,
in _startUnderlyingVm
self._run()
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2861,
in _run
dom = self._connection.defineXML(self._domain.xml)
File
"/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
line 131, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/common/function.py",
line 94, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3743, in
defineXML
if ret is None:raise libvirtError('virDomainDefineXML() failed',
conn=self)
libvirtError: XML error: requested USB port 3 not present on USB bus 0
2019-05-17 14:48:01,709+0200 INFO (vm/f7da02e4) [virt.vm]
(vmId='f7da02e4-725c-4c6c-bdd4-9f2cae8b10e4') Changed state to Down:
XML error: requested USB port 3 not present on USB bus 0 (code=1)
(vm:1675)
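For the USB-port error, it may help to compare the USB ports the generated domain XML requests against the USB controllers it actually defines. A hypothetical helper (not part of oVirt; run it against the XML from vdsm.log or `virsh dumpxml` output):

```python
import xml.etree.ElementTree as ET

def usb_port_usage(domain_xml):
    """Summarize USB controllers vs. devices that request a USB port."""
    root = ET.fromstring(domain_xml)
    # (index, model) for every USB controller in the definition
    controllers = [(c.get("index"), c.get("model"))
                   for c in root.iter("controller") if c.get("type") == "usb"]
    # (device tag, bus, port) for every device addressed on a USB bus
    requested = [(dev.tag, addr.get("bus"), addr.get("port"))
                 for dev in root.find("devices")
                 for addr in dev.findall("address")
                 if addr.get("type") == "usb"]
    return controllers, requested
```

If a device requests a port beyond what the controller model provides (or no matching USB controller exists), libvirt refuses the definition with an error like the "requested USB port 3 not present on USB bus 0" above.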
The strange thing is that this error was not present when I created
the initial master VM.
I get similar errors when I select Q35-type VMs instead of the default.
Did your test pool have VMs with USB enabled?
Regards,
Rik
On 5/17/19 10:48 AM, Rik Theys wrote:
>
> Hi Lucie,
>
> On 5/16/19 6:27 PM, Lucie Leistnerova wrote:
>>
>> Hi Rik,
>>
>> On 5/14/19 2:21 PM, Rik Theys wrote:
>>>
>>> Hi,
>>>
>>> It seems VM pools are completely broken since our upgrade to 4.3.
>>> Is anybody else also experiencing this issue?
>>>
>> I've tried to reproduce this issue, and I can use pool VMs as
>> expected with no problem. I've tested a clean install and also an
>> upgrade from 4.2.8.7.
>> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
>> ovirt-web-ui-1.5.2-1.el7ev.noarch
> That is strange. I will try to create a new pool to verify if I also
> have the problem with the new pool. Currently we are having this
> issue with two different pools. Both pools were created in August or
> September of last year. I believe it was on 4.2 but could still have
> been 4.1.
>>>
>>> Only a single instance from a pool can be used. Afterwards the pool
>>> becomes unusable due to a lock not being released. Once
>>> ovirt-engine is restarted, another (single) VM from a pool can be used.
>>>
>> What users are running the VMs? What are the permissions?
>
> The users take VMs from the pools via the user portal. They are all
> members of a group that has the UserRole permission on the pools.
>
>> Is each VM run by a different user? Were some VMs already running
>> before the upgrade?
>
> A user can take at most one VM from each pool, so it's possible a user
> has two VMs running (but not from the same pool). It doesn't matter
> which user takes a VM from the pool: once one user has taken a VM
> from the pool, no other user can take one. If the user that was able
> to take a VM powers it down and tries to run a new VM, that also fails.
>
> During the upgrade of the host, no VMs were running.
>
>> Please provide exact steps.
>
> 1. ovirt-engine is restarted.
>
> 2. User A takes a VM from the pool and can work.
>
> 3. User B can not take a VM from that pool.
>
> 4. User A powers off the VM it was using. Once the VM is down, (s)he
> tries to take a new VM, which also fails now.
>
> It seems the VM pool is locked when the first user takes a VM and the
> lock is never released.
>
> In our case, there are no prestarted VMs. I can try to see if that
> makes a difference.
>
>
> Are there any more steps I can take to debug this issue regarding the
> locks?
>
> Regards,
>
> Rik
>
>>> I've added my findings to bug 1462236, but I'm no longer sure the
>>> issue is the same as the one initially reported.
>>>
>>> When the first VM of a pool is started:
>>>
>>> 2019-05-14 13:26:46,058+02 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] START, IsVmDuringInitiatingVDSCommand(
IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
log id: 2fb4f7f5
>>> 2019-05-14 13:26:46,058+02 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 2fb4f7f5
>>> 2019-05-14 13:26:46,208+02 INFO [org.ovirt.engine.core.bll.VmPoolHandler]
(default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to object
'EngineLock:{exclusiveLocks='[d8a99676-d520-425e-9974-1b1efe6da8a5=VM]',
sharedLocks=''}'
>>>
>>> -> it has acquired a lock (lock1)
>>>
>>> 2019-05-14 13:26:46,247+02 INFO
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to object
'EngineLock:{exclusiveLocks='[a5bed59c-d2fe-4fe4-bff7-52efe089ebd6=USER_VM_POOL]',
sharedLocks=''}'
>>>
>>> -> it has acquired another lock (lock2)
>>>
>>> 2019-05-14 13:26:46,352+02 INFO
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] Running command:
AttachUserToVmFromPoolAndRunCommand internal: false. Entities affected : ID:
4c622213-e5f4-4032-8639-643174b698cc Type: VmPoolAction group VM_POOL_BASIC_OPERATIONS
with role type USER
>>> 2019-05-14 13:26:46,393+02 INFO
[org.ovirt.engine.core.bll.AddPermissionCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] Running command: AddPermissionCommand internal:
true. Entities affected : ID: d8a99676-d520-425e-9974-1b1efe6da8a5 Type: VMAction group
MANIPULATE_PERMISSIONS with role type USER
>>> 2019-05-14 13:26:46,433+02 INFO
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] Succeeded giving user
'a5bed59c-d2fe-4fe4-bff7-52efe089ebd6' permission to Vm
'd8a99676-d520-425e-9974-1b1efe6da8a5'
>>> 2019-05-14 13:26:46,608+02 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] START, IsVmDuringInitiatingVDSCommand(
IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
log id: 67acc561
>>> 2019-05-14 13:26:46,608+02 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 67acc561
>>> 2019-05-14 13:26:46,719+02 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Running command: RunVmCommand
internal: true. Entities affected : ID: d8a99676-d520-425e-9974-1b1efe6da8a5 Type:
VMAction group RUN_VM with role type USER
>>> 2019-05-14 13:26:46,791+02 INFO
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] START, UpdateVmDynamicDataVDSCommand(
UpdateVmDynamicDataVDSCommandParameters:{hostId='null',
vmId='d8a99676-d520-425e-9974-1b1efe6da8a5',
vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@6db8c94d'}),
log id: 2c110e4
>>> 2019-05-14 13:26:46,795+02 INFO
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, UpdateVmDynamicDataVDSCommand, return: ,
log id: 2c110e4
>>> 2019-05-14 13:26:46,804+02 INFO
[org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] START, CreateVDSCommand(
CreateVDSCommandParameters:{hostId='eec7ec2b-cae1-4bb9-b933-4dff47a70bdb',
vmId='d8a99676-d520-425e-9974-1b1efe6da8a5', vm='VM [stud-c7-1]'}), log
id: 71d599f2
>>> 2019-05-14 13:26:46,809+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] START, CreateBrokerVDSCommand(HostName = studvirt1,
CreateVDSCommandParameters:{hostId='eec7ec2b-cae1-4bb9-b933-4dff47a70bdb',
vmId='d8a99676-d520-425e-9974-1b1efe6da8a5', vm='VM [stud-c7-1]'}), log
id: 3aa6b5ff
>>> 2019-05-14 13:26:46,836+02 INFO
[org.ovirt.engine.core.vdsbroker.builder.vminfo.VmInfoBuildUtils] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] Kernel FIPS - Guid:
eec7ec2b-cae1-4bb9-b933-4dff47a70bdb fips: false
>>> 2019-05-14 13:26:46,903+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] VM <?xml version="1.0"
encoding="UTF-8"?><domain type="kvm"
xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0"
xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
>>> [domain xml stripped]
>>> 2019-05-14 13:26:46,928+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, CreateBrokerVDSCommand, return: , log id:
3aa6b5ff
>>> 2019-05-14 13:26:46,932+02 INFO
[org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, CreateVDSCommand, return: WaitForLaunch,
log id: 71d599f2
>>> 2019-05-14 13:26:46,932+02 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock freed to object
'EngineLock:{exclusiveLocks='[a5bed59c-d2fe-4fe4-bff7-52efe089ebd6=USER_VM_POOL]',
sharedLocks=''}'
>>>
>>> -> it has released lock2
>>>
>>> 2019-05-14 13:26:47,004+02 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] EVENT_ID: VDS_INITIATED_RUN_VM(506), Trying to
restart VM stud-c7-1 on Host studvirt1
>>> 2019-05-14 13:26:47,094+02 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] EVENT_ID: USER_ATTACH_USER_TO_VM_FROM_POOL(316),
Attaching User u0045469 to VM stud-c7-1 in VM Pool stud-c7-? was initiated by
u0045469(a)esat.kuleuven.be-authz.
>>> 2019-05-14 13:26:47,098+02 WARN
[org.ovirt.engine.core.bll.lock.InMemoryLockManager] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] Trying to release exclusive lock which does not
exist, lock key: 'a5bed59c-d2fe-4fe4-bff7-52efe089ebd6USER_VM_POOL'
>>>
>>> -> it's trying to release the same lock2 as above and failing
>>>
>>> 2019-05-14 13:26:47,098+02 INFO
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default task-6)
[e3c5745c-e593-4aed-ba67-b173808140e8] Lock freed to object
'EngineLock:{exclusiveLocks='[a5bed59c-d2fe-4fe4-bff7-52efe089ebd6=USER_VM_POOL]',
sharedLocks=''}'
>>>
>>> -> now it's indicating that it released/freed the lock (lock2)
>>>
>>> 2019-05-14 13:26:48,518+02 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-3) [e3c5745c-e593-4aed-ba67-b173808140e8]
Command 'AttachUserToVmFromPoolAndRun' id:
'0148c91d-b053-4dc9-960c-f10bf0d3f343' child commands
'[0470802d-09fa-4579-b95b-3d9603feff7b, 47dbfc58-3bae-4229-96eb-d1fc08911237]'
executions were completed, status 'SUCCEEDED'
>>> 2019-05-14 13:26:49,584+02 INFO
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-14) [e3c5745c-e593-4aed-ba67-b173808140e8]
Ending command 'org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand'
successfully.
>>> 2019-05-14 13:26:49,650+02 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-14) [e3c5745c-e593-4aed-ba67-b173808140e8]
EVENT_ID: USER_ATTACH_USER_TO_VM_FROM_POOL_FINISHED_SUCCESS(318), User u0045469
successfully attached to VM stud-c7-1 in VM Pool stud-c7-?.
>>> 2019-05-14 13:26:50,584+02 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ForkJoinPool-1-worker-2) [] EVENT_ID: VM_CONSOLE_DISCONNECTED(168), User <UNKNOWN>
got disconnected from VM stud-c7-1.
>>> 2019-05-14 13:26:50,585+02 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-2) [] VM
'd8a99676-d520-425e-9974-1b1efe6da8a5'(stud-c7-1) moved from
'WaitForLaunch' --> 'PoweringUp'
>>>
>>>
>>> The first lock (the VM lock acquired by VmPoolHandler) is never released. It
>>> seems this lock should also be released. Maybe it is this lock that should be
>>> released the second time, instead of the second lock being released twice?
>>>
>>>
>>>
>>> When we then try to launch another instance from the pool, it fails:
>>>
>>> 2019-05-14 13:49:32,656+02 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-11)
[55cc0796-4f53-49cd-8739-3b7e7dd2d95b] START, IsVmDuringInitiatingVDSCommand(
IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
log id: 7db2f4fc
>>> 2019-05-14 13:49:32,656+02 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-11)
[55cc0796-4f53-49cd-8739-3b7e7dd2d95b] FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 7db2f4fc
>>> 2019-05-14 13:49:32,688+02 INFO [org.ovirt.engine.core.bll.VmPoolHandler]
(default task-11) [55cc0796-4f53-49cd-8739-3b7e7dd2d95b] Failed to Acquire Lock to object
'EngineLock:{exclusiveLocks='[d8a99676-d520-425e-9974-1b1efe6da8a5=VM]',
sharedLocks=''}'
>>> 2019-05-14 13:49:32,700+02 INFO
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default task-11)
[55cc0796-4f53-49cd-8739-3b7e7dd2d95b] Lock Acquired to object
'EngineLock:{exclusiveLocks='[a5bed59c-d2fe-4fe4-bff7-52efe089ebd6=USER_VM_POOL]',
sharedLocks=''}'
>>> 2019-05-14 13:49:32,700+02 WARN
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default task-11)
[55cc0796-4f53-49cd-8739-3b7e7dd2d95b] Validation of action
'AttachUserToVmFromPoolAndRun' failed for user u0045469(a)esat.kuleuven.be-authz. Reasons:
VAR__ACTION__ALLOCATE_AND_RUN,VAR__TYPE__VM_FROM_VM_POOL,ACTION_TYPE_FAILED_NO_AVAILABLE_POOL_VMS
>>> 2019-05-14 13:49:32,700+02 INFO
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default task-11)
[55cc0796-4f53-49cd-8739-3b7e7dd2d95b] Lock freed to object
'EngineLock:{exclusiveLocks='[a5bed59c-d2fe-4fe4-bff7-52efe089ebd6=USER_VM_POOL]',
sharedLocks=''}'
>>> 2019-05-14 13:49:32,706+02 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-11) []
Operation Failed: [Cannot allocate and run VM from VM-Pool. There are no available VMs in
the VM-Pool.]
>>>
>>>
>>> Regards,
>>> Rik
>>>
>>>
>>> --
>>> Rik Theys
>>> System Engineer
>>> KU Leuven - Dept. Elektrotechniek (ESAT)
>>> Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
>>> +32(0)16/32.11.07
>>> ----------------------------------------------------------------
>>> <<Any errors in spelling, tact or fact are transmission errors>>
>>>
>>> _______________________________________________
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>> Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3IASEV7U7DI...
>> Thank you
>> Lucie
>> --
>> Lucie Leistnerova
>> Quality Engineer, QE Cloud, RHVM
>> Red Hat EMEA
>>
>> IRC: lleistne @ #rhev-qe
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>