
----- Original Message -----
From: "Alessandro Bianchi" <a.bianchi@skynet.it>
To: users@ovirt.org
Sent: Wednesday, October 30, 2013 12:34:21 PM
Subject: [Users] Gluster VM stuck in "waiting for launch" state
Hi everyone
I've set up Gluster storage with two replicated bricks.
The DC is up, and I created a VM to test the Gluster storage.
If I start the VM WITHOUT any disk attached (only one virtual DVD), it starts fine.
If I attach a thin-provisioned 30 GB disk from the Gluster domain, the VM gets stuck in the "waiting for launch" state.
I see no special activity on the Gluster servers (they serve several other shares with no trouble at all, and even the ISO domain is NFS on a locally mounted Gluster volume and works fine).
I've double-checked all the prerequisites and they look fine (Fedora 19; Gluster set up as insecure in both glusterd.vol and the volume options; uid/gid/insecure).
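For anyone following along, the "insecure" prerequisites for oVirt's libgfapi access usually look roughly like the sketch below (the volume name `vms` is taken from the logs; uid/gid 36 is the standard vdsm/kvm owner on oVirt hosts — adjust for your setup):

```shell
# In /etc/glusterfs/glusterd.vol on each Gluster server, make sure you have:
#   option rpc-auth-allow-insecure on
# then restart glusterd.

# Volume-level options, run once on any Gluster server:
gluster volume set vms server.allow-insecure on
gluster volume set vms storage.owner-uid 36   # vdsm user on oVirt hosts
gluster volume set vms storage.owner-gid 36   # kvm group on oVirt hosts
```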
Am I doing something wrong?
I'm even unable to stop the VM from the engine GUI.
Any advice?
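In case it helps: when a VM is stuck in "waiting for launch" and the engine won't stop it, you can usually inspect and force-stop it directly on the host through vdsm. The vmId below is taken from the logs further down; `-s` assumes vdsm is running with SSL (the default):

```shell
# On the host running the VM: list the VMs vdsm knows about and their status
vdsClient -s 0 list table

# Force-stop the stuck VM by its id
vdsClient -s 0 destroy 200dfb05-461e-49d9-95a2-c0a7c7ced669
```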
What is the status of the host that runs the VM? Is it up?
Thank you
Best regards
CentOS_30.log
2013-10-30 10:17:48.094+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name CentOS_30 -S -machine pc-1.0,accel=kvm,usb=off -cpu Westmere -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid 200dfb05-461e-49d9-95a2-c0a7c7ced669 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-4,serial=316140A0-D7DA-11DD-AF75-08606ED667F7,uuid=200dfb05-461e-49d9-95a2-c0a7c7ced669 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/CentOS_30.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-10-30T10:17:47,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=gluster://172.16.0.100/vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/images/b73a79fe-401b-434c-b023-6de4c3eb100d/202adc86-7857-40eb-8ca6-7cec3ef0d2d6,if=none,id=drive-virtio-disk0,format=raw,serial=b73a79fe-401b-434c-b023-6de4c3eb100d,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:00:00:33,bus=pci.0,addr=0x3,bootindex=3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/200dfb05-461e-49d9-95a2-c0a7c7ced669.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/200dfb05-461e-49d9-95a2-c0a7c7ced669.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci
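Note that qemu opens the disk natively over libgfapi (the `-drive file=gluster://…` in the command line above), bypassing the FUSE mount — which is exactly the access path that needs the insecure-port settings. One way to probe that path outside of oVirt is to point qemu-img at the same URL (a sketch; run it on the host, URL copied from the qemu command line above):

```shell
# Should print the image format and size if the gluster:// path is reachable;
# it hangs or errors out if qemu cannot talk to the brick ports.
sudo -u vdsm qemu-img info gluster://172.16.0.100/vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/images/b73a79fe-401b-434c-b023-6de4c3eb100d/202adc86-7857-40eb-8ca6-7cec3ef0d2d6
```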
vdsm.log
Thread-1239::DEBUG::2013-10-30 11:25:03,420::task::974::TaskManager.Task::(_decref) Task=`d891f3d3-070d-403f-9050-f10651a7fc47`::ref 0 aborting False Thread-242::DEBUG::2013-10-30 11:25:04,982::domainMonitor::178::Storage.DomainMonitorThread::(_monitorDomain) Refreshing domain 21e3d67f-fdec-4f87-b09e-53af994a6f87 Thread-242::DEBUG::2013-10-30 11:25:04,992::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87 Thread-242::DEBUG::2013-10-30 11:25:04,994::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend Thread-242::DEBUG::2013-10-30 11:25:05,005::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=vms', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1', 'POOL_DESCRIPTION=GlusterSD', 'POOL_DOMAINS=21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'POOL_SPM_ID=1', 'POOL_SPM_LVER=3', 'POOL_UUID=e3cb7694-47f7-4eac-9201-922e381a25dc', 'REMOTE_PATH=172.16.0.100:vms', 'ROLE=Master', 'SDUUID=21e3d67f-fdec-4f87-b09e-53af994a6f87', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2d0dc3836063d45dd1d8046aaf00bce05c2455ea'] Thread-242::DEBUG::2013-10-30 11:25:05,008::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images [] Thread-242:: INFO::2013-10-30 11:25:05,008::sd::374::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 21e3d67f-fdec-4f87-b09e-53af994a6f87_imageNS already registered Thread-242:: INFO::2013-10-30 11:25:05,008::sd::382::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 21e3d67f-fdec-4f87-b09e-53af994a6f87_volumeNS already registered Thread-242::DEBUG::2013-10-30 11:25:05,020::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct 
if=/rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/dom_md/metadata bs=4096 count=1' (cwd None) Thread-242::DEBUG::2013-10-30 11:25:05,026::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n497 bytes (497 B) copied, 0.000280701 s, 1.8 MB/s\n'; <rc> = 0 Thread-243::DEBUG::2013-10-30 11:25:07,721::domainMonitor::178::Storage.DomainMonitorThread::(_monitorDomain) Refreshing domain 8e56a914-3395-48ad-ab4c-c2ca7f9956b4 Thread-243::DEBUG::2013-10-30 11:25:07,730::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4 Thread-243::DEBUG::2013-10-30 11:25:07,731::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend Thread-243::DEBUG::2013-10-30 11:25:07,740::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Iso', 'DESCRIPTION=ISO', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=0', 'POOL_UUID=01b79582-75e3-436d-bade-db653207620b,e3cb7694-47f7-4eac-9201-922e381a25dc', 'REMOTE_PATH=172.16.0.5:/home/iso2', 'ROLE=Regular', 'SDUUID=8e56a914-3395-48ad-ab4c-c2ca7f9956b4', 'TYPE=NFS', 'VERSION=0', '_SHA_CKSUM=b0d102325f279bb8efa68c594cbace0b11c66783'] Thread-243::DEBUG::2013-10-30 11:25:07,741::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images [] Thread-243:: INFO::2013-10-30 11:25:07,741::sd::374::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 8e56a914-3395-48ad-ab4c-c2ca7f9956b4_imageNS already registered Thread-243:: INFO::2013-10-30 11:25:07,741::sd::382::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 8e56a914-3395-48ad-ab4c-c2ca7f9956b4_volumeNS already registered Thread-243::DEBUG::2013-10-30 
11:25:07,757::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/dom_md/metadata bs=4096 count=1' (cwd None) Thread-243::DEBUG::2013-10-30 11:25:07,760::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n373 bytes (373 B) copied, 0.000128829 s, 2.9 MB/s\n'; <rc> = 0 Thread-1246::DEBUG::2013-10-30 11:25:09,944::task::579::TaskManager.Task::(_updateState) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::moving from state init -> state preparing Thread-1246:: INFO::2013-10-30 11:25:09,944::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-1246:: INFO::2013-10-30 11:25:09,945::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'delay': '0.000280701', 'lastCheck': '4.9', 'code': 0, 'valid': True, 'version': 3}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'delay': '0.000128829', 'lastCheck': '2.2', 'code': 0, 'valid': True, 'version': 0}} Thread-1246::DEBUG::2013-10-30 11:25:09,945::task::1168::TaskManager.Task::(prepare) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::finished: {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'delay': '0.000280701', 'lastCheck': '4.9', 'code': 0, 'valid': True, 'version': 3}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'delay': '0.000128829', 'lastCheck': '2.2', 'code': 0, 'valid': True, 'version': 0}} Thread-1246::DEBUG::2013-10-30 11:25:09,945::task::579::TaskManager.Task::(_updateState) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::moving from state preparing -> state finished Thread-1246::DEBUG::2013-10-30 11:25:09,945::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-1246::DEBUG::2013-10-30 11:25:09,945::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-1246::DEBUG::2013-10-30 
11:25:09,945::task::974::TaskManager.Task::(_decref) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::ref 0 aborting False Thread-1249::DEBUG::2013-10-30 11:25:13,507::BindingXMLRPC::177::vds::(wrapper) client [172.16.0.5] Thread-1249::DEBUG::2013-10-30 11:25:13,507::task::579::TaskManager.Task::(_updateState) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::moving from state init -> state preparing Thread-1249:: INFO::2013-10-30 11:25:13,507::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='e3cb7694-47f7-4eac-9201-922e381a25dc', options=None) Thread-1249:: INFO::2013-10-30 11:25:13,508::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 3}} Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::1168::TaskManager.Task::(prepare) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::finished: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 3}} Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::579::TaskManager.Task::(_updateState) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::moving from state preparing -> state finished Thread-1249::DEBUG::2013-10-30 11:25:13,508::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-1249::DEBUG::2013-10-30 11:25:13,508::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::974::TaskManager.Task::(_decref) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::ref 0 aborting False Thread-1250::DEBUG::2013-10-30 11:25:13,510::BindingXMLRPC::177::vds::(wrapper) client [172.16.0.5] Thread-1250::DEBUG::2013-10-30 11:25:13,510::task::579::TaskManager.Task::(_updateState) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::moving from state init -> state preparing Thread-1250:: INFO::2013-10-30 11:25:13,511::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='e3cb7694-47f7-4eac-9201-922e381a25dc', 
options=None) Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.e3cb7694-47f7-4eac-9201-922e381a25dc`ReqID=`42f323ef-aa03-49e6-893f-0e4ff0764b6b`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2560' at 'getStoragePoolInfo' Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' for lock type 'shared' Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' is free. Now locking as 'shared' (1 active user) Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.e3cb7694-47f7-4eac-9201-922e381a25dc`ReqID=`42f323ef-aa03-49e6-893f-0e4ff0764b6b`::Granted request Thread-1250::DEBUG::2013-10-30 11:25:13,512::task::811::TaskManager.Task::(resourceAcquired) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::_resourcesAcquired: Storage.e3cb7694-47f7-4eac-9201-922e381a25dc (shared) Thread-1250::DEBUG::2013-10-30 11:25:13,512::task::974::TaskManager.Task::(_decref) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::ref 1 aborting False Thread-1250:: INFO::2013-10-30 11:25:13,517::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid': '21e3d67f-fdec-4f87-b09e-53af994a6f87', 'name': 'GlusterSD', 'version': '3', 'domains': '21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111', 'type': 'GLUSTERFS', 'master_ver': 1, 'lver': 3}, 'dominfo': {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'status': 'Active', 'diskfree': '239969632256', 'alerts': [], 'version': 3, 
'disktotal': '492120440832'}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'status': 'Active', 'diskfree': '201653747712', 'alerts': [], 'version': 0, 'disktotal': '246783410176'}}} Thread-1250::DEBUG::2013-10-30 11:25:13,517::task::1168::TaskManager.Task::(prepare) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::finished: {'info': {'spm_id': 1, 'master_uuid': '21e3d67f-fdec-4f87-b09e-53af994a6f87', 'name': 'GlusterSD', 'version': '3', 'domains': '21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111', 'type': 'GLUSTERFS', 'master_ver': 1, 'lver': 3}, 'dominfo': {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'status': 'Active', 'diskfree': '239969632256', 'alerts': [], 'version': 3, 'disktotal': '492120440832'}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'status': 'Active', 'diskfree': '201653747712', 'alerts': [], 'version': 0, 'disktotal': '246783410176'}}} Thread-1250::DEBUG::2013-10-30 11:25:13,517::task::579::TaskManager.Task::(_updateState) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::moving from state preparing -> state finished Thread-1250::DEBUG::2013-10-30 11:25:13,517::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc': < ResourceRef 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc', isValid: 'True' obj: 'None'>} Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' (0 
active users) Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' is free, finding out if anyone is waiting for it. Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc', Clearing records. Thread-1250::DEBUG::2013-10-30 11:25:13,518::task::974::TaskManager.Task::(_decref) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::ref 0 aborting False Thread-242::DEBUG::2013-10-30 11:25:15,046::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/dom_md/metadata bs=4096 count=1' (cwd None) Thread-242::DEBUG::2013-10-30 11:25:15,051::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n497 bytes (497 B) copied, 0.00032077 s, 1.5 MB/s\n'; <rc> = 0 Thread-243::DEBUG::2013-10-30 11:25:17,777::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/dom_md/metadata bs=4096 count=1' (cwd None) Thread-243::DEBUG::2013-10-30 11:25:17,780::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n373 bytes (373 B) copied, 9.9232e-05 s, 3.8 MB/s\n'; <rc> = 0
engine.log
2013-10-30 11:17:47,729 INFO [org.ovirt.engine.core.bll.RunVmOnceCommand] (ajp--127.0.0.1-8702-6) Running command: RunVmOnceCommand internal: false. Entities affected : ID: 200dfb05-461e-49d9-95a2-c0a7c7ced669 Type: VM
2013-10-30 11:17:47,743 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (ajp--127.0.0.1-8702-6) START, CreateVmVDSCommand(HostName = Hypervisor, HostId = da764821-0650-40f0-b7cf-038766fbd2ca, vmId=200dfb05-461e-49d9-95a2-c0a7c7ced669, vm=VM [CentOS_30]), log id: 638a3a0e
2013-10-30 11:17:47,765 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (ajp--127.0.0.1-8702-6) START, CreateVDSCommand(HostName = Hypervisor, HostId = da764821-0650-40f0-b7cf-038766fbd2ca, vmId=200dfb05-461e-49d9-95a2-c0a7c7ced669, vm=VM [CentOS_30]), log id: 297dbca2
2013-10-30 11:17:47,784 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (ajp--127.0.0.1-8702-6) org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand spiceSslCipherSuite=DEFAULT,memSize=2048,kvmEnable=true,smp=1,vmType=kvm,emulatedMachine=pc-1.0,keyboardLayout=en-us,memGuaranteedSize=1365,pitReinjection=false,nice=0,display=qxl,smartcardEnable=false,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,timeOffset=0,transparentHugePages=true,vmId=200dfb05-461e-49d9-95a2-c0a7c7ced669,devices=[Ljava.util.HashMap;@f3eee3e,acpiEnable=true,vmName=CentOS_30,cpuType=Westmere,custom={}
2013-10-30 11:17:47,785 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (ajp--127.0.0.1-8702-6) FINISH, CreateVDSCommand, log id: 297dbca2
2013-10-30 11:17:47,786 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (ajp--127.0.0.1-8702-6) FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 638a3a0e
2013-10-30 11:17:47,812 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-6) Correlation ID: 4f8bc4f9, Job ID: 74a77fc0-7e83-44c0-b3d7-334eb38612b4, Call Stack: null, Custom Event ID: -1, Message: VM CentOS_30 was started by admin@internal (Host: Hypervisor).
2013-10-30 11:17:51,115 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-37) START, FullListVdsCommand(HostName = Hypervisor, HostId = da764821-0650-40f0-b7cf-038766fbd2ca, vds=Host[Hypervisor], vmIds=[200dfb05-461e-49d9-95a2-c0a7c7ced669]), log id: 546bd3e2
2013-10-30 11:17:51,119 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-37) FINISH, FullListVdsCommand, return: [Ljava.util.HashMap;@318a7164, log id: 546bd3e2
2013-10-30 11:17:51,121 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a scsi Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {model=virtio-scsi, device=scsi, type=controller}
2013-10-30 11:17:51,121 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a cdrom Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {shared=false, bootOrder=1, iface=ide, index=2, specParams={}, device=cdrom, path=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso, type=disk, readonly=true, deviceId=e29c7ac7-f872-43e3-90e2-cb2205c0203f}
2013-10-30 11:17:51,122 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a disk Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {shared=false, volumeID=202adc86-7857-40eb-8ca6-7cec3ef0d2d6, index=0, propagateErrors=off, format=raw, type=disk, truesize=4096, reqsize=0, bootOrder=2, iface=virtio, volumeChain=[Ljava.lang.Object;@8741843, imageID=b73a79fe-401b-434c-b023-6de4c3eb100d, domainID=21e3d67f-fdec-4f87-b09e-53af994a6f87, specParams={}, optional=false, volumeInfo={volfileServer=172.16.0.100, volPort=0, protocol=gluster, volTransport=tcp, path=vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/images/b73a79fe-401b-434c-b023-6de4c3eb100d/202adc86-7857-40eb-8ca6-7cec3ef0d2d6, volType=network}, path=/rhev/data-center/e3cb7694-47f7-4eac-9201-922e381a25dc/21e3d67f-fdec-4f87-b09e-53af994a6f87/images/b73a79fe-401b-434c-b023-6de4c3eb100d/202adc86-7857-40eb-8ca6-7cec3ef0d2d6, device=disk, poolID=e3cb7694-47f7-4eac-9201-922e381a25dc, readonly=false, deviceId=b73a79fe-401b-434c-b023-6de4c3eb100d, apparentsize=32212254720}
2013-10-30 11:17:51,123 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a memballoon Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {specParams={model=virtio}, target=2097152, device=memballoon, type=balloon, deviceId=b61b058d-cdfe-4d7c-8e6c-1bfd631cb26c}
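[Editor's note: the `volumeInfo` dict in the disk entry above is what vdsm passes down for the gluster disk, and the `gluster://` URL visible in the qemu command line is composed from its `volfileServer` and `path` fields. A minimal illustrative sketch of that composition (a hypothetical helper, not vdsm's actual code, which also handles ports and transports):]

```python
def gluster_url(volume_info):
    """Compose a libgfapi-style gluster:// URL from a vdsm volumeInfo dict.

    Hypothetical helper for illustration only; the 'path' field already
    begins with the volume name (here: "vms/...").
    """
    assert volume_info.get("protocol") == "gluster"
    return "gluster://{server}/{path}".format(
        server=volume_info["volfileServer"],
        path=volume_info["path"],
    )

# Values taken from the engine.log entry above.
info = {
    "volfileServer": "172.16.0.100",
    "volPort": 0,
    "protocol": "gluster",
    "volTransport": "tcp",
    "path": "vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/images/"
            "b73a79fe-401b-434c-b023-6de4c3eb100d/"
            "202adc86-7857-40eb-8ca6-7cec3ef0d2d6",
}
print(gluster_url(info))
```

This matches the `file=gluster://172.16.0.100/vms/...` argument in the qemu-kvm command line quoted earlier in the thread.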
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/200dfb05-461e-49d9-95a2-c0a7c7ced669.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci

vdsm.log

Thread-1239::DEBUG::2013-10-30 11:25:03,420::task::974::TaskManager.Task::(_decref) Task=`d891f3d3-070d-403f-9050-f10651a7fc47`::ref 0 aborting False
Thread-242::DEBUG::2013-10-30 11:25:04,982::domainMonitor::178::Storage.DomainMonitorThread::(_monitorDomain) Refreshing domain 21e3d67f-fdec-4f87-b09e-53af994a6f87
Thread-242::DEBUG::2013-10-30 11:25:04,992::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87
Thread-242::DEBUG::2013-10-30 11:25:04,994::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-242::DEBUG::2013-10-30 11:25:05,005::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=vms', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1', 'POOL_DESCRIPTION=GlusterSD', 'POOL_DOMAINS=21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'POOL_SPM_ID=1', 'POOL_SPM_LVER=3', 'POOL_UUID=e3cb7694-47f7-4eac-9201-922e381a25dc', 'REMOTE_PATH=172.16.0.100:vms', 'ROLE=Master', 'SDUUID=21e3d67f-fdec-4f87-b09e-53af994a6f87', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2d0dc3836063d45dd1d8046aaf00bce05c2455ea']
Thread-242::DEBUG::2013-10-30 11:25:05,008::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
Thread-242::INFO::2013-10-30 11:25:05,008::sd::374::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 21e3d67f-fdec-4f87-b09e-53af994a6f87_imageNS already registered
Thread-242::INFO::2013-10-30 11:25:05,008::sd::382::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 21e3d67f-fdec-4f87-b09e-53af994a6f87_volumeNS already registered
Thread-242::DEBUG::2013-10-30 11:25:05,020::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-242::DEBUG::2013-10-30 11:25:05,026::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n497 bytes (497 B) copied, 0.000280701 s, 1.8 MB/s\n'; <rc> = 0
Thread-243::DEBUG::2013-10-30 11:25:07,721::domainMonitor::178::Storage.DomainMonitorThread::(_monitorDomain) Refreshing domain 8e56a914-3395-48ad-ab4c-c2ca7f9956b4
Thread-243::DEBUG::2013-10-30 11:25:07,730::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4
Thread-243::DEBUG::2013-10-30 11:25:07,731::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-243::DEBUG::2013-10-30 11:25:07,740::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Iso', 'DESCRIPTION=ISO', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=0', 'POOL_UUID=01b79582-75e3-436d-bade-db653207620b,e3cb7694-47f7-4eac-9201-922e381a25dc', 'REMOTE_PATH=172.16.0.5:/home/iso2', 'ROLE=Regular', 'SDUUID=8e56a914-3395-48ad-ab4c-c2ca7f9956b4', 'TYPE=NFS', 'VERSION=0', '_SHA_CKSUM=b0d102325f279bb8efa68c594cbace0b11c66783']
Thread-243::DEBUG::2013-10-30 11:25:07,741::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
Thread-243::INFO::2013-10-30 11:25:07,741::sd::374::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 8e56a914-3395-48ad-ab4c-c2ca7f9956b4_imageNS already registered
Thread-243::INFO::2013-10-30 11:25:07,741::sd::382::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 8e56a914-3395-48ad-ab4c-c2ca7f9956b4_volumeNS already registered
Thread-243::DEBUG::2013-10-30 11:25:07,757::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-243::DEBUG::2013-10-30 11:25:07,760::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n373 bytes (373 B) copied, 0.000128829 s, 2.9 MB/s\n'; <rc> = 0
Thread-1246::DEBUG::2013-10-30 11:25:09,944::task::579::TaskManager.Task::(_updateState) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::moving from state init -> state preparing
Thread-1246::INFO::2013-10-30 11:25:09,944::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-1246::INFO::2013-10-30 11:25:09,945::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'delay': '0.000280701', 'lastCheck': '4.9', 'code': 0, 'valid': True, 'version': 3}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'delay': '0.000128829', 'lastCheck': '2.2', 'code': 0, 'valid': True, 'version': 0}}
Thread-1246::DEBUG::2013-10-30 11:25:09,945::task::1168::TaskManager.Task::(prepare) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::finished: {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'delay': '0.000280701', 'lastCheck': '4.9', 'code': 0, 'valid': True, 'version': 3}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'delay': '0.000128829', 'lastCheck': '2.2', 'code': 0, 'valid': True, 'version': 0}}
Thread-1246::DEBUG::2013-10-30 11:25:09,945::task::579::TaskManager.Task::(_updateState) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::moving from state preparing -> state finished
Thread-1246::DEBUG::2013-10-30 11:25:09,945::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1246::DEBUG::2013-10-30 11:25:09,945::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1246::DEBUG::2013-10-30 11:25:09,945::task::974::TaskManager.Task::(_decref) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::ref 0 aborting False
Thread-1249::DEBUG::2013-10-30 11:25:13,507::BindingXMLRPC::177::vds::(wrapper) client [172.16.0.5]
Thread-1249::DEBUG::2013-10-30 11:25:13,507::task::579::TaskManager.Task::(_updateState) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::moving from state init -> state preparing
Thread-1249::INFO::2013-10-30 11:25:13,507::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='e3cb7694-47f7-4eac-9201-922e381a25dc', options=None)
Thread-1249::INFO::2013-10-30 11:25:13,508::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 3}}
Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::1168::TaskManager.Task::(prepare) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::finished: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 3}}
Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::579::TaskManager.Task::(_updateState) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::moving from state preparing -> state finished
Thread-1249::DEBUG::2013-10-30 11:25:13,508::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1249::DEBUG::2013-10-30 11:25:13,508::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::974::TaskManager.Task::(_decref) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::ref 0 aborting False
Thread-1250::DEBUG::2013-10-30 11:25:13,510::BindingXMLRPC::177::vds::(wrapper) client [172.16.0.5]
Thread-1250::DEBUG::2013-10-30 11:25:13,510::task::579::TaskManager.Task::(_updateState) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::moving from state init -> state preparing
Thread-1250::INFO::2013-10-30 11:25:13,511::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='e3cb7694-47f7-4eac-9201-922e381a25dc', options=None)
Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.e3cb7694-47f7-4eac-9201-922e381a25dc`ReqID=`42f323ef-aa03-49e6-893f-0e4ff0764b6b`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2560' at 'getStoragePoolInfo'
Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' for lock type 'shared'
Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' is free. Now locking as 'shared' (1 active user)
Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.e3cb7694-47f7-4eac-9201-922e381a25dc`ReqID=`42f323ef-aa03-49e6-893f-0e4ff0764b6b`::Granted request
Thread-1250::DEBUG::2013-10-30 11:25:13,512::task::811::TaskManager.Task::(resourceAcquired) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::_resourcesAcquired: Storage.e3cb7694-47f7-4eac-9201-922e381a25dc (shared)
Thread-1250::DEBUG::2013-10-30 11:25:13,512::task::974::TaskManager.Task::(_decref) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::ref 1 aborting False
Thread-1250::INFO::2013-10-30 11:25:13,517::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid': '21e3d67f-fdec-4f87-b09e-53af994a6f87', 'name': 'GlusterSD', 'version': '3', 'domains': '21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111', 'type': 'GLUSTERFS', 'master_ver': 1, 'lver': 3}, 'dominfo': {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'status': 'Active', 'diskfree': '239969632256', 'alerts': [], 'version': 3, 'disktotal': '492120440832'}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'status': 'Active', 'diskfree': '201653747712', 'alerts': [], 'version': 0, 'disktotal': '246783410176'}}}
Thread-1250::DEBUG::2013-10-30 11:25:13,517::task::1168::TaskManager.Task::(prepare) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::finished: {'info': {'spm_id': 1, 'master_uuid': '21e3d67f-fdec-4f87-b09e-53af994a6f87', 'name': 'GlusterSD', 'version': '3', 'domains': '21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111', 'type': 'GLUSTERFS', 'master_ver': 1, 'lver': 3}, 'dominfo': {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'status': 'Active', 'diskfree': '239969632256', 'alerts': [], 'version': 3, 'disktotal': '492120440832'}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'status': 'Active', 'diskfree': '201653747712', 'alerts': [], 'version': 0, 'disktotal': '246783410176'}}}
Thread-1250::DEBUG::2013-10-30 11:25:13,517::task::579::TaskManager.Task::(_updateState) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::moving from state preparing -> state finished
Thread-1250::DEBUG::2013-10-30 11:25:13,517::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc': < ResourceRef 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc', isValid: 'True' obj: 'None'>}
Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc'
Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' (0 active users)
Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' is free, finding out if anyone is waiting for it.
Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc', Clearing records.
Thread-1250::DEBUG::2013-10-30 11:25:13,518::task::974::TaskManager.Task::(_decref) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::ref 0 aborting False
Thread-242::DEBUG::2013-10-30 11:25:15,046::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-242::DEBUG::2013-10-30 11:25:15,051::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n497 bytes (497 B) copied, 0.00032077 s, 1.5 MB/s\n'; <rc> = 0
Thread-243::DEBUG::2013-10-30 11:25:17,777::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-243::DEBUG::2013-10-30 11:25:17,780::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n373 bytes (373 B) copied, 9.9232e-05 s, 3.8 MB/s\n'; <rc> = 0
Thread-242::DEBUG::2013-10-30 11:25:05,005::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=vms', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1', 'POOL_DESCRIPTION=GlusterSD', 'POOL_DOMAINS=21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'POOL_SPM_ID=1', 'POOL_SPM_LVER=3', 'POOL_UUID=e3cb7694-47f7-4eac-9201-922e381a25dc', 'REMOTE_PATH=172.16.0.100:vms', 'ROLE=Master', 'SDUUID=21e3d67f-fdec-4f87-b09e-53af994a6f87', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2d0dc3836063d45dd1d8046aaf00bce05c2455ea']<br> Thread-242::DEBUG::2013-10-30 11:25:05,008::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []<br> Thread-242::<a class="moz-txt-link-freetext" href="INFO::2013-10-30" target="_blank">INFO::2013-10-30</a> 11:25:05,008::sd::374::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 21e3d67f-fdec-4f87-b09e-53af994a6f87_imageNS already registered<br> Thread-242::<a class="moz-txt-link-freetext" href="INFO::2013-10-30" target="_blank">INFO::2013-10-30</a> 11:25:05,008::sd::382::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 21e3d67f-fdec-4f87-b09e-53af994a6f87_volumeNS already registered<br> Thread-242::DEBUG::2013-10-30 11:25:05,020::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/dom_md/metadata bs=4096 count=1' (cwd None)<br> Thread-242::DEBUG::2013-10-30 11:25:05,026::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n497 bytes (497 B) copied, 0.000280701 s, 1.8 MB/s\n'; <rc> = 0<br> Thread-243::DEBUG::2013-10-30 11:25:07,721::domainMonitor::178::Storage.DomainMonitorThread::(_monitorDomain) Refreshing domain 
8e56a914-3395-48ad-ab4c-c2ca7f9956b4<br> Thread-243::DEBUG::2013-10-30 11:25:07,730::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4<br> Thread-243::DEBUG::2013-10-30 11:25:07,731::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend<br> Thread-243::DEBUG::2013-10-30 11:25:07,740::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Iso', 'DESCRIPTION=ISO', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=0', 'POOL_UUID=01b79582-75e3-436d-bade-db653207620b,e3cb7694-47f7-4eac-9201-922e381a25dc', 'REMOTE_PATH=172.16.0.5:/home/iso2', 'ROLE=Regular', 'SDUUID=8e56a914-3395-48ad-ab4c-c2ca7f9956b4', 'TYPE=NFS', 'VERSION=0', '_SHA_CKSUM=b0d102325f279bb8efa68c594cbace0b11c66783']<br> Thread-243::DEBUG::2013-10-30 11:25:07,741::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []<br> Thread-243::<a class="moz-txt-link-freetext" href="INFO::2013-10-30" target="_blank">INFO::2013-10-30</a> 11:25:07,741::sd::374::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 8e56a914-3395-48ad-ab4c-c2ca7f9956b4_imageNS already registered<br> Thread-243::<a class="moz-txt-link-freetext" href="INFO::2013-10-30" target="_blank">INFO::2013-10-30</a> 11:25:07,741::sd::382::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 8e56a914-3395-48ad-ab4c-c2ca7f9956b4_volumeNS already registered<br> Thread-243::DEBUG::2013-10-30 11:25:07,757::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/dom_md/metadata bs=4096 count=1' (cwd None)<br> Thread-243::DEBUG::2013-10-30 11:25:07,760::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 
records in\n0+1 records out\n373 bytes (373 B) copied, 0.000128829 s, 2.9 MB/s\n'; <rc> = 0<br> Thread-1246::DEBUG::2013-10-30 11:25:09,944::task::579::TaskManager.Task::(_updateState) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::moving from state init -> state preparing<br> Thread-1246::<a class="moz-txt-link-freetext" href="INFO::2013-10-30" target="_blank">INFO::2013-10-30</a> 11:25:09,944::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)<br> Thread-1246::<a class="moz-txt-link-freetext" href="INFO::2013-10-30" target="_blank">INFO::2013-10-30</a> 11:25:09,945::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'delay': '0.000280701', 'lastCheck': '4.9', 'code': 0, 'valid': True, 'version': 3}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'delay': '0.000128829', 'lastCheck': '2.2', 'code': 0, 'valid': True, 'version': 0}}<br> Thread-1246::DEBUG::2013-10-30 11:25:09,945::task::1168::TaskManager.Task::(prepare) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::finished: {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'delay': '0.000280701', 'lastCheck': '4.9', 'code': 0, 'valid': True, 'version': 3}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'delay': '0.000128829', 'lastCheck': '2.2', 'code': 0, 'valid': True, 'version': 0}}<br> Thread-1246::DEBUG::2013-10-30 11:25:09,945::task::579::TaskManager.Task::(_updateState) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::moving from state preparing -> state finished<br> Thread-1246::DEBUG::2013-10-30 11:25:09,945::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}<br> Thread-1246::DEBUG::2013-10-30 11:25:09,945::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}<br> Thread-1246::DEBUG::2013-10-30 11:25:09,945::task::974::TaskManager.Task::(_decref) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::ref 0 aborting False<br> Thread-1249::DEBUG::2013-10-30 
11:25:13,507::BindingXMLRPC::177::vds::(wrapper) client [172.16.0.5]
Thread-1249::DEBUG::2013-10-30 11:25:13,507::task::579::TaskManager.Task::(_updateState) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::moving from state init -> state preparing
Thread-1249::INFO::2013-10-30 11:25:13,507::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='e3cb7694-47f7-4eac-9201-922e381a25dc', options=None)
Thread-1249::INFO::2013-10-30 11:25:13,508::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 3}}
Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::1168::TaskManager.Task::(prepare) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::finished: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 3}}
Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::579::TaskManager.Task::(_updateState) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::moving from state preparing -> state finished
Thread-1249::DEBUG::2013-10-30 11:25:13,508::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1249::DEBUG::2013-10-30 11:25:13,508::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::974::TaskManager.Task::(_decref) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::ref 0 aborting False
Thread-1250::DEBUG::2013-10-30 11:25:13,510::BindingXMLRPC::177::vds::(wrapper) client [172.16.0.5]
Thread-1250::DEBUG::2013-10-30 11:25:13,510::task::579::TaskManager.Task::(_updateState) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::moving from state init -> state preparing
Thread-1250::INFO::2013-10-30 11:25:13,511::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='e3cb7694-47f7-4eac-9201-922e381a25dc', options=None)
Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.e3cb7694-47f7-4eac-9201-922e381a25dc`ReqID=`42f323ef-aa03-49e6-893f-0e4ff0764b6b`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2560' at 'getStoragePoolInfo'
Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' for lock type 'shared'
Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' is free. Now locking as 'shared' (1 active user)
Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.e3cb7694-47f7-4eac-9201-922e381a25dc`ReqID=`42f323ef-aa03-49e6-893f-0e4ff0764b6b`::Granted request
Thread-1250::DEBUG::2013-10-30 11:25:13,512::task::811::TaskManager.Task::(resourceAcquired) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::_resourcesAcquired: Storage.e3cb7694-47f7-4eac-9201-922e381a25dc (shared)
Thread-1250::DEBUG::2013-10-30 11:25:13,512::task::974::TaskManager.Task::(_decref) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::ref 1 aborting False
Thread-1250::INFO::2013-10-30 11:25:13,517::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid': '21e3d67f-fdec-4f87-b09e-53af994a6f87', 'name': 'GlusterSD', 'version': '3', 'domains': '21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111', 'type': 'GLUSTERFS', 'master_ver': 1, 'lver': 3}, 'dominfo': {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'status': 'Active', 'diskfree': '239969632256', 'alerts': [], 'version': 3, 'disktotal': '492120440832'}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'status': 'Active', 'diskfree': '201653747712', 'alerts': [], 'version': 0, 'disktotal': '246783410176'}}}
Thread-1250::DEBUG::2013-10-30 11:25:13,517::task::1168::TaskManager.Task::(prepare) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::finished: {'info': {'spm_id': 1, 'master_uuid': '21e3d67f-fdec-4f87-b09e-53af994a6f87', 'name': 'GlusterSD', 'version': '3', 'domains': '21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111', 'type': 'GLUSTERFS', 'master_ver': 1, 'lver': 3}, 'dominfo': {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'status': 'Active', 'diskfree': '239969632256', 'alerts': [], 'version': 3, 'disktotal': '492120440832'}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'status': 'Active', 'diskfree': '201653747712', 'alerts': [], 'version': 0, 'disktotal': '246783410176'}}}
Thread-1250::DEBUG::2013-10-30 11:25:13,517::task::579::TaskManager.Task::(_updateState) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::moving from state preparing -> state finished
Thread-1250::DEBUG::2013-10-30 11:25:13,517::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc': < ResourceRef 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc', isValid: 'True' obj: 'None'>}
Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc'
Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' (0 active users)
Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' is free, finding out if anyone is waiting for it.
Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc', Clearing records.
Thread-1250::DEBUG::2013-10-30 11:25:13,518::task::974::TaskManager.Task::(_decref) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::ref 0 aborting False
Thread-242::DEBUG::2013-10-30 11:25:15,046::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-242::DEBUG::2013-10-30 11:25:15,051::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n497 bytes (497 B) copied, 0.00032077 s, 1.5 MB/s\n'; <rc> = 0
Thread-243::DEBUG::2013-10-30 11:25:17,777::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-243::DEBUG::2013-10-30 11:25:17,780::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n373 bytes (373 B) copied, 9.9232e-05 s, 3.8 MB/s\n'; <rc> = 0

engine.log

2013-10-30 11:17:47,729 INFO [org.ovirt.engine.core.bll.RunVmOnceCommand] (ajp--127.0.0.1-8702-6) Running command: RunVmOnceCommand internal: false. Entities affected : ID: 200dfb05-461e-49d9-95a2-c0a7c7ced669 Type: VM
2013-10-30 11:17:47,743 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (ajp--127.0.0.1-8702-6) START, CreateVmVDSCommand(HostName = Hypervisor, HostId = da764821-0650-40f0-b7cf-038766fbd2ca, vmId=200dfb05-461e-49d9-95a2-c0a7c7ced669, vm=VM [CentOS_30]), log id: 638a3a0e
2013-10-30 11:17:47,765 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (ajp--127.0.0.1-8702-6) START, CreateVDSCommand(HostName = Hypervisor, HostId = da764821-0650-40f0-b7cf-038766fbd2ca, vmId=200dfb05-461e-49d9-95a2-c0a7c7ced669, vm=VM [CentOS_30]), log id: 297dbca2
2013-10-30 11:17:47,784 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (ajp--127.0.0.1-8702-6) org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand spiceSslCipherSuite=DEFAULT,memSize=2048,kvmEnable=true,smp=1,vmType=kvm,emulatedMachine=pc-1.0,keyboardLayout=en-us,memGuaranteedSize=1365,pitReinjection=false,nice=0,display=qxl,smartcardEnable=false,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,timeOffset=0,transparentHugePages=true,vmId=200dfb05-461e-49d9-95a2-c0a7c7ced669,devices=[Ljava.util.HashMap;@f3eee3e,acpiEnable=true,vmName=CentOS_30,cpuType=Westmere,custom={}
2013-10-30 11:17:47,785 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (ajp--127.0.0.1-8702-6) FINISH, CreateVDSCommand, log id: 297dbca2
2013-10-30 11:17:47,786 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (ajp--127.0.0.1-8702-6) FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 638a3a0e
2013-10-30 11:17:47,812 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-6) Correlation ID: 4f8bc4f9, Job ID: 74a77fc0-7e83-44c0-b3d7-334eb38612b4, Call Stack: null, Custom Event ID: -1, Message: VM CentOS_30 was started by admin@internal (Host: Hypervisor).
2013-10-30 11:17:51,115 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-37) START, FullListVdsCommand(HostName = Hypervisor, HostId = da764821-0650-40f0-b7cf-038766fbd2ca, vds=Host[Hypervisor], vmIds=[200dfb05-461e-49d9-95a2-c0a7c7ced669]), log id: 546bd3e2
2013-10-30 11:17:51,119 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-37) FINISH, FullListVdsCommand, return: [Ljava.util.HashMap;@318a7164, log id: 546bd3e2
2013-10-30 11:17:51,121 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a scsi Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {model=virtio-scsi, device=scsi, type=controller}
2013-10-30 11:17:51,121 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a cdrom Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {shared=false, bootOrder=1, iface=ide, index=2, specParams={}, device=cdrom, path=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso, type=disk, readonly=true, deviceId=e29c7ac7-f872-43e3-90e2-cb2205c0203f}
2013-10-30 11:17:51,122 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a disk Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {shared=false, volumeID=202adc86-7857-40eb-8ca6-7cec3ef0d2d6, index=0, propagateErrors=off, format=raw, type=disk, truesize=4096, reqsize=0, bootOrder=2, iface=virtio, volumeChain=[Ljava.lang.Object;@8741843, imageID=b73a79fe-401b-434c-b023-6de4c3eb100d, domainID=21e3d67f-fdec-4f87-b09e-53af994a6f87, specParams={}, optional=false, volumeInfo={volfileServer=172.16.0.100, volPort=0, protocol=gluster, volTransport=tcp, path=vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/images/b73a79fe-401b-434c-b023-6de4c3eb100d/202adc86-7857-40eb-8ca6-7cec3ef0d2d6, volType=network}, path=/rhev/data-center/e3cb7694-47f7-4eac-9201-922e381a25dc/21e3d67f-fdec-4f87-b09e-53af994a6f87/images/b73a79fe-401b-434c-b023-6de4c3eb100d/202adc86-7857-40eb-8ca6-7cec3ef0d2d6, device=disk, poolID=e3cb7694-47f7-4eac-9201-922e381a25dc, readonly=false, deviceId=b73a79fe-401b-434c-b023-6de4c3eb100d, apparentsize=32212254720}
2013-10-30 11:17:51,123 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a memballoon Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {specParams={model=virtio}, target=2097152, device=memballoon, type=balloon, deviceId=b61b058d-cdfe-4d7c-8e6c-1bfd631cb26c}
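[Editor's note] One quick sanity check the getStoragePoolInfo response above allows: both storage domains report plenty of free space, so disk exhaustion can be ruled out as the cause of the hang. A minimal sketch of that arithmetic, using the dominfo byte counts copied verbatim from the log (the domain labels in the comments are inferred from the thread, not stated in the response itself):

```python
# dominfo figures from the getStoragePoolInfo response in vdsm.log (bytes).
domains = {
    "21e3d67f-fdec-4f87-b09e-53af994a6f87":  # gluster data domain (inferred)
        {"diskfree": 239969632256, "disktotal": 492120440832},
    "8e56a914-3395-48ad-ab4c-c2ca7f9956b4":  # NFS ISO domain (inferred)
        {"diskfree": 201653747712, "disktotal": 246783410176},
}

for uuid, d in domains.items():
    pct_free = 100.0 * d["diskfree"] / d["disktotal"]
    print(f"{uuid}: {pct_free:.1f}% free")
# -> roughly 48.8% free on the data domain and 81.7% on the ISO domain
```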