----- Original Message -----
From: "Alessandro Bianchi" <a.bianchi(a)skynet.it>
To: users(a)ovirt.org
Sent: Wednesday, October 30, 2013 12:49:27 PM
Subject: Re: [Users] Gluster VM stuck in "waiting for launch" state

Thank you for your answer: yes, it's marked as up.

> > From: "Alessandro Bianchi" <a.bianchi(a)skynet.it>
> > To: users(a)ovirt.org
> > Sent: Wednesday, October 30, 2013 12:34:21 PM
> > Subject: [Users] Gluster VM stuck in "waiting for launch" state
> >
> > Hi everyone,
> >
> > I've set up a Gluster storage domain with two replicated bricks.
> >
> > The DC is up, and I created a VM to test the Gluster storage.
> >
> > If I start the VM WITHOUT any disk attached (only one virtual DVD), it starts fine.
> >
> > If I attach a Gluster-domain disk (thin provisioned, 30 GB), the VM gets stuck in the "waiting for launch" state.
> >
> > I see no special activity on the Gluster servers (they serve several other shares with no trouble at all, and even the ISO domain is an NFS export of a locally mounted Gluster volume and works fine).
> >
> > I've double-checked all the prerequisites and they look fine (Fedora 19; Gluster set up as insecure in both glusterd.vol and the volume options; uid/gid/insecure).
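> >
> > (To be concrete about "insecure": what I applied is, as far as I can tell, the standard oVirt/Gluster prerequisite set. A sketch of it, with "vms" being my volume name:
> >
> > # in /etc/glusterfs/glusterd.vol on every brick server, then restart glusterd:
> > #     option rpc-auth-allow-insecure on
> > gluster volume set vms server.allow-insecure on
> > # uid/gid 36 is vdsm:kvm on the hypervisors
> > gluster volume set vms storage.owner-uid 36
> > gluster volume set vms storage.owner-gid 36
> >
> > If any of that looks wrong or incomplete, please say so.)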
> >
> > Am I doing something wrong?
> >
> > I'm even unable to stop the VM from the engine GUI.
> >
> > Any advice?

> What is the status of the host that runs the VM?
> Is it up?

Yes.

Great. What does vdsClient return for this VM's status?
(You can use:
vdsClient -s 0 list table
or, if you work in a non-secure setup:
vdsClient 0 list table
)
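
(If vdsClient itself hangs, a read-only peek at libvirt from the hypervisor is another way to see the domain state; the -r flag should need no credentials on a vdsm host:

virsh -r list --all
virsh -r domstate CentOS_30

This only cross-checks what libvirt thinks, so the vdsClient output is still the interesting part.)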

> > Thank you
> >
> > Best regards
> >
> > CentOS_30.log
> >
> > 2013-10-30 10:17:48.094+0000: starting up
> > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> > QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name CentOS_30 -S -machine pc-1.0,accel=kvm,usb=off -cpu Westmere -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid 200dfb05-461e-49d9-95a2-c0a7c7ced669 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-4,serial=316140A0-D7DA-11DD-AF75-08606ED667F7,uuid=200dfb05-461e-49d9-95a2-c0a7c7ced669 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/CentOS_30.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-10-30T10:17:47,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=gluster://172.16.0.100/vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/images/b73a79fe-401b-434c-b023-6de4c3eb100d/202adc86-7857-40eb-8ca6-7cec3ef0d2d6,if=none,id=drive-virtio-disk0,format=raw,serial=b73a79fe-401b-434c-b023-6de4c3eb100d,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:00:00:33,bus=pci.0,addr=0x3,bootindex=3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/200dfb05-461e-49d9-95a2-c0a7c7ced669.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/200dfb05-461e-49d9-95a2-c0a7c7ced669.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci
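
One thing worth ruling out while the VM sits in WaitForLaunch: whether qemu can open that gluster:// disk URL at all outside of vdsm. A quick probe from the hypervisor (assuming your qemu-kvm build includes the gluster block driver, which oVirt 3.3 GlusterFS domains require) would be something like:

qemu-img info gluster://172.16.0.100/vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/images/b73a79fe-401b-434c-b023-6de4c3eb100d/202adc86-7857-40eb-8ca6-7cec3ef0d2d6

If that also hangs or fails with a permission error, the problem is in the gluster transport (insecure-port settings or brick ports) rather than in vdsm or libvirt.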
> > > vdsm.log
> > >
> > > Thread-1239::DEBUG::2013-10-30 11:25:03,420::task::974::TaskManager.Task::(_decref) Task=`d891f3d3-070d-403f-9050-f10651a7fc47`::ref 0 aborting False
> > > Thread-242::DEBUG::2013-10-30 11:25:04,982::domainMonitor::178::Storage.DomainMonitorThread::(_monitorDomain) Refreshing domain 21e3d67f-fdec-4f87-b09e-53af994a6f87
> > > Thread-242::DEBUG::2013-10-30 11:25:04,992::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87
> > > Thread-242::DEBUG::2013-10-30 11:25:04,994::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
> > > Thread-242::DEBUG::2013-10-30 11:25:05,005::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=vms', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1', 'POOL_DESCRIPTION=GlusterSD', 'POOL_DOMAINS=21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'POOL_SPM_ID=1', 'POOL_SPM_LVER=3', 'POOL_UUID=e3cb7694-47f7-4eac-9201-922e381a25dc', 'REMOTE_PATH=172.16.0.100:vms', 'ROLE=Master', 'SDUUID=21e3d67f-fdec-4f87-b09e-53af994a6f87', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2d0dc3836063d45dd1d8046aaf00bce05c2455ea']
> > > Thread-242::DEBUG::2013-10-30 11:25:05,008::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
> > > Thread-242::INFO::2013-10-30 11:25:05,008::sd::374::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 21e3d67f-fdec-4f87-b09e-53af994a6f87_imageNS already registered
> > > Thread-242::INFO::2013-10-30 11:25:05,008::sd::382::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 21e3d67f-fdec-4f87-b09e-53af994a6f87_volumeNS already registered
> > > Thread-242::DEBUG::2013-10-30 11:25:05,020::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/dom_md/metadata bs=4096 count=1' (cwd None)
> > > Thread-242::DEBUG::2013-10-30 11:25:05,026::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n497 bytes (497 B) copied, 0.000280701 s, 1.8 MB/s\n'; <rc> = 0
> > > Thread-243::DEBUG::2013-10-30 11:25:07,721::domainMonitor::178::Storage.DomainMonitorThread::(_monitorDomain) Refreshing domain 8e56a914-3395-48ad-ab4c-c2ca7f9956b4
> > > Thread-243::DEBUG::2013-10-30 11:25:07,730::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4
> > > Thread-243::DEBUG::2013-10-30 11:25:07,731::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
> > > Thread-243::DEBUG::2013-10-30 11:25:07,740::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Iso', 'DESCRIPTION=ISO', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=0', 'POOL_UUID=01b79582-75e3-436d-bade-db653207620b,e3cb7694-47f7-4eac-9201-922e381a25dc', 'REMOTE_PATH=172.16.0.5:/home/iso2', 'ROLE=Regular', 'SDUUID=8e56a914-3395-48ad-ab4c-c2ca7f9956b4', 'TYPE=NFS', 'VERSION=0', '_SHA_CKSUM=b0d102325f279bb8efa68c594cbace0b11c66783']
> > > Thread-243::DEBUG::2013-10-30 11:25:07,741::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
> > > Thread-243::INFO::2013-10-30 11:25:07,741::sd::374::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 8e56a914-3395-48ad-ab4c-c2ca7f9956b4_imageNS already registered
> > > Thread-243::INFO::2013-10-30 11:25:07,741::sd::382::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 8e56a914-3395-48ad-ab4c-c2ca7f9956b4_volumeNS already registered
> > > Thread-243::DEBUG::2013-10-30 11:25:07,757::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/dom_md/metadata bs=4096 count=1' (cwd None)
> > > Thread-243::DEBUG::2013-10-30 11:25:07,760::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n373 bytes (373 B) copied, 0.000128829 s, 2.9 MB/s\n'; <rc> = 0
> > > Thread-1246::DEBUG::2013-10-30 11:25:09,944::task::579::TaskManager.Task::(_updateState) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::moving from state init -> state preparing
> > > Thread-1246::INFO::2013-10-30 11:25:09,944::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> > > Thread-1246::INFO::2013-10-30 11:25:09,945::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'delay': '0.000280701', 'lastCheck': '4.9', 'code': 0, 'valid': True, 'version': 3}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'delay': '0.000128829', 'lastCheck': '2.2', 'code': 0, 'valid': True, 'version': 0}}
> > > Thread-1246::DEBUG::2013-10-30 11:25:09,945::task::1168::TaskManager.Task::(prepare) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::finished: {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'delay': '0.000280701', 'lastCheck': '4.9', 'code': 0, 'valid': True, 'version': 3}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'delay': '0.000128829', 'lastCheck': '2.2', 'code': 0, 'valid': True, 'version': 0}}
> > > Thread-1246::DEBUG::2013-10-30 11:25:09,945::task::579::TaskManager.Task::(_updateState) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::moving from state preparing -> state finished
> > > Thread-1246::DEBUG::2013-10-30 11:25:09,945::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> > > Thread-1246::DEBUG::2013-10-30 11:25:09,945::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> > > Thread-1246::DEBUG::2013-10-30 11:25:09,945::task::974::TaskManager.Task::(_decref) Task=`c064522b-6fe7-4cb2-b85d-ad1f8185919a`::ref 0 aborting False
> > > Thread-1249::DEBUG::2013-10-30 11:25:13,507::BindingXMLRPC::177::vds::(wrapper) client [172.16.0.5]
> > > Thread-1249::DEBUG::2013-10-30 11:25:13,507::task::579::TaskManager.Task::(_updateState) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::moving from state init -> state preparing
> > > Thread-1249::INFO::2013-10-30 11:25:13,507::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='e3cb7694-47f7-4eac-9201-922e381a25dc', options=None)
> > > Thread-1249::INFO::2013-10-30 11:25:13,508::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 3}}
> > > Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::1168::TaskManager.Task::(prepare) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::finished: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 3}}
> > > Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::579::TaskManager.Task::(_updateState) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::moving from state preparing -> state finished
> > > Thread-1249::DEBUG::2013-10-30 11:25:13,508::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> > > Thread-1249::DEBUG::2013-10-30 11:25:13,508::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> > > Thread-1249::DEBUG::2013-10-30 11:25:13,508::task::974::TaskManager.Task::(_decref) Task=`8518162d-eb91-44aa-819a-354e05807cb9`::ref 0 aborting False
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,510::BindingXMLRPC::177::vds::(wrapper) client [172.16.0.5]
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,510::task::579::TaskManager.Task::(_updateState) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::moving from state init -> state preparing
> > > Thread-1250::INFO::2013-10-30 11:25:13,511::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='e3cb7694-47f7-4eac-9201-922e381a25dc', options=None)
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.e3cb7694-47f7-4eac-9201-922e381a25dc`ReqID=`42f323ef-aa03-49e6-893f-0e4ff0764b6b`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2560' at 'getStoragePoolInfo'
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' for lock type 'shared'
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' is free. Now locking as 'shared' (1 active user)
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,511::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.e3cb7694-47f7-4eac-9201-922e381a25dc`ReqID=`42f323ef-aa03-49e6-893f-0e4ff0764b6b`::Granted request
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,512::task::811::TaskManager.Task::(resourceAcquired) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::_resourcesAcquired: Storage.e3cb7694-47f7-4eac-9201-922e381a25dc (shared)
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,512::task::974::TaskManager.Task::(_decref) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::ref 1 aborting False
> > > Thread-1250::INFO::2013-10-30 11:25:13,517::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid': '21e3d67f-fdec-4f87-b09e-53af994a6f87', 'name': 'GlusterSD', 'version': '3', 'domains': '21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111', 'type': 'GLUSTERFS', 'master_ver': 1, 'lver': 3}, 'dominfo': {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'status': 'Active', 'diskfree': '239969632256', 'alerts': [], 'version': 3, 'disktotal': '492120440832'}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'status': 'Active', 'diskfree': '201653747712', 'alerts': [], 'version': 0, 'disktotal': '246783410176'}}}
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,517::task::1168::TaskManager.Task::(prepare) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::finished: {'info': {'spm_id': 1, 'master_uuid': '21e3d67f-fdec-4f87-b09e-53af994a6f87', 'name': 'GlusterSD', 'version': '3', 'domains': '21e3d67f-fdec-4f87-b09e-53af994a6f87:Active,8e56a914-3395-48ad-ab4c-c2ca7f9956b4:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111', 'type': 'GLUSTERFS', 'master_ver': 1, 'lver': 3}, 'dominfo': {'21e3d67f-fdec-4f87-b09e-53af994a6f87': {'status': 'Active', 'diskfree': '239969632256', 'alerts': [], 'version': 3, 'disktotal': '492120440832'}, '8e56a914-3395-48ad-ab4c-c2ca7f9956b4': {'status': 'Active', 'diskfree': '201653747712', 'alerts': [], 'version': 0, 'disktotal': '246783410176'}}}
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,517::task::579::TaskManager.Task::(_updateState) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::moving from state preparing -> state finished
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,517::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc': < ResourceRef 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc', isValid: 'True' obj: 'None'>}
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc'
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' (0 active users)
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc' is free, finding out if anyone is waiting for it.
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,518::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.e3cb7694-47f7-4eac-9201-922e381a25dc', Clearing records.
> > > Thread-1250::DEBUG::2013-10-30 11:25:13,518::task::974::TaskManager.Task::(_decref) Task=`221b537a-2c1b-418f-93ff-bf77d4029097`::ref 0 aborting False
> > > Thread-242::DEBUG::2013-10-30 11:25:15,046::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/dom_md/metadata bs=4096 count=1' (cwd None)
> > > Thread-242::DEBUG::2013-10-30 11:25:15,051::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n497 bytes (497 B) copied, 0.00032077 s, 1.5 MB/s\n'; <rc> = 0
> > > Thread-243::DEBUG::2013-10-30 11:25:17,777::fileSD::238::Storage.Misc.excCmd::(getReadDelay) '/usr/bin/dd iflag=direct if=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/dom_md/metadata bs=4096 count=1' (cwd None)
> > > Thread-243::DEBUG::2013-10-30 11:25:17,780::fileSD::238::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n373 bytes (373 B) copied, 9.9232e-05 s, 3.8 MB/s\n'; <rc> = 0
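
(Note that the getReadDelay lines above are vdsm's own health check: it runs that exact dd against the domain metadata through the FUSE mount, and both domains answer in well under a millisecond, so the mounts themselves look healthy. You can reproduce it by hand on the hypervisor, e.g.:

dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.0.100:vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/dom_md/metadata of=/dev/null bs=4096 count=1

with of=/dev/null added so the metadata doesn't land in your terminal. Keep in mind this exercises the FUSE mount, not the gluster:// library path qemu uses, so it can succeed while the VM still hangs.)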
> > > engine.log
> > >
> > > 2013-10-30 11:17:47,729 INFO [org.ovirt.engine.core.bll.RunVmOnceCommand] (ajp--127.0.0.1-8702-6) Running command: RunVmOnceCommand internal: false. Entities affected : ID: 200dfb05-461e-49d9-95a2-c0a7c7ced669 Type: VM
> > > 2013-10-30 11:17:47,743 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (ajp--127.0.0.1-8702-6) START, CreateVmVDSCommand(HostName = Hypervisor, HostId = da764821-0650-40f0-b7cf-038766fbd2ca, vmId=200dfb05-461e-49d9-95a2-c0a7c7ced669, vm=VM [CentOS_30]), log id: 638a3a0e
> > > 2013-10-30 11:17:47,765 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (ajp--127.0.0.1-8702-6) START, CreateVDSCommand(HostName = Hypervisor, HostId = da764821-0650-40f0-b7cf-038766fbd2ca, vmId=200dfb05-461e-49d9-95a2-c0a7c7ced669, vm=VM [CentOS_30]), log id: 297dbca2
> > > 2013-10-30 11:17:47,784 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (ajp--127.0.0.1-8702-6) org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand spiceSslCipherSuite=DEFAULT,memSize=2048,kvmEnable=true,smp=1,vmType=kvm,emulatedMachine=pc-1.0,keyboardLayout=en-us,memGuaranteedSize=1365,pitReinjection=false,nice=0,display=qxl,smartcardEnable=false,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,timeOffset=0,transparentHugePages=true,vmId=200dfb05-461e-49d9-95a2-c0a7c7ced669,devices=[Ljava.util.HashMap;@f3eee3e,acpiEnable=true,vmName=CentOS_30,cpuType=Westmere,custom={}
> > > 2013-10-30 11:17:47,785 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (ajp--127.0.0.1-8702-6) FINISH, CreateVDSCommand, log id: 297dbca2
> > > 2013-10-30 11:17:47,786 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (ajp--127.0.0.1-8702-6) FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 638a3a0e
> > > 2013-10-30 11:17:47,812 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-6) Correlation ID: 4f8bc4f9, Job ID: 74a77fc0-7e83-44c0-b3d7-334eb38612b4, Call Stack: null, Custom Event ID: -1, Message: VM CentOS_30 was started by admin@internal (Host: Hypervisor).
> > > 2013-10-30 11:17:51,115 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-37) START, FullListVdsCommand(HostName = Hypervisor, HostId = da764821-0650-40f0-b7cf-038766fbd2ca, vds=Host[Hypervisor], vmIds=[200dfb05-461e-49d9-95a2-c0a7c7ced669]), log id: 546bd3e2
> > > 2013-10-30 11:17:51,119 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-37) FINISH, FullListVdsCommand, return: [Ljava.util.HashMap;@318a7164, log id: 546bd3e2
> > > 2013-10-30 11:17:51,121 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a scsi Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {model=virtio-scsi, device=scsi, type=controller}
> > > 2013-10-30 11:17:51,121 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a cdrom Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {shared=false, bootOrder=1, iface=ide, index=2, specParams={}, device=cdrom, path=/rhev/data-center/mnt/172.16.0.5:_home_iso2/8e56a914-3395-48ad-ab4c-c2ca7f9956b4/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso, type=disk, readonly=true, deviceId=e29c7ac7-f872-43e3-90e2-cb2205c0203f}
> > > 2013-10-30 11:17:51,122 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a disk Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {shared=false, volumeID=202adc86-7857-40eb-8ca6-7cec3ef0d2d6, index=0, propagateErrors=off, format=raw, type=disk, truesize=4096, reqsize=0, bootOrder=2, iface=virtio, volumeChain=[Ljava.lang.Object;@8741843, imageID=b73a79fe-401b-434c-b023-6de4c3eb100d, domainID=21e3d67f-fdec-4f87-b09e-53af994a6f87, specParams={}, optional=false, volumeInfo={volfileServer=172.16.0.100, volPort=0, protocol=gluster, volTransport=tcp, path=vms/21e3d67f-fdec-4f87-b09e-53af994a6f87/images/b73a79fe-401b-434c-b023-6de4c3eb100d/202adc86-7857-40eb-8ca6-7cec3ef0d2d6, volType=network}, path=/rhev/data-center/e3cb7694-47f7-4eac-9201-922e381a25dc/21e3d67f-fdec-4f87-b09e-53af994a6f87/images/b73a79fe-401b-434c-b023-6de4c3eb100d/202adc86-7857-40eb-8ca6-7cec3ef0d2d6, device=disk, poolID=e3cb7694-47f7-4eac-9201-922e381a25dc, readonly=false, deviceId=b73a79fe-401b-434c-b023-6de4c3eb100d, apparentsize=32212254720}
> > > 2013-10-30 11:17:51,123 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-37) Received a memballoon Device without an address when processing VM 200dfb05-461e-49d9-95a2-c0a7c7ced669 devices, skipping device: {specParams={model=virtio}, target=2097152, device=memballoon, type=balloon, deviceId=b61b058d-cdfe-4d7c-8e6c-1bfd631cb26c}
> > >
> > > _______________________________________________
> > > Users mailing list
> > > Users(a)ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users