
Thank you for reporting this issue, because I hit exactly the same thing: FC storage domain, and sometimes many of my hosts (15) become unavailable without any apparent action on them.
The error message is: storage domain is unavailable. So it is a disaster when power management is activated, because the hosts reboot at the same time and all VMs go down without migrating.
It happened to me twice, and the second time was less painful because I had deactivated power management.
It may be a serious issue, because the hosts stay reachable and the LUN is still fine when running an lvs command.
The workaround in this case is to restart the engine (restarting vdsm achieves nothing), and then all the hosts come back up.

* el6 engine on a separate KVM
* el7 and el6 hosts involved
* oVirt 3.5.1 and vdsm 4.16.10-8
* 2 FC datacenters on two remote sites managed by the same engine, and both are impacted

On 23/03/2015 16:54, Jonas Israelsson wrote:
Greetings.
Running oVirt 3.5 with a mix of NFS and FC Storage.
Engine running on a separate KVM VM and Node installed with a pre 3.5 ovirt-node "ovirt-node-iso-3.5.0.ovirt35.20140912.el6 (Edited)"
I had some problems with my FC storage where the LUNs for a while became unavailable to my oVirt host. Everything is now up and running again and those LUNs are accessible by the host. The NFS domains go back online but the FC ones do not.
Thread-22::DEBUG::2015-03-23 14:53:02,706::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 29f9b165-3674-4384-a1d4-7aa87d923d56 (cwd None)
Thread-24::DEBUG::2015-03-23 14:53:02,981::lvm::290::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  Volume group "29f9b165-3674-4384-a1d4-7aa87d923d56" not found\n  Skipping volume group 29f9b165-3674-4384-a1d4-7aa87d923d56\n'; <rc> = 5
Thread-24::WARNING::2015-03-23 14:53:02,986::lvm::372::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['  Volume group "29f9b165-3674-4384-a1d4-7aa87d923d56" not found', '  Skipping volume group 29f9b165-3674-4384-a1d4-7aa87d923d56']
Running the command above manually does indeed give the same output:
# /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 29f9b165-3674-4384-a1d4-7aa87d923d56
  Volume group "29f9b165-3674-4384-a1d4-7aa87d923d56" not found
  Skipping volume group 29f9b165-3674-4384-a1d4-7aa87d923d56
What puzzles me is that those volumes do exist.
lvm vgs
  VG                                   #PV #LV #SN Attr   VSize   VFree
  22cf06d1-faca-4e17-ac78-d38b7fc300b1   1  13   0 wz--n- 999.62g 986.50g
  29f9b165-3674-4384-a1d4-7aa87d923d56   1   8   0 wz--n-  99.62g  95.50g
  HostVG                                 1   4   0 wz--n-  13.77g  52.00m
  --- Volume group ---
  VG Name               29f9b165-3674-4384-a1d4-7aa87d923d56
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  20
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                8
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               99.62 GiB
  PE Size               128.00 MiB
  Total PE              797
  Alloc PE / Size       33 / 4.12 GiB
  Free  PE / Size       764 / 95.50 GiB
  VG UUID               aAoOcw-d9YB-y9gP-Tp4M-S0UE-Aqpx-y6Z2Uk
lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 29f9b165-3674-4384-a1d4-7aa87d923d56
aAoOcw-d9YB-y9gP-Tp4M-S0UE-Aqpx-y6Z2Uk|29f9b165-3674-4384-a1d4-7aa87d923d56|wz--n-|106971529216|102542344192|134217728|797|764|MDT_LEASETIMESEC=60,MDT_CLASS=Data,MDT_VERSION=3,MDT_SDUUID=29f9b165-3674-4384-a1d4-7aa87d923d56,MDT_PV0=pv:36001405c94d80be2ed0482c91a1841b8&44&uuid:muHcYl-sobG-3LyY-jjfg-3fGf-1cHO-uDk7da&44&pestart:0&44&pecount:797&44&mapoffset:0,MDT_LEASERETRIES=3,MDT_VGUUID=aAoOcw-d9YB-y9gP-Tp4M-S0UE-Aqpx-y6Z2Uk,MDT_IOOPTIMEOUTSEC=10,MDT_LOCKRENEWALINTERVALSEC=5,MDT_PHYBLKSIZE=512,MDT_LOGBLKSIZE=512,MDT_TYPE=FCP,MDT_LOCKPOLICY=,MDT_DESCRIPTION=Master,RHAT_storage_domain,MDT_POOL_SPM_ID=-1,MDT_POOL_DESCRIPTION=Elementary,MDT_POOL_SPM_LVER=-1,MDT_POOL_UUID=8c3c5df9-e8ff-4313-99c9-385b6c7d896b,MDT_MASTER_VERSION=10,MDT_POOL_DOMAINS=22cf06d1-faca-4e17-ac78-d38b7fc300b1:Active&44&c434ab5a-9d21-42eb-ba1b-dbd716ba3ed1:Active&44&96e62d18-652d-401a-b4b5-b54ecefa331c:Active&44&29f9b165-3674-4384-a1d4-7aa87d923d56:Active&44&1a0d3e5a-d2ad-4829-8ebd-ad3ff5463062:Active,MDT__SHA_CKSUM=7ea9af890755d96563cb7a736f8e3f46ea986f67,MDT_ROLE=Regular|134217728|67103744|8|1|/dev/sda
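The only visible difference between the vdsm-generated command that fails and the one above that succeeds is the devices filter: vdsm passed filter = [ 'r|.*|' ], which tells LVM to reject every device, so the VG can never be found. A quick way to see this effect (just an illustrative sketch, not something vdsm itself runs) is to repeat the vdsm command but with an accept-all filter and a trimmed output field list:

# Sketch: same command vdsm issued, but with the reject-all filter replaced
# by an accept-all one. If this prints the VG while the original does not,
# the filter (i.e. the device list vdsm builds) is what hides the VG.
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free 29f9b165-3674-4384-a1d4-7aa87d923d56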
[root@patty vdsm]# vdsClient -s 0 getStorageDomainsList (returns only the NFS domains)
c434ab5a-9d21-42eb-ba1b-dbd716ba3ed1
1a0d3e5a-d2ad-4829-8ebd-ad3ff5463062
a8fd9df0-48f2-40a2-88d4-7bf47fef9b07
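For comparison, vdsm can also be asked directly about one of the block domains that is missing from the list above (a sketch only; getStorageDomainInfo is assumed to be the matching vdsClient verb in vdsm 4.16):

# Hypothetical check: query vdsm for the FC master domain that
# getStorageDomainsList no longer reports.
vdsClient -s 0 getStorageDomainInfo 29f9b165-3674-4384-a1d4-7aa87d923d56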
engine=# select id,storage,storage_name,storage_domain_type from storage_domain_static ;
                  id                  |                storage                 |      storage_name      | storage_domain_type
--------------------------------------+----------------------------------------+------------------------+---------------------
 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ceab03af-7220-4d42-8f5c-9b557f5d29af   | ovirt-image-repository |                   4
 1a0d3e5a-d2ad-4829-8ebd-ad3ff5463062 | 6564a0b2-2f92-48de-b986-e92de7e28885   | ISO                    |                   2
 c434ab5a-9d21-42eb-ba1b-dbd716ba3ed1 | bb54b2b8-00a2-4b84-a886-d76dd70c3cb0   | Export                 |                   3
 22cf06d1-faca-4e17-ac78-d38b7fc300b1 | e43eRZ-HACv-YscJ-KNZh-HVwe-tAd2-0oGNHh | Hinken                 |                   1  <---- 'GONE'
 29f9b165-3674-4384-a1d4-7aa87d923d56 | aAoOcw-d9YB-y9gP-Tp4M-S0UE-Aqpx-y6Z2Uk | Master                 |                   1  <---- 'GONE'
 a8fd9df0-48f2-40a2-88d4-7bf47fef9b07 | 0299ca61-d68e-4282-b6c3-f6e14aef2688   | NFS-DATA               |                   0
When manually trying to activate one of the above domains, the following is written to engine.log:
2015-03-23 16:37:27,193 INFO [org.ovirt.engine.core.bll.storage.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.pool-8-thread-42) [5f2bcbf9] Running command: SyncLunsInfoForBlockStorageDomainCommand internal: true. Entities affected : ID: 29f9b165-3674-4384-a1d4-7aa87d923d56 Type: Storage
2015-03-23 16:37:27,202 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-42) [5f2bcbf9] START, GetVGInfoVDSCommand(HostName = patty.elemementary.se, HostId = 38792a69-76f3-46d8-8620-9d4b9a5ec21f, VGID=aAoOcw-d9YB-y9gP-Tp4M-S0UE-Aqpx-y6Z2Uk), log id: 6e6f6792
2015-03-23 16:37:27,404 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-28) [3258de6d] Failed in GetVGInfoVDS method
2015-03-23 16:37:27,404 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-28) [3258de6d] Command org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand return value
OneVGReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=506, mMessage=Volume Group does not exist: (u'vg_uuid: aAoOcw-d9YB-y9gP-Tp4M-S0UE-Aqpx-y6Z2Uk',)]]
2015-03-23 16:37:27,406 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-28) [3258de6d] HostName = patty.elemementary.se
2015-03-23 16:37:27,407 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-28) [3258de6d] Command GetVGInfoVDSCommand(HostName = patty.elemementary.se, HostId = 38792a69-76f3-46d8-8620-9d4b9a5ec21f, VGID=aAoOcw-d9YB-y9gP-Tp4M-S0UE-Aqpx-y6Z2Uk) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetVGInfoVDS, error = Volume Group does not exist: (u'vg_uuid: aAoOcw-d9YB-y9gP-Tp4M-S0UE-Aqpx-y6Z2Uk',), code = 506
2015-03-23 16:37:27,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-28) [3258de6d] FINISH, GetVGInfoVDSCommand, log id: 2edb7c0d
2015-03-23 16:37:27,410 ERROR [org.ovirt.engine.core.bll.storage.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.pool-8-thread-28) [3258de6d] Command org.ovirt.engine.core.bll.storage.SyncLunsInfoForBlockStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetVGInfoVDS, error = Volume Group does not exist: (u'vg_uuid: aAoOcw-d9YB-y9gP-Tp4M-S0UE-Aqpx-y6Z2Uk',), code = 506 (Failed with error VolumeGroupDoesNotExist and code 506)
2015-03-23 16:37:27,413 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-8-thread-28) [3258de6d] START, ActivateStorageDomainVDSCommand( storagePoolId = 8c3c5df9-e8ff-4313-99c9-385b6c7d896b, ignoreFailoverLimit = false, storageDomainId = 29f9b165-3674-4384-a1d4-7aa87d923d56), log id: 795253ee
2015-03-23 16:37:27,482 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-42) [5f2bcbf9] Failed in GetVGInfoVDS method
2015-03-23 16:37:27,482 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-42) [5f2bcbf9] Command org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand return value
OneVGReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=506, mMessage=Volume Group does not exist: (u'vg_uuid: aAoOcw-d9YB-y9gP-Tp4M-S0UE-Aqpx-y6Z2Uk',)]]
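The failing call above is GetVGInfoVDS, i.e. the engine asking vdsm for the VG by its UUID. The same lookup can presumably be reproduced directly on the host (a sketch only; getVGInfo is assumed to be the corresponding vdsClient verb):

# Hypothetical reproduction of the engine's GetVGInfoVDS call from the host;
# a "Volume Group does not exist" (code 506) here would show the problem is
# in vdsm's view of the devices rather than in the engine itself.
vdsClient -s 0 getVGInfo aAoOcw-d9YB-y9gP-Tp4M-S0UE-Aqpx-y6Z2Uk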
Could someone (pretty please with sugar on top) point me in the right direction?
Brgds Jonas
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users