Hello,
f18 server with oVirt engine
ovirt-engine-3.2.0-1.20130113.gitc954518.fc18.noarch
and f18 host with
vdsm-4.10.3-0.78.gitb005b54.fc18.x86_64
The DC is configured with FCP as default.
Trying to add a LUN, I get:
Error while executing action New SAN Storage Domain: Error creating a
storage domain
I notice that it creates the PV and VG:
pvs:
/dev/mapper/3600507630efe05800000000000001601 c6bb44ee-b824-44a0-a62c-f537a23d2e2b lvm2 a-- 99.62g 99.62g
vgs:
VG #PV #LV #SN Attr VSize VFree
c6bb44ee-b824-44a0-a62c-f537a23d2e2b 1 0 0 wz--n- 99.62g 99.62g
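As a side note, to see how far the failed create actually got on that VG, something like this should also show any storage-domain tags vdsm wrote (the VG name is the one from the vgs output above):
vgs -o vg_name,vg_attr,vg_tags c6bb44ee-b824-44a0-a62c-f537a23d2e2b
Judging from the RHAT_storage_domain_UNREADY tag visible in the messages output further down, the VG seems to have been created and tagged before the failure.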
In vdsm.log:
Thread-26641::DEBUG::2013-01-16
00:34:15,073::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm reload
operation' released the operation mutex
Thread-26641::WARNING::2013-01-16
00:34:15,073::lvm::73::Storage.LVM::(__getattr__)
/dev/mapper/3600507630efe05800000000000001601 can't be reloaded, please
check your storage connections.
Thread-26641::ERROR::2013-01-16
00:34:15,073::task::833::TaskManager.Task::(_setError)
Task=`1dea04e7-56e1-49c3-a702-efa676ef1e7e`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 840, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 2424, in createStorageDomain
domVersion)
File "/usr/share/vdsm/storage/blockSD.py", line 505, in create
numOfPVs = len(lvm.listPVNames(vgName))
File "/usr/share/vdsm/storage/lvm.py", line 1257, in listPVNames
return [pv.name for pv in pvs if pv.vg_name == vgName]
File "/usr/share/vdsm/storage/lvm.py", line 74, in __getattr__
raise AttributeError("Failed reload: %s" % self.name)
AttributeError: Failed reload: /dev/mapper/3600507630efe05800000000000001601
Thread-26641::DEBUG::2013-01-16
00:34:15,074::task::852::TaskManager.Task::(_run)
Task=`1dea04e7-56e1-49c3-a702-efa676ef1e7e`::Task._run:
1dea04e7-56e1-49c3-a702-efa676ef1e7e (2,
'c6bb44ee-b824-44a0-a62c-f537a23d2e2b',
'3600507630efe05800000000000001601',
'x3XSZx-avUC-0NNI-w5K4-nCOp-uxp5-TD9GvH', 1, '3') {} failed - stopping
task
Thread-26641::DEBUG::2013-01-16
00:34:15,074::task::1177::TaskManager.Task::(stop)
Task=`1dea04e7-56e1-49c3-a702-efa676ef1e7e`::stopping in state preparing
(force False)
Thread-26641::DEBUG::2013-01-16
00:34:15,074::task::957::TaskManager.Task::(_decref)
Task=`1dea04e7-56e1-49c3-a702-efa676ef1e7e`::ref 1 aborting True
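For reference, the reload that blows up there can probably be reproduced by hand against the same device with something like:
pvs --noheadings -o pv_name,vg_name,pv_size,pv_free /dev/mapper/3600507630efe05800000000000001601
presumably with the same 'Skipping clustered volume group ...' warnings and the exit code 5 that shows up as 'lvm pvs failed: 5' in the messages output below.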
In /var/log/messages:
Jan 16 00:34:14 f18ovn03 vdsm Storage.LVM WARNING lvm vgs failed: 5 ['
x3XSZx-avUC-0NNI-w5K4-nCOp-uxp5-TD9GvH|c6bb44ee-b824-44a0-a62c-f537a23d2e2b|wz--n-|106971529216|106971529216|134217728|797|797|RHAT_storage_domain_UNREADY|134217728|67107328']
[' Skipping clustered volume group VG_VIRT04', ' Skipping clustered
volume group VG_VIRT02', ' Skipping clustered volume group VG_VIRT03', '
Skipping clustered volume group VG_VIRT01']
Jan 16 00:34:15 f18ovn03 vdsm Storage.LVM WARNING lvm pvs failed: 5 ['
NQRb0Q-3C0k-3RRo-1LZZ-NNy2-42A1-c7zO8e|/dev/mapper/3600507630efe05800000000000001601|106971529216|c6bb44ee-b824-44a0-a62c-f537a23d2e2b|x3XSZx-avUC-0NNI-w5K4-nCOp-uxp5-TD9GvH|135266304|797|0|2|107374182400']
[' Skipping clustered volume group VG_VIRT04', ' Skipping volume group
VG_VIRT04', ' Skipping clustered volume group VG_VIRT02', ' Skipping
volume group VG_VIRT02', ' Skipping clustered volume group VG_VIRT03', '
Skipping volume group VG_VIRT03', ' Skipping clustered volume group
VG_VIRT03', ' Skipping volume group VG_VIRT03', ' Skipping clustered
volume group VG_VIRT01', ' Skipping volume group VG_VIRT01', ' Skipping
clustered volume group VG_VIRT01', ' Skipping volume group VG_VIRT01']
Jan 16 00:34:15 f18ovn03 vdsm Storage.LVM WARNING
/dev/mapper/3600507630efe05800000000000001601 can't be reloaded, please
check your storage connections.
Jan 16 00:34:15 f18ovn03 vdsm TaskManager.Task ERROR
Task=`1dea04e7-56e1-49c3-a702-efa676ef1e7e`::Unexpected error
Jan 16 00:34:15 f18ovn03 vdsm Storage.Dispatcher.Protect ERROR Failed
reload: /dev/mapper/3600507630efe05800000000000001601
Could it be that the clustered VGs on the other LUNs, the ones being
skipped, are the cause?
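One way to check which of those VGs actually carry the clustered flag should be something like:
vgs -o vg_name,vg_attr
where a 'c' in the sixth attribute character marks a clustered VG; those are the ones the non-clustered lvm calls issued by vdsm refuse to touch ('Skipping clustered volume group ...').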
BTW: tomorrow I should have a SAN guy able to mask them ...
Gianluca
On Wed, Jan 16, 2013 at 1:30 AM, Gianluca Cecchi wrote:
> Could it be that the clustered VGs on the other LUNs, the ones being
> skipped, are the cause?
> BTW: tomorrow I should have a SAN guy able to mask them ...
It seems it was a problem related to the multipath layer, and I could
create the SD without masking the clustered LUNs.
In fact, once I blacklisted the disks (PVs) that make up the interfering
clustered VGs, I was able to add the storage domain.
I added this to multipath.conf on my node:
blacklist {
    wwid 3600507630efe0b0c0000000000001183
    wwid 3600507630efe0b0c0000000000001082
    wwid 3600507630efe0b0c0000000000001182
    wwid 3600507630efe0b0c0000000000001081
    wwid 3600507630efe0b0c0000000000001181
    wwid 3600507630efe0b0c0000000000001080
    wwid 3600507630efe0b0c0000000000001180
}
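To make the new blacklist effective, something along these lines should be enough (exact commands may vary with the device-mapper-multipath version):
multipathd -k"reconfigure"
multipath -F
multipath -ll
that is, reload the multipath configuration, flush the now unused maps, and check that the blacklisted wwids no longer appear in the topology.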
In case you think it should have succeeded even without the multipath
blacklisting, you can compare the earlier vdsm.log with errors that I
already sent against the one generated just now during the successful
creation, here:
https://docs.google.com/file/d/0BwoPbcrMv8mvS0NKd2FSWl9Yb0k/edit
Gianluca