<div dir="ltr"><div>Hi guys,<br><br></div><div>I'm trying to bring up one of my storage domains, which has experienced a failure. Unfortunately, I'm hitting a very nasty error ("Storage domain does not exist").<br><br></div><div>Could someone tell me how to try to restore this domain?<br><br></div><div>P.S.<br></div><div>It's oVirt 3.4.2-1.el6<br></div><br>******************************<div>****************************************************<br><br>/var/log/messages:<br>Jan 26 12:48:49 node002 vdsm TaskManager.Task ERROR Task=`10d02993-b585-448f-9a50-bd3e8cda7082`::Unexpected error#012Traceback (most recent call last):#012 File "/usr/share/vdsm/storage/task.py",
line 873, in _run#012 return fn(*args, **kargs)#012 File
"/usr/share/vdsm/logUtils.py", line 45, in wrapper#012 res = f(*args,
**kwargs)#012 File "/usr/share/vdsm/storage/hsm.py", line 2959, in getVGInfo#012 return dict(info=self.__getVGsInfo([vgUUID])[0])#012 File "/usr/share/vdsm/storage/hsm.py",
line 2892, in __getVGsInfo#012 vgList = [lvm.getVGbyUUID(vgUUID) for
vgUUID in vgUUIDs]#012 File "/usr/share/vdsm/storage/lvm.py", line 894, in getVGbyUUID#012 raise se.VolumeGroupDoesNotExist("vg_uuid: %s" % vgUUID)#012VolumeGroupDoesNotExist: Volume Group does not exist: ('vg_uuid: gyaCWf-6VKi-lI9W-JT6H-IZdy-rIsB-hTvZ4O',)<br>Jan 26 12:48:49 node002 kernel: device-mapper: table: 253:26: multipath: error getting device<br>Jan 26 12:48:49 node002 kernel: device-mapper: ioctl: error adding target to table<br><br>**********************************************************************************<br><br>/var/log/vdsm.log:<br>Thread-22::ERROR::2015-01-26 12:43:03,376::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain db52e9cb-7306-43fd-aff3-20831bc2bcaf<br>Thread-22::ERROR::2015-01-26 12:43:03,377::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain db52e9cb-7306-43fd-aff3-20831bc2bcaf<br>Thread-22::DEBUG::2015-01-26 12:43:03,377::lvm::373::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex<br>Thread-22::DEBUG::2015-01-26 12:43:03,378::lvm::296::Storage.Misc.excCmd::(cmd)
u'/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
\'a|/dev/mapper/mpathb|/dev/mapper/mpathc|/dev/mapper/mpathd|/dev/mapper/mpathe|/dev/mapper/mpathf|\',
\'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50
retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name db52e9cb-7306-43fd-aff3-20831bc2bcaf' (cwd None)<br>Thread-22::DEBUG::2015-01-26 12:43:03,462::lvm::296::Storage.Misc.excCmd::(cmd)
FAILED: <err> = ' /dev/mapper/mpathc: Checksum error\n
/dev/mapper/mpathc: Checksum error\n Volume group
"db52e9cb-7306-43fd-aff3-20831bc2bcaf" not found\n Skipping volume group db52e9cb-7306-43fd-aff3-20831bc2bcaf\n'; <rc> = 5<br>Thread-22::WARNING::2015-01-26 12:43:03,466::lvm::378::Storage.LVM::(_reloadvgs)
lvm vgs failed: 5 [] [' /dev/mapper/mpathc: Checksum error', '
/dev/mapper/mpathc: Checksum error', ' Volume group
"db52e9cb-7306-43fd-aff3-20831bc2bcaf" not found', ' Skipping volume group db52e9cb-7306-43fd-aff3-20831bc2bcaf']<br>Thread-22::DEBUG::2015-01-26 12:43:03,466::lvm::415::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex<br>Thread-22::ERROR::2015-01-26 12:43:03,477::sdc::143::Storage.StorageDomainCache::(_findDomain) domain db52e9cb-7306-43fd-aff3-20831bc2bcaf not found<br>Traceback (most recent call last):<br> File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain<br> dom = findMethod(sdUUID)<br> File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain<br> raise se.StorageDomainDoesNotExist(sdUUID)<br>StorageDomainDoesNotExist: Storage domain does not exist: (u'db52e9cb-7306-43fd-aff3-20831bc2bcaf',)<br>Thread-22::ERROR::2015-01-26 12:43:03,478::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain db52e9cb-7306-43fd-aff3-20831bc2bcaf monitoring information<br>Traceback (most recent call last):<br> File "/usr/share/vdsm/storage/domainMonitor.py", line 204, in _monitorDomain<br> self.domain = sdCache.produce(self.sdUUID)<br> File "/usr/share/vdsm/storage/sdc.py", line 98, in produce<br> domain.getRealDomain()<br> File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain<br> return self._cache._realProduce(self._sdUUID)<br> File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce<br> domain = self._findDomain(sdUUID)<br> File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain<br> dom = findMethod(sdUUID)<br> File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain<br> raise se.StorageDomainDoesNotExist(sdUUID)<br>StorageDomainDoesNotExist: Storage domain does not exist: (u'db52e9cb-7306-43fd-aff3-20831bc2bcaf',)<br>Thread-13::DEBUG::2015-01-26 12:43:05,102::task::595::TaskManager.Task::(_updateState) Task=`b4e85e37-b216-4d29-a448-0711e370a246`::moving from state init -> state preparing<br>Thread-13::INFO::2015-01-26 
12:43:05,102::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)<br>Thread-13::INFO::2015-01-26 12:43:05,103::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'7969d636-1a02-42ba-a50b-2528765cf3d5':
{'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000457574',
'lastCheck': '7.5', 'valid': True}, u'5e1ca1b6-4706-4c79-8924-b8db741c929f':
{'code': 0, 'version': 3, 'acquired': True, 'delay': '0.00100094',
'lastCheck': '6.3', 'valid': True}, u'cb85e6cd-df54-4151-8f3b-7e6d72b7372d':
{'code': 0, 'version': 3, 'acquired': True, 'delay': '0.463061',
'lastCheck': '4.9', 'valid': True}, u'db52e9cb-7306-43fd-aff3-20831bc2bcaf':
{'code': 358, 'version': -1, 'acquired': False, 'delay': '0',
'lastCheck': '1.6', 'valid': False}, u'5f595801-aaa5-42c7-b829-7a34a636407e':
{'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000942979',
'lastCheck': '7.9', 'valid': True}, u'c1ebd0f8-fa32-4fe3-8569-fb7d4ad8faf4': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000424499', 'lastCheck': '7.3', 'valid': True}}<br>Thread-13::DEBUG::2015-01-26 12:43:05,103::task::1185::TaskManager.Task::(prepare) Task=`b4e85e37-b216-4d29-a448-0711e370a246`::finished: {u'7969d636-1a02-42ba-a50b-2528765cf3d5':
{'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000457574',
'lastCheck': '7.5', 'valid': True}, u'5e1ca1b6-4706-4c79-8924-b8db741c929f':
{'code': 0, 'version': 3, 'acquired': True, 'delay': '0.00100094',
'lastCheck': '6.3', 'valid': True}, u'cb85e6cd-df54-4151-8f3b-7e6d72b7372d':
{'code': 0, 'version': 3, 'acquired': True, 'delay': '0.463061',
'lastCheck': '4.9', 'valid': True}, u'db52e9cb-7306-43fd-aff3-20831bc2bcaf':
{'code': 358, 'version': -1, 'acquired': False, 'delay': '0',
'lastCheck': '1.6', 'valid': False}, u'5f595801-aaa5-42c7-b829-7a34a636407e':
{'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000942979',
'lastCheck': '7.9', 'valid': True}, u'c1ebd0f8-fa32-4fe3-8569-fb7d4ad8faf4': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000424499', 'lastCheck': '7.3', 'valid': True}}<br><br>**********************************************************************************<br><br>[root@node002 shim]# multipath -ll<br>mpathe (1NODE_001_LUN01) dm-6 SHIMI,VIRTUAL-DISK<br>size=977G features='0' hwhandler='0' wp=rw<br>`-+- policy='round-robin 0' prio=1 status=active<br> `- 21:0:0:1 sdg 8:96 active ready running<br>mpathd (1NODE_003_LUN01) dm-7 SHIMI,VIRTUAL-DISK<br>size=977G features='0' hwhandler='0' wp=rw<br>`-+- policy='round-robin 0' prio=1 status=active<br> `- 20:0:0:1 sdf 8:80 active ready running<br>mpathc (1NODE_002_LUN01) dm-4 SHIMI,VIRTUAL-DISK<br>size=977G features='0' hwhandler='0' wp=rw<br>`-+- policy='round-robin 0' prio=1 status=active<br> `- 18:0:0:1 sdd 8:48 active ready running<br>mpathb (1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010) dm-1 ATA,MARVELL Raid VD<br>size=1.8T features='0' hwhandler='0' wp=rw<br>`-+- policy='round-robin 0' prio=1 status=active<br> `- 0:0:0:0 sda 8:0 active ready running<br>mpathf (1MANAGER_LUN01) dm-5 SHIMI,VIRTUAL-DISK<br>size=500G features='0' hwhandler='0' wp=rw<br>`-+- policy='round-robin 0' prio=1 status=active<br> `- 19:0:0:1 sde 8:64 active ready running<br><br>**********************************************************************************<br><br>[root@node002 shim]# lsblk<br>NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT<br>sdb 8:16 0 298.1G 0 disk<br>├─sdb1 8:17 0 1G 0 part /boot<br>├─sdb2 8:18 0 4G 0 part [SWAP]<br>└─sdb3 8:19 0 293.1G 0 part<br> └─vg_node002-LogVol00 (dm-0) 253:0 0 293.1G 0 lvm /<br>sda 8:0 0 1.8T 0 disk<br>└─sda1 8:1 0 1.8T 0 part<br>sdd 8:48 0 976.6G 0 disk<br>└─mpathc (dm-4) 253:4 0 976.6G 0 mpath<br>sde 8:64 0 500G 0 disk<br>└─mpathf (dm-5) 253:5 0 500G 0 mpath<br> ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-metadata (dm-15) 253:15 0 512M 0 lvm<br> ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-ids (dm-16) 
253:16 0 128M 0 lvm<br> ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-leases (dm-18) 253:18 0 2G 0 lvm<br> ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-outbox (dm-20) 253:20 0 128M 0 lvm<br> ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-inbox (dm-21) 253:21 0 128M 0 lvm<br> └─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-master (dm-22) 253:22 0 1G 0 lvm<br>sdf 8:80 0 976.6G 0 disk<br>└─mpathd (dm-7) 253:7 0 976.6G 0 mpath<br> ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-metadata (dm-14) 253:14 0 512M 0 lvm<br> ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-ids (dm-17) 253:17 0 128M 0 lvm<br> ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-leases (dm-19) 253:19 0 2G 0 lvm<br> ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-outbox (dm-23) 253:23 0 128M 0 lvm<br> ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-inbox (dm-24) 253:24 0 128M 0 lvm<br> └─5e1ca1b6--4706--4c79--8924--b8db741c929f-master (dm-25) 253:25 0 1G 0 lvm<br>sdg 8:96 0 976.6G 0 disk<br>└─mpathe (dm-6) 253:6 0 976.6G 0 mpath<br> ├─5f595801--aaa5--42c7--b829--7a34a636407e-metadata (dm-8) 253:8 0 512M 0 lvm<br> ├─5f595801--aaa5--42c7--b829--7a34a636407e-ids (dm-9) 253:9 0 128M 0 lvm<br> ├─5f595801--aaa5--42c7--b829--7a34a636407e-leases (dm-10) 253:10 0 2G 0 lvm<br> ├─5f595801--aaa5--42c7--b829--7a34a636407e-outbox (dm-11) 253:11 0 128M 0 lvm<br> ├─5f595801--aaa5--42c7--b829--7a34a636407e-inbox (dm-12) 253:12 0 128M 0 lvm<br> └─5f595801--aaa5--42c7--b829--7a34a636407e-master (dm-13) 253:13 0 1G 0 lvm<br><br>**********************************************************************************<br><br>[root@node002 shim]# multipath -v3<br>Jan 26 12:46:28 | ram0: device node name blacklisted<br>Jan 26 12:46:28 | ram1: device node name blacklisted<br>Jan 26 12:46:28 | ram2: device node name blacklisted<br>Jan 26 12:46:28 | ram3: device node name blacklisted<br>Jan 26 12:46:28 | ram4: device node name blacklisted<br>Jan 26 12:46:28 | ram5: device node name blacklisted<br>Jan 26 12:46:28 | ram6: device node name blacklisted<br>Jan 26 12:46:28 | 
ram7: device node name blacklisted<br>Jan 26 12:46:28 | ram8: device node name blacklisted<br>Jan 26 12:46:28 | ram9: device node name blacklisted<br>Jan 26 12:46:28 | ram10: device node name blacklisted<br>Jan 26 12:46:28 | ram11: device node name blacklisted<br>Jan 26 12:46:28 | ram12: device node name blacklisted<br>Jan 26 12:46:28 | ram13: device node name blacklisted<br>Jan 26 12:46:28 | ram14: device node name blacklisted<br>Jan 26 12:46:28 | ram15: device node name blacklisted<br>Jan 26 12:46:28 | loop0: device node name blacklisted<br>Jan 26 12:46:28 | loop1: device node name blacklisted<br>Jan 26 12:46:28 | loop2: device node name blacklisted<br>Jan 26 12:46:28 | loop3: device node name blacklisted<br>Jan 26 12:46:28 | loop4: device node name blacklisted<br>Jan 26 12:46:28 | loop5: device node name blacklisted<br>Jan 26 12:46:28 | loop6: device node name blacklisted<br>Jan 26 12:46:28 | loop7: device node name blacklisted<br>Jan 26 12:46:28 | sdb: not found in pathvec<br>Jan 26 12:46:28 | sdb: mask = 0x3f<br>Jan 26 12:46:28 | sdb: dev_t = 8:16<br>Jan 26 12:46:28 | sdb: size = 625142448<br>Jan 26 12:46:28 | sdb: subsystem = scsi<br>Jan 26 12:46:28 | sdb: vendor = ATA<br>Jan 26 12:46:28 | sdb: product = WDC WD3200AAJS-6<br>Jan 26 12:46:28 | sdb: rev = 03.0<br>Jan 26 12:46:28 | sdb: h:b:t:l = 10:0:0:0<br>Jan 26 12:46:28 | sdb: serial = WD-WMAV2HM46197<br>Jan 26 12:46:28 | sdb: get_state<br>Jan 26 12:46:28 | sdb: path checker = directio (config file default)<br>Jan 26 12:46:28 | sdb: checker timeout = 30000 ms (sysfs setting)<br>Jan 26 12:46:28 | sdb: state = running<br>Jan 26 12:46:28 | directio: starting new request<br>Jan 26 12:46:28 | directio: io finished 4096/0<br>Jan 26 12:46:28 | sdb: state = 3<br>Jan 26 12:46:28 | sdb: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)<br>Jan 26 12:46:28 | sdb: uid = 1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197 
(callout)<br>Jan 26 12:46:28 | sdb: state = running<br>Jan 26 12:46:28 | sdb: detect_prio = 1 (config file default)<br>Jan 26 12:46:28 | sdb: prio = const (config file default)<br>Jan 26 12:46:28 | sdb: const prio = 1<br>Jan 26 12:46:28 | sda: not found in pathvec<br>Jan 26 12:46:28 | sda: mask = 0x3f<br>Jan 26 12:46:28 | sda: dev_t = 8:0<br>Jan 26 12:46:28 | sda: size = 3904897024<br>Jan 26 12:46:28 | sda: subsystem = scsi<br>Jan 26 12:46:28 | sda: vendor = ATA<br>Jan 26 12:46:28 | sda: product = MARVELL Raid VD<br>Jan 26 12:46:28 | sda: rev = MV.R<br>Jan 26 12:46:28 | sda: h:b:t:l = 0:0:0:0<br>Jan 26 12:46:28 | sda: serial = 1c3c8ecf5cf00010<br>Jan 26 12:46:28 | sda: get_state<br>Jan 26 12:46:28 | sda: path checker = directio (config file default)<br>Jan 26 12:46:28 | sda: checker timeout = 30000 ms (sysfs setting)<br>Jan 26 12:46:28 | sda: state = running<br>Jan 26 12:46:28 | directio: starting new request<br>Jan 26 12:46:28 | directio: io finished 4096/0<br>Jan 26 12:46:28 | sda: state = 3<br>Jan 26 12:46:28 | sda: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)<br>Jan 26 12:46:28 | sda: uid = 1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010 (callout)<br>Jan 26 12:46:28 | sda: state = running<br>Jan 26 12:46:28 | sda: detect_prio = 1 (config file default)<br>Jan 26 12:46:28 | sda: prio = const (config file default)<br>Jan 26 12:46:28 | sda: const prio = 1<br>Jan 26 12:46:28 | dm-0: device node name blacklisted<br>Jan 26 12:46:28 | sdc: not found in pathvec<br>Jan 26 12:46:28 | sdc: mask = 0x3f<br>Jan 26 12:46:28 | sdc: dev_t = 8:32<br>Jan 26 12:46:28 | sdc: size = 0<br>Jan 26 12:46:28 | sdc: subsystem = scsi<br>Jan 26 12:46:28 | sdc: vendor = Multi<br>Jan 26 12:46:28 | sdc: product = Flash Reader<br>Jan 26 12:46:28 | sdc: rev = 1.00<br>Jan 26 12:46:28 | sdc: h:b:t:l = 12:0:0:0<br>Jan 26 12:46:28 | dm-1: device node name blacklisted<br>Jan 26 12:46:28 | dm-2: device node name blacklisted<br>Jan 26 12:46:28 | dm-3: device 
node name blacklisted<br>Jan 26 12:46:28 | sdd: not found in pathvec<br>Jan 26 12:46:28 | sdd: mask = 0x3f<br>Jan 26 12:46:28 | sdd: dev_t = 8:48<br>Jan 26 12:46:28 | sdd: size = 2048000000<br>Jan 26 12:46:28 | sdd: subsystem = scsi<br>Jan 26 12:46:28 | sdd: vendor = SHIMI<br>Jan 26 12:46:28 | sdd: product = VIRTUAL-DISK<br>Jan 26 12:46:28 | sdd: rev = 0001<br>Jan 26 12:46:28 | sdd: h:b:t:l = 18:0:0:1<br>Jan 26 12:46:28 | sdd: tgt_node_name = pl.mycomp.shimi:node002.target0<br>Jan 26 12:46:28 | sdd: serial = beaf11<br>Jan 26 12:46:28 | sdd: get_state<br>Jan 26 12:46:28 | sdd: path checker = directio (config file default)<br>Jan 26 12:46:28 | sdd: checker timeout = 30000 ms (sysfs setting)<br>Jan 26 12:46:28 | sdd: state = running<br>Jan 26 12:46:28 | directio: starting new request<br>Jan 26 12:46:28 | directio: io finished 4096/0<br>Jan 26 12:46:28 | sdd: state = 3<br>Jan 26 12:46:28 | sdd: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)<br>Jan 26 12:46:28 | sdd: uid = 1NODE_002_LUN01 (callout)<br>Jan 26 12:46:28 | sdd: state = running<br>Jan 26 12:46:28 | sdd: detect_prio = 1 (config file default)<br>Jan 26 12:46:28 | sdd: prio = const (config file default)<br>Jan 26 12:46:28 | sdd: const prio = 1<br>Jan 26 12:46:28 | dm-4: device node name blacklisted<br>Jan 26 12:46:28 | sde: not found in pathvec<br>Jan 26 12:46:28 | sde: mask = 0x3f<br>Jan 26 12:46:28 | sde: dev_t = 8:64<br>Jan 26 12:46:28 | sde: size = 1048576000<br>Jan 26 12:46:28 | sde: subsystem = scsi<br>Jan 26 12:46:28 | sde: vendor = SHIMI<br>Jan 26 12:46:28 | sde: product = VIRTUAL-DISK<br>Jan 26 12:46:28 | sde: rev = 0001<br>Jan 26 12:46:28 | sde: h:b:t:l = 19:0:0:1<br>Jan 26 12:46:28 | sde: tgt_node_name = pl.mycomp.shimi:manager.target0<br>Jan 26 12:46:28 | sde: serial = beaf11<br>Jan 26 12:46:28 | sde: get_state<br>Jan 26 12:46:28 | sde: path checker = directio (config file default)<br>Jan 26 12:46:28 | sde: checker timeout = 30000 ms (sysfs 
setting)<br>Jan 26 12:46:28 | sde: state = running<br>Jan 26 12:46:28 | directio: starting new request<br>Jan 26 12:46:28 | directio: io finished 4096/0<br>Jan 26 12:46:28 | sde: state = 3<br>Jan 26 12:46:28 | sde: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)<br>Jan 26 12:46:28 | sde: uid = 1MANAGER_LUN01 (callout)<br>Jan 26 12:46:28 | sde: state = running<br>Jan 26 12:46:28 | sde: detect_prio = 1 (config file default)<br>Jan 26 12:46:28 | sde: prio = const (config file default)<br>Jan 26 12:46:28 | sde: const prio = 1<br>Jan 26 12:46:28 | sdf: not found in pathvec<br>Jan 26 12:46:28 | sdf: mask = 0x3f<br>Jan 26 12:46:28 | sdf: dev_t = 8:80<br>Jan 26 12:46:28 | sdf: size = 2048000000<br>Jan 26 12:46:28 | sdf: subsystem = scsi<br>Jan 26 12:46:28 | sdf: vendor = SHIMI<br>Jan 26 12:46:28 | sdf: product = VIRTUAL-DISK<br>Jan 26 12:46:28 | sdf: rev = 0001<br>Jan 26 12:46:28 | sdf: h:b:t:l = 20:0:0:1<br>Jan 26 12:46:28 | sdf: tgt_node_name = pl.mycomp.shimi:node003.target0<br>Jan 26 12:46:28 | sdf: serial = beaf11<br>Jan 26 12:46:28 | sdf: get_state<br>Jan 26 12:46:28 | sdf: path checker = directio (config file default)<br>Jan 26 12:46:28 | sdf: checker timeout = 30000 ms (sysfs setting)<br>Jan 26 12:46:28 | sdf: state = running<br>Jan 26 12:46:28 | directio: starting new request<br>Jan 26 12:46:28 | directio: io finished 4096/0<br>Jan 26 12:46:28 | sdf: state = 3<br>Jan 26 12:46:28 | sdf: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)<br>Jan 26 12:46:28 | sdf: uid = 1NODE_003_LUN01 (callout)<br>Jan 26 12:46:28 | sdf: state = running<br>Jan 26 12:46:28 | sdf: detect_prio = 1 (config file default)<br>Jan 26 12:46:28 | sdf: prio = const (config file default)<br>Jan 26 12:46:28 | sdf: const prio = 1<br>Jan 26 12:46:28 | sdg: not found in pathvec<br>Jan 26 12:46:28 | sdg: mask = 0x3f<br>Jan 26 12:46:28 | sdg: dev_t = 8:96<br>Jan 26 12:46:28 | sdg: size = 2048000000<br>Jan 26 
12:46:28 | sdg: subsystem = scsi<br>Jan 26 12:46:28 | sdg: vendor = SHIMI<br>Jan 26 12:46:28 | sdg: product = VIRTUAL-DISK<br>Jan 26 12:46:28 | sdg: rev = 0001<br>Jan 26 12:46:28 | sdg: h:b:t:l = 21:0:0:1<br>Jan 26 12:46:28 | sdg: tgt_node_name = pl.mycomp.shimi:node001.target0<br>Jan 26 12:46:28 | sdg: serial = beaf11<br>Jan 26 12:46:28 | sdg: get_state<br>Jan 26 12:46:28 | sdg: path checker = directio (config file default)<br>Jan 26 12:46:28 | sdg: checker timeout = 30000 ms (sysfs setting)<br>Jan 26 12:46:28 | sdg: state = running<br>Jan 26 12:46:28 | directio: starting new request<br>Jan 26 12:46:28 | directio: io finished 4096/0<br>Jan 26 12:46:28 | sdg: state = 3<br>Jan 26 12:46:28 | sdg: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)<br>Jan 26 12:46:28 | sdg: uid = 1NODE_001_LUN01 (callout)<br>Jan 26 12:46:28 | sdg: state = running<br>Jan 26 12:46:28 | sdg: detect_prio = 1 (config file default)<br>Jan 26 12:46:28 | sdg: prio = const (config file default)<br>Jan 26 12:46:28 | sdg: const prio = 1<br>Jan 26 12:46:28 | dm-5: device node name blacklisted<br>Jan 26 12:46:28 | dm-6: device node name blacklisted<br>Jan 26 12:46:28 | dm-7: device node name blacklisted<br>Jan 26 12:46:28 | dm-8: device node name blacklisted<br>Jan 26 12:46:28 | dm-9: device node name blacklisted<br>Jan 26 12:46:28 | dm-10: device node name blacklisted<br>Jan 26 12:46:28 | dm-11: device node name blacklisted<br>Jan 26 12:46:28 | dm-12: device node name blacklisted<br>Jan 26 12:46:28 | dm-13: device node name blacklisted<br>Jan 26 12:46:28 | dm-14: device node name blacklisted<br>Jan 26 12:46:28 | dm-15: device node name blacklisted<br>Jan 26 12:46:28 | dm-16: device node name blacklisted<br>Jan 26 12:46:28 | dm-17: device node name blacklisted<br>Jan 26 12:46:28 | dm-18: device node name blacklisted<br>Jan 26 12:46:28 | dm-19: device node name blacklisted<br>Jan 26 12:46:28 | dm-20: device node name blacklisted<br>Jan 26 12:46:28 | 
dm-21: device node name blacklisted<br>Jan 26 12:46:28 | dm-22: device node name blacklisted<br>Jan 26 12:46:28 | dm-23: device node name blacklisted<br>Jan 26 12:46:28 | dm-24: device node name blacklisted<br>Jan 26 12:46:28 | dm-25: device node name blacklisted<br>===== paths list =====<br>uuid hcil dev dev_t pri dm_st chk_st<br>1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197 10:0:0:0 sdb 8:16 1 undef ready<br>1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010 0:0:0:0 sda 8:0 1 undef ready<br> 12:0:0:0 sdc 8:32 -1 undef faulty<br>1NODE_002_LUN01 18:0:0:1 sdd 8:48 1 undef ready<br>1MANAGER_LUN01 19:0:0:1 sde 8:64 1 undef ready<br>1NODE_003_LUN01 20:0:0:1 sdf 8:80 1 undef ready<br>1NODE_001_LUN01 21:0:0:1 sdg 8:96 1 undef ready<br>Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:96 1<br>Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:96 A 0<br>Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:80 1<br>Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:80 A 0<br>Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:48 1<br>Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:48 A 0<br>Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:0 1<br>Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:0 A 0<br>Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:64 1<br>Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:64 A 0<br>Jan 26 12:46:28 | Found matching wwid [1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197] in bindings file. 
Setting alias to mpatha<br>Jan 26 12:46:28 | sdb: ownership set to mpatha<br>Jan 26 12:46:28 | sdb: not found in pathvec<br>Jan 26 12:46:28 | sdb: mask = 0xc<br>Jan 26 12:46:28 | sdb: get_state<br>Jan 26 12:46:28 | sdb: state = running<br>Jan 26 12:46:28 | directio: starting new request<br>Jan 26 12:46:28 | directio: io finished 4096/0<br>Jan 26 12:46:28 | sdb: state = 3<br>Jan 26 12:46:28 | sdb: state = running<br>Jan 26 12:46:28 | sdb: const prio = 1<br>Jan 26 12:46:28 | mpatha: pgfailover = -1 (internal default)<br>Jan 26 12:46:28 | mpatha: pgpolicy = failover (internal default)<br>Jan 26 12:46:28 | mpatha: selector = round-robin 0 (internal default)<br>Jan 26 12:46:28 | mpatha: features = 0 (internal default)<br>Jan 26 12:46:28 | mpatha: hwhandler = 0 (internal default)<br>Jan 26 12:46:28 | mpatha: rr_weight = 1 (internal default)<br>Jan 26 12:46:28 | mpatha: minio = 1 rq (config file default)<br>Jan 26 12:46:28 | mpatha: no_path_retry = -1 (config file default)<br>Jan 26 12:46:28 | pg_timeout = NONE (internal default)<br>Jan 26 12:46:28 | mpatha: fast_io_fail_tmo = 5 (config file default)<br>Jan 26 12:46:28 | mpatha: dev_loss_tmo = 30 (config file default)<br>Jan 26 12:46:28 | mpatha: retain_attached_hw_handler = 1 (config file default)<br>Jan 26 12:46:28 | failed to find rport_id for target10:0:0<br>Jan 26 12:46:28 | mpatha: set ACT_CREATE (map does not exist)<br>Jan 26 12:46:28 | mpatha: domap (0) failure for create/reload map<br>Jan 26 12:46:28 | mpatha: ignoring map<br><br>**********************************************************************************<br><br>[root@node002 shim]# iscsiadm -m session -o show<br>tcp: [6] 192.168.1.12:3260,1 pl.mycomp.shimi:node002.target0<br>tcp: [7] 192.168.1.11:3260,1 pl.mycomp.shimi:manager.target0<br>tcp: [8] 192.168.1.14:3260,1 
pl.mycomp.shimi:node003.target0<br>tcp: [9] 192.168.1.13:3260,1 pl.mycomp.shimi:node001.target0<br><br>**********************************************************************************<br><br>[root@node002 shim]# iptables -L<br>Chain INPUT (policy ACCEPT)<br>target prot opt source destination<br><br>Chain FORWARD (policy ACCEPT)<br>target prot opt source destination<br><br>Chain OUTPUT (policy ACCEPT)<br>target prot opt source destination<br><br>**********************************************************************************<br><br>[root@node002 shim]# sestatus<br>SELinux status: disabled<br><br></div></div>
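Editor's note: the "Checksum error" on /dev/mapper/mpathc in the vdsm.log excerpt suggests corrupted LVM metadata on that PV, which is why LVM no longer reports the VG and VDSM raises StorageDomainDoesNotExist. A minimal, read-only diagnostic sketch (an assumption, not a confirmed fix — it assumes root on the host, standard EL6 LVM2, and takes the UUIDs from the logs above):

```shell
# Diagnostic sketch only -- read-only checks; UUIDs copied from the logs above.
VG_UUID="gyaCWf-6VKi-lI9W-JT6H-IZdy-rIsB-hTvZ4O"
SD_UUID="db52e9cb-7306-43fd-aff3-20831bc2bcaf"

# 1. Is the VG still visible to LVM at all?
pvs -o pv_name,vg_name,vg_uuid 2>/dev/null | grep "$VG_UUID" \
    || echo "VG $VG_UUID not visible to LVM"

# 2. LVM keeps metadata backups under /etc/lvm/backup and /etc/lvm/archive;
#    list any restorable copies for this VG (the VG name is the domain UUID):
vgcfgrestore --list "$SD_UUID" 2>/dev/null \
    || echo "no on-host metadata backups found for $SD_UUID"

# 3. If a good archived copy exists, a restore would look like (NOT run here;
#    take a dd image of /dev/mapper/mpathc before attempting any write):
#      vgcfgrestore -f /etc/lvm/archive/<file>.vg "$SD_UUID"
```

Because both commands fall back to an `echo`, the sketch is safe to run on a host where the VG is absent; it only reports what LVM can still see.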