On 26/01/15 13:05 +0100, shimano wrote:
Hi guys,
I'm trying to bring up one of my storage domains, which has experienced a failure.
Unfortunately, I'm hitting a very nasty error ("Storage domain does not exist").
Could someone tell me how to restore this domain?
Could you try moving the host to Maintenance mode and then Activating it
again, please? I've encountered situations where vdsm restarts and the
engine does not reconnect storage until an Activate action happens.
Let's see if this is your issue.
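If you'd rather script that than click through the Administration Portal, the
same Maintenance/Activate cycle can be driven through the engine's REST API.
A minimal sketch only -- the engine address, credentials, and <host-id> below
are placeholders, not values from your setup:

    # Look up the host's id first (hypothetical engine URL and password):
    curl -k -u 'admin@internal:password' https://engine.example.com/api/hosts

    # Move the host to Maintenance:
    curl -k -u 'admin@internal:password' -X POST \
         -H 'Content-Type: application/xml' -d '<action/>' \
         https://engine.example.com/api/hosts/<host-id>/deactivate

    # Once it reports maintenance, Activate it again:
    curl -k -u 'admin@internal:password' -X POST \
         -H 'Content-Type: application/xml' -d '<action/>' \
         https://engine.example.com/api/hosts/<host-id>/activate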
P.S.
It's oVirt 3.4.2-1.el6.
**********************************************************************************
/var/log/messages:
Jan 26 12:48:49 node002 vdsm TaskManager.Task ERROR Task=`10d02993-b585-448f-9a50-bd3e8cda7082`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2959, in getVGInfo
    return dict(info=self.__getVGsInfo([vgUUID])[0])
  File "/usr/share/vdsm/storage/hsm.py", line 2892, in __getVGsInfo
    vgList = [lvm.getVGbyUUID(vgUUID) for vgUUID in vgUUIDs]
  File "/usr/share/vdsm/storage/lvm.py", line 894, in getVGbyUUID
    raise se.VolumeGroupDoesNotExist("vg_uuid: %s" % vgUUID)
VolumeGroupDoesNotExist: Volume Group does not exist: ('vg_uuid: gyaCWf-6VKi-lI9W-JT6H-IZdy-rIsB-hTvZ4O',)
Jan 26 12:48:49 node002 kernel: device-mapper: table: 253:26: multipath: error getting device
Jan 26 12:48:49 node002 kernel: device-mapper: ioctl: error adding target to table
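A quick way to cross-check that error outside of vdsm is to ask LVM directly
whether it can see a VG with that UUID. A read-only sketch, using only the
UUID and device name from the logs (nothing here modifies the disks):

    # List every VG with its UUID and backing PVs, then compare against
    # gyaCWf-6VKi-lI9W-JT6H-IZdy-rIsB-hTvZ4O from the traceback:
    vgs -o vg_name,vg_uuid,pv_name

    # Show what PV label / VG association LVM reads from the suspect device:
    pvs -o pv_name,vg_name,pv_uuid /dev/mapper/mpathc

    # Verify the PV's metadata area for consistency:
    pvck /dev/mapper/mpathc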
**********************************************************************************
/var/log/vdsm.log:
Thread-22::ERROR::2015-01-26 12:43:03,376::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain db52e9cb-7306-43fd-aff3-20831bc2bcaf
Thread-22::ERROR::2015-01-26 12:43:03,377::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain db52e9cb-7306-43fd-aff3-20831bc2bcaf
Thread-22::DEBUG::2015-01-26 12:43:03,377::lvm::373::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-22::DEBUG::2015-01-26 12:43:03,378::lvm::296::Storage.Misc.excCmd::(cmd) u'/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ \'a|/dev/mapper/mpathb|/dev/mapper/mpathc|/dev/mapper/mpathd|/dev/mapper/mpathe|/dev/mapper/mpathf|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name db52e9cb-7306-43fd-aff3-20831bc2bcaf' (cwd None)
Thread-22::DEBUG::2015-01-26 12:43:03,462::lvm::296::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' /dev/mapper/mpathc: Checksum error\n /dev/mapper/mpathc: Checksum error\n Volume group "db52e9cb-7306-43fd-aff3-20831bc2bcaf" not found\n Skipping volume group db52e9cb-7306-43fd-aff3-20831bc2bcaf\n'; <rc> = 5
Thread-22::WARNING::2015-01-26 12:43:03,466::lvm::378::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' /dev/mapper/mpathc: Checksum error', ' /dev/mapper/mpathc: Checksum error', ' Volume group "db52e9cb-7306-43fd-aff3-20831bc2bcaf" not found', ' Skipping volume group db52e9cb-7306-43fd-aff3-20831bc2bcaf']
Thread-22::DEBUG::2015-01-26 12:43:03,466::lvm::415::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-22::ERROR::2015-01-26 12:43:03,477::sdc::143::Storage.StorageDomainCache::(_findDomain) domain db52e9cb-7306-43fd-aff3-20831bc2bcaf not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'db52e9cb-7306-43fd-aff3-20831bc2bcaf',)
Thread-22::ERROR::2015-01-26 12:43:03,478::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain db52e9cb-7306-43fd-aff3-20831bc2bcaf monitoring information
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 204, in _monitorDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'db52e9cb-7306-43fd-aff3-20831bc2bcaf',)
Thread-13::DEBUG::2015-01-26 12:43:05,102::task::595::TaskManager.Task::(_updateState) Task=`b4e85e37-b216-4d29-a448-0711e370a246`::moving from state init -> state preparing
Thread-13::INFO::2015-01-26 12:43:05,102::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2015-01-26 12:43:05,103::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response:
{u'7969d636-1a02-42ba-a50b-2528765cf3d5': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000457574', 'lastCheck': '7.5', 'valid': True},
 u'5e1ca1b6-4706-4c79-8924-b8db741c929f': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.00100094', 'lastCheck': '6.3', 'valid': True},
 u'cb85e6cd-df54-4151-8f3b-7e6d72b7372d': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.463061', 'lastCheck': '4.9', 'valid': True},
 u'db52e9cb-7306-43fd-aff3-20831bc2bcaf': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '1.6', 'valid': False},
 u'5f595801-aaa5-42c7-b829-7a34a636407e': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000942979', 'lastCheck': '7.9', 'valid': True},
 u'c1ebd0f8-fa32-4fe3-8569-fb7d4ad8faf4': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000424499', 'lastCheck': '7.3', 'valid': True}}
Thread-13::DEBUG::2015-01-26 12:43:05,103::task::1185::TaskManager.Task::(prepare) Task=`b4e85e37-b216-4d29-a448-0711e370a246`::finished:
{u'7969d636-1a02-42ba-a50b-2528765cf3d5': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000457574', 'lastCheck': '7.5', 'valid': True},
 u'5e1ca1b6-4706-4c79-8924-b8db741c929f': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.00100094', 'lastCheck': '6.3', 'valid': True},
 u'cb85e6cd-df54-4151-8f3b-7e6d72b7372d': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.463061', 'lastCheck': '4.9', 'valid': True},
 u'db52e9cb-7306-43fd-aff3-20831bc2bcaf': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '1.6', 'valid': False},
 u'5f595801-aaa5-42c7-b829-7a34a636407e': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000942979', 'lastCheck': '7.9', 'valid': True},
 u'c1ebd0f8-fa32-4fe3-8569-fb7d4ad8faf4': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000424499', 'lastCheck': '7.3', 'valid': True}}
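The repeated "/dev/mapper/mpathc: Checksum error" lines above point at damaged
LVM metadata on that PV rather than a missing LUN, which would also explain
why the VG (named after the storage domain UUID) is "not found". If the host
still has LVM metadata archives, the usual recovery path looks roughly like
the sketch below; take an image of the device first, and do not run the
restore step until an intact archive is confirmed (the archive file name is
a placeholder):

    # List the metadata archives LVM kept for this VG
    # (the VG name equals the storage domain UUID):
    vgcfgrestore --list db52e9cb-7306-43fd-aff3-20831bc2bcaf

    # The text metadata ring near the start of the PV can also be
    # inspected directly:
    dd if=/dev/mapper/mpathc bs=1M count=1 | strings | less

    # Destructive step: write an intact archive back (pick the file
    # from the --list output above):
    vgcfgrestore -f /etc/lvm/archive/<archive-file>.vg db52e9cb-7306-43fd-aff3-20831bc2bcaf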
**********************************************************************************
[root@node002 shim]# multipath -ll
mpathe (1NODE_001_LUN01) dm-6 SHIMI,VIRTUAL-DISK
size=977G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 21:0:0:1 sdg 8:96 active ready running
mpathd (1NODE_003_LUN01) dm-7 SHIMI,VIRTUAL-DISK
size=977G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 20:0:0:1 sdf 8:80 active ready running
mpathc (1NODE_002_LUN01) dm-4 SHIMI,VIRTUAL-DISK
size=977G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 18:0:0:1 sdd 8:48 active ready running
mpathb (1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010) dm-1 ATA,MARVELL Raid VD
size=1.8T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 0:0:0:0 sda 8:0 active ready running
mpathf (1MANAGER_LUN01) dm-5 SHIMI,VIRTUAL-DISK
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 19:0:0:1 sde 8:64 active ready running
**********************************************************************************
[root@node002 shim]# lsblk
NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdb                                                             8:16   0 298.1G  0 disk
├─sdb1                                                          8:17   0     1G  0 part  /boot
├─sdb2                                                          8:18   0     4G  0 part  [SWAP]
└─sdb3                                                          8:19   0 293.1G  0 part
  └─vg_node002-LogVol00 (dm-0)                                253:0    0 293.1G  0 lvm   /
sda                                                             8:0    0   1.8T  0 disk
└─sda1                                                          8:1    0   1.8T  0 part
sdd                                                             8:48   0 976.6G  0 disk
└─mpathc (dm-4)                                               253:4    0 976.6G  0 mpath
sde                                                             8:64   0   500G  0 disk
└─mpathf (dm-5)                                               253:5    0   500G  0 mpath
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-metadata (dm-15) 253:15   0   512M  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-ids (dm-16)      253:16   0   128M  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-leases (dm-18)   253:18   0     2G  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-outbox (dm-20)   253:20   0   128M  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-inbox (dm-21)    253:21   0   128M  0 lvm
  └─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-master (dm-22)   253:22   0     1G  0 lvm
sdf                                                             8:80   0 976.6G  0 disk
└─mpathd (dm-7)                                               253:7    0 976.6G  0 mpath
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-metadata (dm-14) 253:14   0   512M  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-ids (dm-17)      253:17   0   128M  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-leases (dm-19)   253:19   0     2G  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-outbox (dm-23)   253:23   0   128M  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-inbox (dm-24)    253:24   0   128M  0 lvm
  └─5e1ca1b6--4706--4c79--8924--b8db741c929f-master (dm-25)   253:25   0     1G  0 lvm
sdg                                                             8:96   0 976.6G  0 disk
└─mpathe (dm-6)                                               253:6    0 976.6G  0 mpath
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-metadata (dm-8)  253:8    0   512M  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-ids (dm-9)       253:9    0   128M  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-leases (dm-10)   253:10   0     2G  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-outbox (dm-11)   253:11   0   128M  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-inbox (dm-12)    253:12   0   128M  0 lvm
  └─5f595801--aaa5--42c7--b829--7a34a636407e-master (dm-13)   253:13   0     1G  0 lvm
**********************************************************************************
[root@node002 shim]# multipath -v3
Jan 26 12:46:28 | ram0: device node name blacklisted
Jan 26 12:46:28 | ram1: device node name blacklisted
Jan 26 12:46:28 | ram2: device node name blacklisted
Jan 26 12:46:28 | ram3: device node name blacklisted
Jan 26 12:46:28 | ram4: device node name blacklisted
Jan 26 12:46:28 | ram5: device node name blacklisted
Jan 26 12:46:28 | ram6: device node name blacklisted
Jan 26 12:46:28 | ram7: device node name blacklisted
Jan 26 12:46:28 | ram8: device node name blacklisted
Jan 26 12:46:28 | ram9: device node name blacklisted
Jan 26 12:46:28 | ram10: device node name blacklisted
Jan 26 12:46:28 | ram11: device node name blacklisted
Jan 26 12:46:28 | ram12: device node name blacklisted
Jan 26 12:46:28 | ram13: device node name blacklisted
Jan 26 12:46:28 | ram14: device node name blacklisted
Jan 26 12:46:28 | ram15: device node name blacklisted
Jan 26 12:46:28 | loop0: device node name blacklisted
Jan 26 12:46:28 | loop1: device node name blacklisted
Jan 26 12:46:28 | loop2: device node name blacklisted
Jan 26 12:46:28 | loop3: device node name blacklisted
Jan 26 12:46:28 | loop4: device node name blacklisted
Jan 26 12:46:28 | loop5: device node name blacklisted
Jan 26 12:46:28 | loop6: device node name blacklisted
Jan 26 12:46:28 | loop7: device node name blacklisted
Jan 26 12:46:28 | sdb: not found in pathvec
Jan 26 12:46:28 | sdb: mask = 0x3f
Jan 26 12:46:28 | sdb: dev_t = 8:16
Jan 26 12:46:28 | sdb: size = 625142448
Jan 26 12:46:28 | sdb: subsystem = scsi
Jan 26 12:46:28 | sdb: vendor = ATA
Jan 26 12:46:28 | sdb: product = WDC WD3200AAJS-6
Jan 26 12:46:28 | sdb: rev = 03.0
Jan 26 12:46:28 | sdb: h:b:t:l = 10:0:0:0
Jan 26 12:46:28 | sdb: serial = WD-WMAV2HM46197
Jan 26 12:46:28 | sdb: get_state
Jan 26 12:46:28 | sdb: path checker = directio (config file default)
Jan 26 12:46:28 | sdb: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdb: state = 3
Jan 26 12:46:28 | sdb: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdb: uid = 1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197 (callout)
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | sdb: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdb: prio = const (config file default)
Jan 26 12:46:28 | sdb: const prio = 1
Jan 26 12:46:28 | sda: not found in pathvec
Jan 26 12:46:28 | sda: mask = 0x3f
Jan 26 12:46:28 | sda: dev_t = 8:0
Jan 26 12:46:28 | sda: size = 3904897024
Jan 26 12:46:28 | sda: subsystem = scsi
Jan 26 12:46:28 | sda: vendor = ATA
Jan 26 12:46:28 | sda: product = MARVELL Raid VD
Jan 26 12:46:28 | sda: rev = MV.R
Jan 26 12:46:28 | sda: h:b:t:l = 0:0:0:0
Jan 26 12:46:28 | sda: serial = 1c3c8ecf5cf00010
Jan 26 12:46:28 | sda: get_state
Jan 26 12:46:28 | sda: path checker = directio (config file default)
Jan 26 12:46:28 | sda: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sda: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sda: state = 3
Jan 26 12:46:28 | sda: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sda: uid = 1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010 (callout)
Jan 26 12:46:28 | sda: state = running
Jan 26 12:46:28 | sda: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sda: prio = const (config file default)
Jan 26 12:46:28 | sda: const prio = 1
Jan 26 12:46:28 | dm-0: device node name blacklisted
Jan 26 12:46:28 | sdc: not found in pathvec
Jan 26 12:46:28 | sdc: mask = 0x3f
Jan 26 12:46:28 | sdc: dev_t = 8:32
Jan 26 12:46:28 | sdc: size = 0
Jan 26 12:46:28 | sdc: subsystem = scsi
Jan 26 12:46:28 | sdc: vendor = Multi
Jan 26 12:46:28 | sdc: product = Flash Reader
Jan 26 12:46:28 | sdc: rev = 1.00
Jan 26 12:46:28 | sdc: h:b:t:l = 12:0:0:0
Jan 26 12:46:28 | dm-1: device node name blacklisted
Jan 26 12:46:28 | dm-2: device node name blacklisted
Jan 26 12:46:28 | dm-3: device node name blacklisted
Jan 26 12:46:28 | sdd: not found in pathvec
Jan 26 12:46:28 | sdd: mask = 0x3f
Jan 26 12:46:28 | sdd: dev_t = 8:48
Jan 26 12:46:28 | sdd: size = 2048000000
Jan 26 12:46:28 | sdd: subsystem = scsi
Jan 26 12:46:28 | sdd: vendor = SHIMI
Jan 26 12:46:28 | sdd: product = VIRTUAL-DISK
Jan 26 12:46:28 | sdd: rev = 0001
Jan 26 12:46:28 | sdd: h:b:t:l = 18:0:0:1
Jan 26 12:46:28 | sdd: tgt_node_name = pl.mycomp.shimi:node002.target0
Jan 26 12:46:28 | sdd: serial = beaf11
Jan 26 12:46:28 | sdd: get_state
Jan 26 12:46:28 | sdd: path checker = directio (config file default)
Jan 26 12:46:28 | sdd: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdd: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdd: state = 3
Jan 26 12:46:28 | sdd: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdd: uid = 1NODE_002_LUN01 (callout)
Jan 26 12:46:28 | sdd: state = running
Jan 26 12:46:28 | sdd: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdd: prio = const (config file default)
Jan 26 12:46:28 | sdd: const prio = 1
Jan 26 12:46:28 | dm-4: device node name blacklisted
Jan 26 12:46:28 | sde: not found in pathvec
Jan 26 12:46:28 | sde: mask = 0x3f
Jan 26 12:46:28 | sde: dev_t = 8:64
Jan 26 12:46:28 | sde: size = 1048576000
Jan 26 12:46:28 | sde: subsystem = scsi
Jan 26 12:46:28 | sde: vendor = SHIMI
Jan 26 12:46:28 | sde: product = VIRTUAL-DISK
Jan 26 12:46:28 | sde: rev = 0001
Jan 26 12:46:28 | sde: h:b:t:l = 19:0:0:1
Jan 26 12:46:28 | sde: tgt_node_name = pl.mycomp.shimi:manager.target0
Jan 26 12:46:28 | sde: serial = beaf11
Jan 26 12:46:28 | sde: get_state
Jan 26 12:46:28 | sde: path checker = directio (config file default)
Jan 26 12:46:28 | sde: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sde: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sde: state = 3
Jan 26 12:46:28 | sde: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sde: uid = 1MANAGER_LUN01 (callout)
Jan 26 12:46:28 | sde: state = running
Jan 26 12:46:28 | sde: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sde: prio = const (config file default)
Jan 26 12:46:28 | sde: const prio = 1
Jan 26 12:46:28 | sdf: not found in pathvec
Jan 26 12:46:28 | sdf: mask = 0x3f
Jan 26 12:46:28 | sdf: dev_t = 8:80
Jan 26 12:46:28 | sdf: size = 2048000000
Jan 26 12:46:28 | sdf: subsystem = scsi
Jan 26 12:46:28 | sdf: vendor = SHIMI
Jan 26 12:46:28 | sdf: product = VIRTUAL-DISK
Jan 26 12:46:28 | sdf: rev = 0001
Jan 26 12:46:28 | sdf: h:b:t:l = 20:0:0:1
Jan 26 12:46:28 | sdf: tgt_node_name = pl.mycomp.shimi:node003.target0
Jan 26 12:46:28 | sdf: serial = beaf11
Jan 26 12:46:28 | sdf: get_state
Jan 26 12:46:28 | sdf: path checker = directio (config file default)
Jan 26 12:46:28 | sdf: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdf: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdf: state = 3
Jan 26 12:46:28 | sdf: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdf: uid = 1NODE_003_LUN01 (callout)
Jan 26 12:46:28 | sdf: state = running
Jan 26 12:46:28 | sdf: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdf: prio = const (config file default)
Jan 26 12:46:28 | sdf: const prio = 1
Jan 26 12:46:28 | sdg: not found in pathvec
Jan 26 12:46:28 | sdg: mask = 0x3f
Jan 26 12:46:28 | sdg: dev_t = 8:96
Jan 26 12:46:28 | sdg: size = 2048000000
Jan 26 12:46:28 | sdg: subsystem = scsi
Jan 26 12:46:28 | sdg: vendor = SHIMI
Jan 26 12:46:28 | sdg: product = VIRTUAL-DISK
Jan 26 12:46:28 | sdg: rev = 0001
Jan 26 12:46:28 | sdg: h:b:t:l = 21:0:0:1
Jan 26 12:46:28 | sdg: tgt_node_name = pl.mycomp.shimi:node001.target0
Jan 26 12:46:28 | sdg: serial = beaf11
Jan 26 12:46:28 | sdg: get_state
Jan 26 12:46:28 | sdg: path checker = directio (config file default)
Jan 26 12:46:28 | sdg: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdg: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdg: state = 3
Jan 26 12:46:28 | sdg: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdg: uid = 1NODE_001_LUN01 (callout)
Jan 26 12:46:28 | sdg: state = running
Jan 26 12:46:28 | sdg: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdg: prio = const (config file default)
Jan 26 12:46:28 | sdg: const prio = 1
Jan 26 12:46:28 | dm-5: device node name blacklisted
Jan 26 12:46:28 | dm-6: device node name blacklisted
Jan 26 12:46:28 | dm-7: device node name blacklisted
Jan 26 12:46:28 | dm-8: device node name blacklisted
Jan 26 12:46:28 | dm-9: device node name blacklisted
Jan 26 12:46:28 | dm-10: device node name blacklisted
Jan 26 12:46:28 | dm-11: device node name blacklisted
Jan 26 12:46:28 | dm-12: device node name blacklisted
Jan 26 12:46:28 | dm-13: device node name blacklisted
Jan 26 12:46:28 | dm-14: device node name blacklisted
Jan 26 12:46:28 | dm-15: device node name blacklisted
Jan 26 12:46:28 | dm-16: device node name blacklisted
Jan 26 12:46:28 | dm-17: device node name blacklisted
Jan 26 12:46:28 | dm-18: device node name blacklisted
Jan 26 12:46:28 | dm-19: device node name blacklisted
Jan 26 12:46:28 | dm-20: device node name blacklisted
Jan 26 12:46:28 | dm-21: device node name blacklisted
Jan 26 12:46:28 | dm-22: device node name blacklisted
Jan 26 12:46:28 | dm-23: device node name blacklisted
Jan 26 12:46:28 | dm-24: device node name blacklisted
Jan 26 12:46:28 | dm-25: device node name blacklisted
===== paths list =====
uuid                                       hcil     dev dev_t pri dm_st chk_st
1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197 10:0:0:0 sdb 8:16  1   undef ready
1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010    0:0:0:0  sda 8:0   1   undef ready
                                           12:0:0:0 sdc 8:32  -1  undef faulty
1NODE_002_LUN01                            18:0:0:1 sdd 8:48  1   undef ready
1MANAGER_LUN01                             19:0:0:1 sde 8:64  1   undef ready
1NODE_003_LUN01                            20:0:0:1 sdf 8:80  1   undef ready
1NODE_001_LUN01                            21:0:0:1 sdg 8:96  1   undef ready
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:96 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:96 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:80 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:80 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:48 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:48 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:0 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:0 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:64 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:64 A 0
Jan 26 12:46:28 | Found matching wwid [1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197] in bindings file. Setting alias to mpatha
Jan 26 12:46:28 | sdb: ownership set to mpatha
Jan 26 12:46:28 | sdb: not found in pathvec
Jan 26 12:46:28 | sdb: mask = 0xc
Jan 26 12:46:28 | sdb: get_state
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdb: state = 3
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | sdb: const prio = 1
Jan 26 12:46:28 | mpatha: pgfailover = -1 (internal default)
Jan 26 12:46:28 | mpatha: pgpolicy = failover (internal default)
Jan 26 12:46:28 | mpatha: selector = round-robin 0 (internal default)
Jan 26 12:46:28 | mpatha: features = 0 (internal default)
Jan 26 12:46:28 | mpatha: hwhandler = 0 (internal default)
Jan 26 12:46:28 | mpatha: rr_weight = 1 (internal default)
Jan 26 12:46:28 | mpatha: minio = 1 rq (config file default)
Jan 26 12:46:28 | mpatha: no_path_retry = -1 (config file default)
Jan 26 12:46:28 | pg_timeout = NONE (internal default)
Jan 26 12:46:28 | mpatha: fast_io_fail_tmo = 5 (config file default)
Jan 26 12:46:28 | mpatha: dev_loss_tmo = 30 (config file default)
Jan 26 12:46:28 | mpatha: retain_attached_hw_handler = 1 (config file default)
Jan 26 12:46:28 | failed to find rport_id for target10:0:0
Jan 26 12:46:28 | mpatha: set ACT_CREATE (map does not exist)
Jan 26 12:46:28 | mpatha: domap (0) failure for create/reload map
Jan 26 12:46:28 | mpatha: ignoring map
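(Side note: the mpatha create/reload failure at the end of this run is
multipath trying to build a map over sdb, the local WD system disk; it is
almost certainly unrelated noise rather than part of the storage domain
problem. If desired, it can be silenced by blacklisting that disk's wwid --
taken from the output above -- in /etc/multipath.conf and reloading
multipathd. A sketch:

    # /etc/multipath.conf -- stop multipath from mapping the local
    # system disk:
    blacklist {
        wwid "1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197"
    }
)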
**********************************************************************************
[root@node002 shim]# iscsiadm -m session -o show
tcp: [6] 192.168.1.12:3260,1 pl.mycomp.shimi:node002.target0
tcp: [7] 192.168.1.11:3260,1 pl.mycomp.shimi:manager.target0
tcp: [8] 192.168.1.14:3260,1 pl.mycomp.shimi:node003.target0
tcp: [9] 192.168.1.13:3260,1 pl.mycomp.shimi:node001.target0
**********************************************************************************
[root@node002 shim]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
**********************************************************************************
[root@node002 shim]# sestatus
SELinux status: disabled
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--
Adam Litke