[ovirt-users] Hosted engine on iscsi storage

Simone Tiraboschi stirabos at redhat.com
Thu May 5 09:14:40 EDT 2016


On Thu, May 5, 2016 at 2:35 PM, Darran Carey <darran.carey at pawsey.org.au> wrote:
> Hi Simone,
>
> Please find the log files attached. Thank you very much for taking the time
> to look at this problem.
>
> Regards,
> Darran.

Indeed, VDSM is returning an empty device list:

Thread-17518::DEBUG::2016-05-05
16:05:05,409::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e96b9df-b656-45bd-899b-c94ec9be5c52`::moving from state init ->
state preparing
Thread-17518::INFO::2016-05-05
16:05:05,410::logUtils::48::dispatcher::(wrapper) Run and protect:
getDeviceList(storageType=3, guids=(), checkStatus=True, options={})
Thread-17518::DEBUG::2016-05-05
16:05:05,410::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method (storage.sdc.refreshStorage)
Thread-17518::DEBUG::2016-05-05
16:05:05,411::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-17518::DEBUG::2016-05-05
16:05:05,411::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method (storage.iscsi.rescan)
Thread-17518::DEBUG::2016-05-05
16:05:05,411::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-17518::DEBUG::2016-05-05
16:05:05,412::iscsi::434::Storage.ISCSI::(rescan) Performing SCSI
scan, this will take up to 30 seconds
Thread-17518::DEBUG::2016-05-05
16:05:05,413::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /sbin/iscsiadm -m
session -R (cwd None)
Thread-17518::DEBUG::2016-05-05
16:05:05,463::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-17518::DEBUG::2016-05-05
16:05:05,464::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method (storage.hba.rescan)
Thread-17518::DEBUG::2016-05-05
16:05:05,464::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-17518::DEBUG::2016-05-05
16:05:05,464::hba::56::Storage.HBA::(rescan) Starting scan
Thread-17518::DEBUG::2016-05-05
16:05:05,661::hba::62::Storage.HBA::(rescan) Scan finished
Thread-17518::DEBUG::2016-05-05
16:05:05,662::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-17518::DEBUG::2016-05-05
16:05:05,662::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/multipath
(cwd None)
Thread-17518::DEBUG::2016-05-05
16:05:05,747::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
<err> = ''; <rc> = 0
Thread-17518::DEBUG::2016-05-05
16:05:05,748::utils::671::root::(execCmd) /usr/bin/taskset --cpu-list
0-7 /sbin/udevadm settle --timeout=5 (cwd None)
Thread-17518::DEBUG::2016-05-05
16:05:05,768::utils::689::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-17518::DEBUG::2016-05-05
16:05:05,771::lvm::497::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-17518::DEBUG::2016-05-05
16:05:05,772::lvm::499::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-17518::DEBUG::2016-05-05
16:05:05,772::lvm::508::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-17518::DEBUG::2016-05-05
16:05:05,773::lvm::510::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-17518::DEBUG::2016-05-05
16:05:05,773::lvm::528::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-17518::DEBUG::2016-05-05
16:05:05,774::lvm::530::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-17518::DEBUG::2016-05-05
16:05:05,774::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-17518::DEBUG::2016-05-05
16:05:05,775::lvm::319::Storage.OperationMutex::(_reloadpvs) Operation
'lvm reload operation' got the operation mutex
Thread-17518::DEBUG::2016-05-05
16:05:05,777::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/lvm pvs --config ' devices {
preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1
write_cache_state=0 disable_after_error_count=3 filter = [
'\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1
wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50
retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size
(cwd None)
Thread-17518::DEBUG::2016-05-05
16:05:05,987::lvm::290::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '
WARNING: lvmetad is running but disabled. Restart lvmetad before
enabling it!\n'; <rc> = 0
Thread-17518::DEBUG::2016-05-05
16:05:05,987::lvm::347::Storage.OperationMutex::(_reloadpvs) Operation
'lvm reload operation' released the operation mutex
Thread-17518::DEBUG::2016-05-05
16:05:06,003::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/lvm pvcreate --config '
devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ '\''r|.*|'\'' ] }  global {
locking_type=1  prioritise_write_locks=1  wait_for_locks=1
use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --test
--metadatasize 128m --metadatacopies 2 --metadataignore y (cwd None)
Thread-17518::DEBUG::2016-05-05
16:05:06,054::lvm::290::Storage.Misc.excCmd::(cmd) FAILED: <err> = "
WARNING: lvmetad is running but disabled. Restart lvmetad before
enabling it!\n  TEST MODE: Metadata will NOT be updated and volumes
will not be (de)activated.\n  Please enter a physical volume path.\n
Run `pvcreate --help' for more information.\n"; <rc> = 3
Thread-17518::DEBUG::2016-05-05
16:05:06,056::lvm::864::Storage.LVM::(testPVCreate) rc: 3, out: [],
err: ['  WARNING: lvmetad is running but disabled. Restart lvmetad
before enabling it!', '  TEST MODE: Metadata will NOT be updated and
volumes will not be (de)activated.', '  Please enter a physical volume
path.', "  Run `pvcreate --help' for more information."], unusedDevs:
set([]), usedDevs: set([])
Thread-17518::INFO::2016-05-05
16:05:06,056::logUtils::51::dispatcher::(wrapper) Run and protect:
getDeviceList, Return response: {'devList': []}
Thread-17518::DEBUG::2016-05-05
16:05:06,057::task::1191::Storage.TaskManager.Task::(prepare)
Task=`6e96b9df-b656-45bd-899b-c94ec9be5c52`::finished: {'devList': []}
Thread-17518::DEBUG::2016-05-05
16:05:06,057::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e96b9df-b656-45bd-899b-c94ec9be5c52`::moving from state
preparing -> state finished
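
That is the same getDeviceList call that hosted-engine-setup issues through blockd.py. As a quick cross-check you can run the verb directly against VDSM (a sketch; storageType 3 matches the value shown in the log above for iSCSI, and vdsClient verb syntax can vary slightly between builds):

 vdsClient -s 0 getDeviceList 3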


You can check the initiator name used by VDSM with:
 vdsClient -s 0 getVdsCaps | grep ISCSIInitiatorName

Can you please check whether the ACLs on your iSCSI target are correctly configured for that initiator?
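
For example (a sketch; /etc/iscsi/initiatorname.iscsi is the standard iscsi-initiator-utils location on CentOS 7):

 # IQN the host presents when it logs in to the target
 cat /etc/iscsi/initiatorname.iscsi
 # IQN VDSM reports
 vdsClient -s 0 getVdsCaps | grep ISCSIInitiatorName

If the access policy for the EqualLogic volume does not include that IQN (or the host's IP / CHAP credentials), the session can come up without exposing any LUNs, which would match the empty device list above.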


> On 2016-05-05 17:06, Simone Tiraboschi wrote:
>>
>> On Thu, May 5, 2016 at 10:22 AM, Darran Carey
>> <darran.carey at pawsey.org.au> wrote:
>>>
>>> Hi All,
>>>
>>> I am trying to install the hosted engine on an iscsi target but get the
>>> following error:
>>>
>>> [root at virt-host01 ~]# hosted-engine --deploy
>>> ...
>>>           --== STORAGE CONFIGURATION ==--
>>>
>>>           During customization use CTRL-D to abort.
>>>           Please specify the storage you would like to use (glusterfs,
>>> iscsi, fc, nfs3, nfs4)[nfs3]: iscsi
>>>           Please specify the iSCSI portal IP address: 10.43.0.100
>>>           Please specify the iSCSI portal port [3260]:
>>>           Please specify the iSCSI portal user:
>>>           Please specify the target name
>>>
>>> (iqn.2001-05.com.equallogic:0-8a0906-8bb896109-6060000000b57145-iscsi-vol-01)
>>>
>>> [iqn.2001-05.com.equallogic:0-8a0906-8bb896109-6060000000b57145-iscsi-vol-01]:
>>> [ INFO  ] Discovering iSCSI node
>>> [ INFO  ] Connecting to the storage server
>>> [ INFO  ] Discovering iSCSI node
>>> [ INFO  ] Connecting to the storage server
>>> [ ERROR ] Failed to execute stage 'Environment customization': Unable to
>>> retrieve the list of LUN(s) please check the SELinux log and settings on
>>> your iscsi target
>>>
>>>
>>> The relevant excerpt from the log file is:
>>>
>>> 2016-05-05 16:05:09 DEBUG otopi.context context._executeMethod:156 method
>>> exception
>>> Traceback (most recent call last):
>>>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in
>>> _executeMethod
>>>     method['method']()
>>>   File
>>>
>>> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/blockd.py",
>>> line 591, in _customization
>>>     lunGUID = self._customize_lun(self.domainType, target)
>>>   File
>>>
>>> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/blockd.py",
>>> line 209, in _customize_lun
>>>     iqn=target,
>>>   File
>>>
>>> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/blockd.py",
>>> line 374, in _iscsi_get_lun_list
>>>     raise RuntimeError("Unable to retrieve the list of LUN(s) please "
>>> RuntimeError: Unable to retrieve the list of LUN(s) please check the
>>> SELinux
>>> log and settings on your iscsi target
>>> 2016-05-05 16:05:09 ERROR otopi.context context._executeMethod:165 Failed
>>> to
>>> execute stage 'Environment customization': Unable to retrieve the list of
>>> LUN(s) please check the SELinux log and settings on your iscsi target
>>
>>
>> Can you please attach the whole hosted-engine-setup log file and vdsm
>> logs?
>>
>>> This is on CentOS 7 with oVirt 3.6.
>>> SELinux is disabled.
>>>
>>> I can mount the iSCSI target fine using iscsiadm or the Dell EqualLogic
>>> Host Integration Toolkit commands.
>>>
>>> I think the first problem is that the call to self.cli.getDeviceList in
>>> blockd.py is returning an empty list, but I don't know what that function
>>> is actually doing.
>>>
>>> Has anyone experienced similar behaviour, or does anyone have suggestions
>>> as to what I should check next?
>>>
>>> Thanks,
>>> Darran.
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users

