[ovirt-users] Hosted engine on iscsi and data domain on GFS2
Sherif Nagy
sherif.nagy at gmail.com
Mon May 11 15:49:41 UTC 2015
A quick update: the storage directory is created as root:root, so vdsmd is
not able to write to it. I managed to change the ownership to 36:36
(vdsm:kvm), and everything worked fine, but only after the directory had
already been created.
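The fix itself is tiny; here is a minimal sketch of what I did, assuming the
directory in question is the domain mount point under /rhev/data-center/mnt
(uid/gid 36 is vdsm:kvm on my hosts):

    # Minimal sketch of the manual workaround. Assumption: the root-owned
    # directory is the domain mount point under /rhev/data-center/mnt.
    import os

    mount_dir = "/rhev/data-center/mnt/_dev_mapper_gfs2__imgs-gfs2__imgs__lvm"
    vdsm_uid, kvm_gid = 36, 36                # vdsm:kvm

    if not os.path.isdir(mount_dir):
        os.makedirs(mount_dir, 0o755)
    os.chown(mount_dir, vdsm_uid, kvm_gid)    # was root:root before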
I think this is what happens behind the scenes:
- check whether the directory exists
- if not, create it with mode 755, owned root:root
- sudo and mount the filesystem
- try to write __DIRECT_IO_TEST__, which fails
- sudo umount
- the directory gets deleted
Can someone confirm? A rough sketch of what I mean follows.
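This is only my reading of the log below, not VDSM's actual code; the mount
path and the open flags are assumptions on my part:

    # My reading of the sequence, not VDSM's actual code; the path and the
    # open flags are assumptions based on the vdsm.log excerpt below.
    import os
    import subprocess

    mount_dir = "/rhev/data-center/mnt/_dev_mapper_gfs2__imgs-gfs2__imgs__lvm"
    device = "/dev/mapper/gfs2_imgs-gfs2_imgs_lvm"

    created = False
    if not os.path.isdir(mount_dir):
        os.makedirs(mount_dir, 0o755)          # ends up root:root, mode 755
        created = True

    subprocess.check_call(["sudo", "-n", "mount", "-t", "gfs2", device, mount_dir])
    try:
        # the __DIRECT_IO_TEST__ touch; EACCES when vdsm (uid 36) cannot
        # write into the root-owned directory
        fd = os.open(os.path.join(mount_dir, "__DIRECT_IO_TEST__"),
                     os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
        os.close(fd)
    finally:
        subprocess.check_call(["sudo", "-n", "umount", mount_dir])
        if created:
            os.rmdir(mount_dir)                # directory gets deleted again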
Regards,
Sherif
On May 8, 2015 12:38 PM, "Sherif Nagy" <sherif.nagy at gmail.com> wrote:
> Hi,
>
> I have CentOS 7.1 with the latest updates on the hosts and on the VM that
> runs the hosted engine. The hosted engine uses iSCSI storage and so far it
> is working *as expected*. I have 1 shared DC and 1 cluster, and I am trying
> to add GFS2 storage as the master data domain. I am using corosync,
> pacemaker and pcs to manage DLM and cLVM, and that is also working as
> expected. However, whenever I try to add the GFS2 storage I get the
> following error in vdsm.log:
>
> —————
> Thread-884::ERROR::2015-05-08
> 11:45:59,291::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 2435cdc4-61eb-4ad6-9669-e900f13b7f1d
> Thread-884::DEBUG::2015-05-08
> 11:45:59,291::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm
> reload operation' got the operation mutex
> Thread-884::DEBUG::2015-05-08
> 11:45:59,295::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/CETUSRAID1E0A04V01|/dev/mapper/CETUSRAID1E0A04V02|/dev/mapper/CETUSRAID3E0A01V01|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } '
> --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 2435cdc4-61eb-4ad6-9669-e900f13b7f1d (cwd None)
> Thread-884::DEBUG::2015-05-08
> 11:45:59,397::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = '
> Volume group "2435cdc4-61eb-4ad6-9669-e900f13b7f1d" not found\n Cannot
> process volume group 2435cdc4-61eb-4ad6-9669-e900f13b7f1d\n'; <rc>
> = 5
> Thread-884::WARNING::2015-05-08
> 11:45:59,400::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
> Volume group "2435cdc4-61eb-4ad6-9669-e900f13b7f1d" not found', ' Cannot
> process volume group 2435cdc4-61eb-4ad6-9669-e900f13b7f1d
> ']
> Thread-884::DEBUG::2015-05-08
> 11:45:59,401::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm
> reload operation' released the operation mutex
> Thread-885::DEBUG::2015-05-08
> 11:45:59,403::__init__::298::IOProcessClient::(_run) Starting IOProcess...
> Thread-884::ERROR::2015-05-08
> 11:45:59,420::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
> 2435cdc4-61eb-4ad6-9669-e900f13b7f1d not found
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
> dom = findMethod(sdUUID)
> File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
> raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'2435cdc4-61eb-4ad6-9669-e900f13b7f1d',)
> Thread-884::INFO::2015-05-08
> 11:45:59,421::nfsSD::69::Storage.StorageDomain::(create)
> sdUUID=2435cdc4-61eb-4ad6-9669-e900f13b7f1d domainName=imagesGFS
> remotePath=/dev/mapper/gfs2_imgs-gfs2_imgs_lvm domClass=1
> Thread-884::DEBUG::2015-05-08
> 11:45:59,434::__init__::298::IOProcessClient::(_run) Starting IOProcess...
> Thread-884::ERROR::2015-05-08
> 11:45:59,446::task::866::Storage.TaskManager.Task::(_setError)
> Task=`b574d8c6-af2b-4909-be53-248a5caff1c1`::Unexpected error
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/task.py", line 873, in _run
> return fn(*args, **kargs)
> File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
> res = f(*args, **kwargs)
> File "/usr/share/vdsm/storage/hsm.py", line 2691, in createStorageDomain
> domVersion)
> File "/usr/share/vdsm/storage/nfsSD.py", line 80, in create
> version)
> File "/usr/share/vdsm/storage/nfsSD.py", line 49, in _preCreateValidation
> fileSD.validateFileSystemFeatures(sdUUID, domPath)
> File "/usr/share/vdsm/storage/fileSD.py", line 88, in
> validateFileSystemFeatures
> oop.getProcessPool(sdUUID).directTouch(testFilePath)
> File "/usr/share/vdsm/storage/outOfProcess.py", line 351, in directTouch
> ioproc.touch(path, flags, mode)
> File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 507,
> in touch
> self.timeout)
> File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 391,
> in _sendCommand
> raise OSError(errcode, errstr)
> OSError: [Errno 13] Permission denied
> Thread-884::DEBUG::2015-05-08
> 11:45:59,447::task::885::Storage.TaskManager.Task::(_run)
> Task=`b574d8c6-af2b-4909-be53-248a5caff1c1`::Task._run:
> b574d8c6-af2b-4909-be53-248a5caff1c1 (6,
> u'2435cdc4-61eb-4ad6-9669-e900f13b7f1d', u'imagesGFS'
> , u'/dev/mapper/gfs2_imgs-gfs2_imgs_lvm', 1, u'3') {} failed - stopping
> task
> Thread-884::DEBUG::2015-05-08
> 11:45:59,447::task::1217::Storage.TaskManager.Task::(stop)
> Task=`b574d8c6-af2b-4909-be53-248a5caff1c1`::stopping in state preparing
> (force False)
> Thread-884::DEBUG::2015-05-08
> 11:45:59,448::task::993::Storage.TaskManager.Task::(_decref)
> Task=`b574d8c6-af2b-4909-be53-248a5caff1c1`::ref 1 aborting True
> Thread-884::INFO::2015-05-08
> 11:45:59,448::task::1171::Storage.TaskManager.Task::(prepare)
> Task=`b574d8c6-af2b-4909-be53-248a5caff1c1`::aborting: Task is aborted:
> u'[Errno 13] Permission denied' - code 100
> Thread-884::DEBUG::2015-05-08
> 11:45:59,448::task::1176::Storage.TaskManager.Task::(prepare)
> Task=`b574d8c6-af2b-4909-be53-248a5caff1c1`::Prepare: aborted: [Errno 13]
> Permission denied
> Thread-884::DEBUG::2015-05-08
> 11:45:59,449::task::993::Storage.TaskManager.Task::(_decref)
> Task=`b574d8c6-af2b-4909-be53-248a5caff1c1`::ref 0 aborting True
> Thread-884::DEBUG::2015-05-08
> 11:45:59,449::task::928::Storage.TaskManager.Task::(_doAbort)
> Task=`b574d8c6-af2b-4909-be53-248a5caff1c1`::Task._doAbort: force False
> Thread-884::DEBUG::2015-05-08
> 11:45:59,449::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-884::DEBUG::2015-05-08
> 11:45:59,449::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`b574d8c6-af2b-4909-be53-248a5caff1c1`::moving from state preparing ->
> state aborting
> Thread-884::DEBUG::2015-05-08
> 11:45:59,450::task::550::Storage.TaskManager.Task::(__state_aborting)
> Task=`b574d8c6-af2b-4909-be53-248a5caff1c1`::_aborting: recover policy none
> Thread-884::DEBUG::2015-05-08
> 11:45:59,450::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`b574d8c6-af2b-4909-be53-248a5caff1c1`::moving from state aborting ->
> state failed
> Thread-884::DEBUG::2015-05-08
> 11:45:59,450::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> Thread-884::DEBUG::2015-05-08
> 11:45:59,450::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-884::ERROR::2015-05-08
> 11:45:59,451::dispatcher::79::Storage.Dispatcher::(wrapper) [Errno 13]
> Permission denied
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/dispatcher.py", line 71, in wrapper
> result = ctask.prepare(func, *args, **kwargs)
> File "/usr/share/vdsm/storage/task.py", line 103, in wrapper
> return m(self, *a, **kw)
> File "/usr/share/vdsm/storage/task.py", line 1179, in prepare
> raise self.error
> OSError: [Errno 13] Permission denied
> Thread-884::DEBUG::2015-05-08
> 11:45:59,452::stompReactor::163::yajsonrpc.StompServer::(send) Sending
> response
> JsonRpc (StompReactor)::DEBUG::2015-05-08
> 11:45:59,918::stompReactor::98::Broker.StompAdapter::(handle_frame)
> Handling message <StompFrame command='SEND'>
> JsonRpcServer::DEBUG::2015-05-08
> 11:45:59,920::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
> Waiting for request
> Thread-887::DEBUG::2015-05-08
> 11:45:59,920::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling
> 'StoragePool.disconnectStorageServer' in bridge with {u'connectionParams':
> [{u'password': u'', u'id': u'a52aaa59-a449-4158-bece-2e0a0916dc78',
> u'connection': u'/dev/mapper/gfs2_imgs-gfs2_imgs_lvm',
> u'mnt_options': u'noatime,nodiratime', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'gfs2', u'iqn': u'', u'port': u''}], u'storagepoolID':
> u'00000000-0000-0000-0000-000000000000', u'domainType': 6}
> Thread-887::DEBUG::2015-05-08
> 11:45:59,924::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`dacc3945-1a0d-4b80-9e0b-dd0050179001`::moving from state init ->
> state preparing
> Thread-887::INFO::2015-05-08
> 11:45:59,925::logUtils::44::dispatcher::(wrapper) Run and protect:
> disconnectStorageServer(domType=6,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'iqn': u'',
> u'port': u'', u'connection': u'/dev/mapper/gfs2_imgs-gfs2_imgs_lvm',
> u'mnt_options': u'noatime,nodiratime',
> u'user': u'', u'tpgt': u'1', u'vfs_type': u'gfs2', u'password': '******',
> u'id': u'a52aaa59-a449-4158-bece-2e0a0916dc78'}], options=None)
> Thread-887::DEBUG::2015-05-08
> 11:45:59,926::mount::227::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n
> /usr/bin/umount -f -l
> /rhev/data-center/mnt/_dev_mapper_gfs2__imgs-gfs2__imgs__lvm (cwd None)
> Thread-887::DEBUG::2015-05-08
> 11:45:59,995::misc::741::Storage.SamplingMethod::(__call__) Trying to enter
> sampling method (storage.sdc.refreshStorage)
> Thread-887::DEBUG::2015-05-08
> 11:45:59,997::misc::743::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> ————
> Any idea what that might be?
>
> Thank You
> Sherif
>