Vijay?
-----Original Message-----
From: Itamar Heim [mailto:iheim@redhat.com]
Sent: Thursday, June 21, 2012 12:47 AM
To: зоррыч
Cc: 'Daniel Paikov'; users(a)ovirt.org; Vijay Bellur
Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
On 06/20/2012 11:41 PM, зоррыч wrote:
ok, so this is still not available in fedora based on the last comment:
From #gluster I figure that FUSE still does not support O_DIRECT. From linux-fsdevel, it
looks like patches to enable O_DIRECT in FUSE are just getting in.
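
for anyone who wants to check locally, here is a minimal sketch (not vdsm's
exact code; the test path is hypothetical - adjust it to a file on your
gluster mount). on a FUSE mount without O_DIRECT support the open() itself
fails with EINVAL, the same [Errno 22] as in the vdsm traceback quoted below:

    import os

    # hypothetical test file on the gluster-backed mount point
    path = '/rhev/data-center/mnt/10.1.20.7:_sd2/odirect_test'
    fd = None
    try:
        # vdsm reads storage domain metadata with O_DIRECT (directReadLines)
        fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
        print 'O_DIRECT open succeeded - this mount supports it'
    except OSError, e:
        print 'O_DIRECT open failed: [Errno %d] %s' % (e.errno, e.strerror)
    finally:
        if fd is not None:
            os.close(fd)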
vijay - any estimation on when this may be available?
thanks,
Itamar
-----Original Message-----
From: users-bounces(a)ovirt.org [mailto:users-bounces@ovirt.org] On
Behalf Of зоррыч
Sent: Wednesday, June 20, 2012 3:11 PM
To: 'Itamar Heim'
Cc: 'Daniel Paikov'; users(a)ovirt.org
Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
Sorry to be persistent, but I am trying to mount gluster storage, not NFS
storage.
Is there a similar document for gluster storage?
How can I see which line in the metadata file oVirt considered invalid?
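
(a side note on reading the log: the OSError happens in the raw read of the
metadata file, before any line is parsed, so it is not one specific line that
is invalid. a quick sketch to dump the file with line numbers anyway,
assuming the path from the vdsm log below:)

    META = ('/rhev/data-center/mnt/10.1.20.7:_sd2/'
            '711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata')

    # a plain buffered read works, as the `cat` at the end of this thread
    # shows; it is only vdsm's O_DIRECT read that fails with [Errno 22]
    for n, line in enumerate(open(META), 1):
        print n, line.rstrip()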
-----Original Message-----
From: Itamar Heim [mailto:iheim@redhat.com]
Sent: Tuesday, June 19, 2012 7:55 PM
To: зоррыч
Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users(a)ovirt.org; 'Daniel
Paikov'
Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
On 06/19/2012 11:34 AM, зоррыч wrote:
> I do not understand you.
> The directory where the Gluster storage is mounted is writable, and vdsm
> successfully creates the necessary files in it.
Can you please try the NFS troubleshooting approach on this first, to help
diagnose the issue?
http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues
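
the same idea applies to a posixfs/gluster domain: mount the volume by hand
the way vdsm would, then check that you can read and write on it. a sketch
under assumptions (the volume spec is taken from the logs; the scratch mount
point and test file names are hypothetical; run it as a user allowed to
write to the export):

    import os
    import subprocess

    SPEC = '10.1.20.7:/sd2'    # from connectStorageServer in the vdsm log
    MNT = '/tmp/gluster-test'  # hypothetical scratch mount point

    if not os.path.isdir(MNT):
        os.makedirs(MNT)
    subprocess.check_call(['sudo', '-n', 'mount', '-t', 'glusterfs', SPEC, MNT])
    try:
        test = os.path.join(MNT, 'write_test')
        with open(test, 'w') as f:      # plain buffered write should work
            f.write('ok\n')
        print open(test).read(),
        os.unlink(test)
    finally:
        subprocess.check_call(['sudo', '-n', 'umount', MNT])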
>
>
>
>
> -----Original Message-----
> From: Itamar Heim [mailto:iheim@redhat.com]
> Sent: Monday, June 18, 2012 7:07 PM
> To: зоррыч
> Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users(a)ovirt.org;
> 'Daniel Paikov'
> Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
>
> On 06/18/2012 06:03 PM, зоррыч wrote:
>> Posix FS storage
>
> and can you mount this from vdsm via sudo with the same mount options and use it?
>
>>
>>
>>
>> -----Original Message-----
>> From: Itamar Heim [mailto:iheim@redhat.com]
>> Sent: Monday, June 18, 2012 6:29 PM
>> To: зоррыч
>> Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users(a)ovirt.org;
>> 'Daniel Paikov'
>> Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
>>
>> On 06/18/2012 04:50 PM, зоррыч wrote:
>>> Any ideas for solutions?
>>>
>>> Is this a bug?
>>>
>>> From: users-bounces(a)ovirt.org [mailto:users-bounces@ovirt.org] On
>>> Behalf Of зоррыч
>>> Sent: Sunday, June 17, 2012 12:04 AM
>>> To: 'Vijay Bellur'; 'Robert Middleswarth'
>>> Cc: users(a)ovirt.org; 'Daniel Paikov'
>>> Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
>>>
>>> I have updated GlusterFS and the volume was created successfully
>>>
>>> Thank you!
>>>
>>> But I was not able to mount a storage domain.
>>
>> an NFS or Posix FS storage domain?
>>
>>>
>>> Vdsm.log:
>>>
>>> Thread-21025::DEBUG::2012-06-16
>>> 15:43:21,495::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]
>>>
>>> Thread-21025::DEBUG::2012-06-16
>>> 15:43:21,495::task::588::TaskManager.Task::(_updateState)
>>> Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state init
>>> -> state preparing
>>>
>>> Thread-21025::INFO::2012-06-16
>>> 15:43:21,503::logUtils::37::dispatcher::(wrapper) Run and protect:
>>> validateStorageServerConnection(domType=6,
>>> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
>>> 'connection': '10.1.20.7:/sd2', 'iqn': '', 'portal': '', 'user': '',
>>> 'vfs_type': 'glusterfs', 'password': '******', 'id':
>>> '00000000-0000-0000-0000-000000000000'}], options=None)
>>>
>>> Thread-21025::INFO::2012-06-16
>>> 15:43:21,503::logUtils::39::dispatcher::(wrapper) Run and protect:
>>> validateStorageServerConnection, Return response: {'statuslist':
>>> [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
>>>
>>> Thread-21025::DEBUG::2012-06-16
>>> 15:43:21,503::task::1172::TaskManager.Task::(prepare)
>>> Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::finished: {'statuslist':
>>> [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
>>>
>>> Thread-21025::DEBUG::2012-06-16
>>> 15:43:21,503::task::588::TaskManager.Task::(_updateState)
>>> Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state
>>> preparing
>>> -> state finished
>>>
>>> Thread-21025::DEBUG::2012-06-16
>>> 15:43:21,503::resourceManager::809::ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-21025::DEBUG::2012-06-16
>>> 15:43:21,504::resourceManager::844::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-21025::DEBUG::2012-06-16
>>> 15:43:21,504::task::978::TaskManager.Task::(_decref)
>>> Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::ref 0 aborting False
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,526::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,526::task::588::TaskManager.Task::(_updateState)
>>> Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::moving from state init
>>> -> state preparing
>>>
>>> Thread-21026::INFO::2012-06-16
>>> 15:43:21,527::logUtils::37::dispatcher::(wrapper) Run and protect:
>>> connectStorageServer(domType=6,
>>> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
>>> 'connection': '10.1.20.7:/sd2', 'iqn': '', 'portal': '', 'user': '',
>>> 'vfs_type': 'glusterfs', 'password': '******', 'id':
>>> 'e7766e1d-f2c6-45ee-900e-00c6689649cd'}], options=None)
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,530::lvm::460::OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,531::lvm::462::OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,531::lvm::472::OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,531::lvm::474::OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,531::lvm::493::OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,531::lvm::495::OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>>
>>> Thread-21026::INFO::2012-06-16
>>> 15:43:21,532::logUtils::39::dispatcher::(wrapper) Run and protect:
>>> connectStorageServer, Return response: {'statuslist': [{'status': 0,
>>> 'id': 'e7766e1d-f2c6-45ee-900e-00c6689649cd'}]}
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,532::task::1172::TaskManager.Task::(prepare)
>>> Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::finished: {'statuslist':
>>> [{'status': 0, 'id': 'e7766e1d-f2c6-45ee-900e-00c6689649cd'}]}
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,532::task::588::TaskManager.Task::(_updateState)
>>> Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::moving from state
>>> preparing
>>> -> state finished
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,532::resourceManager::809::ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,532::resourceManager::844::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-21026::DEBUG::2012-06-16
>>> 15:43:21,532::task::978::TaskManager.Task::(_decref)
>>> Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::ref 0 aborting False
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,610::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,611::task::588::TaskManager.Task::(_updateState)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::moving from state init
>>> -> state preparing
>>>
>>> Thread-21027::INFO::2012-06-16
>>> 15:43:21,611::logUtils::37::dispatcher::(wrapper) Run and protect:
>>> createStorageDomain(storageType=6,
>>> sdUUID='711293b8-019c-4f41-8cab-df03dd843556', domainName='dfdf',
>>> typeSpecificArg='10.1.20.7:/sd2', domClass=1, domVersion='0',
>>> options=None)
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,611::misc::1053::SamplingMethod::(__call__) Trying to
>>> enter sampling method (storage.sdc.refreshStorage)
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,611::misc::1055::SamplingMethod::(__call__) Got in to
>>> sampling method
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,612::misc::1053::SamplingMethod::(__call__) Trying to
>>> enter sampling method (storage.iscsi.rescan)
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,612::misc::1055::SamplingMethod::(__call__) Got in to
>>> sampling method
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,612::__init__::1164::Storage.Misc.excCmd::(_log)
>>> '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,639::__init__::1164::Storage.Misc.excCmd::(_log) FAILED:
>>> <err> = 'iscsiadm: No session found.\n';<rc> = 21
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,639::misc::1063::SamplingMethod::(__call__) Returning last
>>> result
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,805::__init__::1164::Storage.Misc.excCmd::(_log)
>>> '/usr/bin/sudo -n /sbin/multipath' (cwd None)
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,863::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS:
>>> <err> = '';<rc> = 0
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,864::lvm::460::OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,864::lvm::462::OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,864::lvm::472::OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,865::lvm::474::OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,865::lvm::493::OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' got the operation mutex
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,865::lvm::495::OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,865::misc::1063::SamplingMethod::(__call__) Returning last
>>> result
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,866::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm
>>> reload operation' got the operation mutex
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:21,867::__init__::1164::Storage.Misc.excCmd::(_log)
>>> '/usr/bin/sudo -n /sbin/lvm vgs --config " devices {
>>> preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1
>>> write_cache_state=0 disable_after_error_count=3
>>> filter = [ \\"a%35000c50001770ea3%\\", \\"r%.*%\\" ] } global {
>>> locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
>>> retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix
>>> --separator | -o
>>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
>>> 711293b8-019c-4f41-8cab-df03dd843556' (cwd None)
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,013::__init__::1164::Storage.Misc.excCmd::(_log) FAILED:
>>> <err> = ' Volume group "711293b8-019c-4f41-8cab-df03dd843556" not
>>> found\n';<rc> = 5
>>>
>>> Thread-21027::WARNING::2012-06-16
>>> 15:43:22,014::lvm::356::Storage.LVM::(_reloadvgs) lvm vgs failed: 5
>>> [] [' Volume group "711293b8-019c-4f41-8cab-df03dd843556" not
>>> found']
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,016::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm
>>> reload operation' released the operation mutex
>>>
>>> Thread-21027::INFO::2012-06-16
>>> 15:43:22,020::nfsSD::64::Storage.StorageDomain::(create)
>>> sdUUID=711293b8-019c-4f41-8cab-df03dd843556 domainName=dfdf
>>> remotePath=10.1.20.7:/sd2 domClass=1
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,037::persistentDict::185::Storage.PersistentDict::(__init__)
>>> Created a persistent dict with FileMetadataRW backend
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,038::persistentDict::226::Storage.PersistentDict::(refresh)
>>> read lines (FileMetadataRW)=[]
>>>
>>> Thread-21027::WARNING::2012-06-16
>>> 15:43:22,038::persistentDict::248::Storage.PersistentDict::(refresh)
>>> data has no embedded checksum - trust it as it is
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,038::persistentDict::162::Storage.PersistentDict::(transaction)
>>> Starting transaction
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,038::persistentDict::168::Storage.PersistentDict::(transaction)
>>> Flushing changes
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,039::persistentDict::287::Storage.PersistentDict::(flush)
>>> about to write lines (FileMetadataRW)=['CLASS=Data',
>>> 'DESCRIPTION=dfdf', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3',
>>> 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5',
>>> 'POOL_UUID=', 'REMOTE_PATH=10.1.20.7:/sd2', 'ROLE=Regular',
>>> 'SDUUID=711293b8-019c-4f41-8cab-df03dd843556',
>>> 'TYPE=SHAREDFS', 'VERSION=0',
>>> '_SHA_CKSUM=2e3c4fc88aa713dedbb0d708375966e158327797']
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,041::persistentDict::170::Storage.PersistentDict::(transaction)
>>> Finished transaction
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,042::fileSD::107::Storage.StorageDomain::(__init__)
>>> Reading domain in path
>>> /rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,042::persistentDict::185::Storage.PersistentDict::(__init__)
>>> Created a persistent dict with FileMetadataRW backend
>>>
>>> Thread-21027::ERROR::2012-06-16
>>> 15:43:22,043::task::853::TaskManager.Task::(_setError)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::Unexpected error
>>>
>>> Traceback (most recent call last):
>>>
>>> File "/usr/share/vdsm/storage/task.py", line 861, in _run
>>>
>>> return fn(*args, **kargs)
>>>
>>> File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
>>>
>>> res = f(*args, **kwargs)
>>>
>>> File "/usr/share/vdsm/storage/hsm.py", line 2136, in
>>> createStorageDomain
>>>
>>> typeSpecificArg, storageType, domVersion)
>>>
>>> File "/usr/share/vdsm/storage/nfsSD.py", line 90, in create
>>>
>>> fsd = cls(os.path.join(mntPoint, sdUUID))
>>>
>>> File "/usr/share/vdsm/storage/fileSD.py", line 113, in __init__
>>>
>>> sdUUID = metadata[sd.DMDK_SDUUID]
>>>
>>> File "/usr/share/vdsm/storage/persistentDict.py", line 85, in
>>> __getitem__
>>>
>>> return dec(self._dict[key])
>>>
>>> File "/usr/share/vdsm/storage/persistentDict.py", line 193, in
>>> __getitem__
>>>
>>> with self._accessWrapper():
>>>
>>> File "/usr/lib64/python2.6/contextlib.py", line 16, in __enter__
>>>
>>> return self.gen.next()
>>>
>>> File "/usr/share/vdsm/storage/persistentDict.py", line 147, in
>>> _accessWrapper
>>>
>>> self.refresh()
>>>
>>> File "/usr/share/vdsm/storage/persistentDict.py", line 224, in
>>> refresh
>>>
>>> lines = self._metaRW.readlines()
>>>
>>> File "/usr/share/vdsm/storage/fileSD.py", line 82, in readlines
>>>
>>> return
>>> misc.stripNewLines(self._oop.directReadLines(self._metafile))
>>>
>>> File "/usr/share/vdsm/storage/processPool.py", line 63, in wrapper
>>>
>>> return self.runExternally(func, *args, **kwds)
>>>
>>> File "/usr/share/vdsm/storage/processPool.py", line 74, in
>>> runExternally
>>>
>>> return self._procPool.runExternally(*args, **kwargs)
>>>
>>> File "/usr/share/vdsm/storage/processPool.py", line 170, in
>>> runExternally
>>>
>>> raise err
>>>
>>> OSError: [Errno 22] Invalid argument:
>>>
>>> '/rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata'
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,044::task::872::TaskManager.Task::(_run)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::Task._run:
>>> fdeab954-7958-4cf3-8dd3-09e06329bf92 (6,
>>> '711293b8-019c-4f41-8cab-df03dd843556', 'dfdf', '10.1.20.7:/sd2', 1,
>>> '0') {} failed - stopping task
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,044::task::1199::TaskManager.Task::(stop)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::stopping in state
>>> preparing (force False)
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,044::task::978::TaskManager.Task::(_decref)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::ref 1 aborting True
>>>
>>> Thread-21027::INFO::2012-06-16
>>> 15:43:22,045::task::1157::TaskManager.Task::(prepare)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::aborting: Task is aborted:
>>> u"[Errno 22] Invalid argument:
>>>
>>> '/rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata'"
>>> - code 100
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,045::task::1162::TaskManager.Task::(prepare)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::Prepare: aborted:
>>> [Errno 22] Invalid argument:
>>>
>>> '/rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata'
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,045::task::978::TaskManager.Task::(_decref)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::ref 0 aborting True
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,045::task::913::TaskManager.Task::(_doAbort)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::Task._doAbort: force
>>> False
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,045::resourceManager::844::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,046::task::588::TaskManager.Task::(_updateState)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::moving from state
>>> preparing
>>> -> state aborting
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,046::task::537::TaskManager.Task::(__state_aborting)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::_aborting: recover
>>> policy none
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,046::task::588::TaskManager.Task::(_updateState)
>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::moving from state
>>> aborting
>>> -> state failed
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,046::resourceManager::809::ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-21027::DEBUG::2012-06-16
>>> 15:43:22,047::resourceManager::844::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-21027::ERROR::2012-06-16
>>> 15:43:22,047::dispatcher::69::Storage.Dispatcher.Protect::(run)
>>> [Errno 22] Invalid argument:
>>>
>>> '/rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata'
>>>
>>> Traceback (most recent call last):
>>>
>>> File "/usr/share/vdsm/storage/dispatcher.py", line 61, in run
>>>
>>> result = ctask.prepare(self.func, *args, **kwargs)
>>>
>>> File "/usr/share/vdsm/storage/task.py", line 1164, in prepare
>>>
>>> raise self.error
>>>
>>> OSError: [Errno 22] Invalid argument:
>>>
>>> '/rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata'
>>>
>>> Thread-21035::DEBUG::2012-06-16
>>> 15:43:26,510::task::588::TaskManager.Task::(_updateState)
>>> Task=`283d6276-7bdd-4c35-ad6f-6e8514c277f0`::moving from state init
>>> -> state preparing
>>>
>>> Thread-21035::INFO::2012-06-16
>>> 15:43:26,510::logUtils::37::dispatcher::(wrapper) Run and protect:
>>> repoStats(options=None)
>>>
>>> Thread-21035::INFO::2012-06-16
>>> 15:43:26,511::logUtils::39::dispatcher::(wrapper) Run and protect:
>>> repoStats, Return response: {}
>>>
>>> Thread-21035::DEBUG::2012-06-16
>>> 15:43:26,511::task::1172::TaskManager.Task::(prepare)
>>> Task=`283d6276-7bdd-4c35-ad6f-6e8514c277f0`::finished: {}
>>>
>>> Thread-21035::DEBUG::2012-06-16
>>> 15:43:26,511::task::588::TaskManager.Task::(_updateState)
>>> Task=`283d6276-7bdd-4c35-ad6f-6e8514c277f0`::moving from state
>>> preparing
>>> -> state finished
>>>
>>> Thread-21035::DEBUG::2012-06-16
>>> 15:43:26,511::resourceManager::809::ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-21035::DEBUG::2012-06-16
>>> 15:43:26,511::resourceManager::844::ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-21035::DEBUG::2012-06-16
>>> 15:43:26,511::task::978::TaskManager.Task::(_decref)
>>> Task=`283d6276-7bdd-4c35-ad6f-6e8514c277f0`::ref 0 aborting False
>>>
>>> ^C
>>>
>>> [root@noc-3-synt mnt]# cat
>>> /rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata
>>>
>>> CLASS=Data
>>>
>>> DESCRIPTION=dfdf
>>>
>>> IOOPTIMEOUTSEC=1
>>>
>>> LEASERETRIES=3
>>>
>>> LEASETIMESEC=5
>>>
>>> LOCKPOLICY=
>>>
>>> LOCKRENEWALINTERVALSEC=5
>>>
>>> POOL_UUID=
>>>
>>> REMOTE_PATH=10.1.20.7:/sd2
>>>
>>> ROLE=Regular
>>>
>>> SDUUID=711293b8-019c-4f41-8cab-df03dd843556
>>>
>>> TYPE=SHAREDFS
>>>
>>> VERSION=0
>>>
>>> _SHA_CKSUM=2e3c4fc88aa713dedbb0d708375966e158327797
>>>
>>> From: Vijay Bellur [mailto:vbellur@redhat.com]
>>> Sent: Saturday, June 16, 2012 10:19 PM
>>> To: Robert Middleswarth
>>> Cc: зоррыч; users(a)ovirt.org; Daniel Paikov
>>> Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
>>>
>>>
>>>
>>> On 06/16/2012 11:08 AM, Robert Middleswarth wrote:
>>>
>>> I am seeing the same thing. I also notice that glusterfs seems to
>>> die every time I try. I wonder if this could be a glusterfs / F17 issue.
>>>
>>>
>>> Are you running GlusterFS 3.2.x in Fedora 17? For this volume
>>> creation to complete successfully from oVirt, you will need GlusterFS 3.3.0.
>>> You can download Fedora RPMs for 3.3.0 from:
>>>
>>>
>>> http://download.gluster.com/pub/gluster/glusterfs/3.3/LATEST/Fedora/
>>>
>>>
>>> -Vijay
>>>
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>>
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users