Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
Thu May 24 20:21:22 EDT 2012
just getting in.
>
> vijay - any estimation on when this may be available?
>
> thanks,
> Itamar
>
>>
>>
>> -----Original Message-----
>> From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On
>> Behalf Of зоррыч
>> Sent: Wednesday, June 20, 2012 3:11 PM
>> To: 'Itamar Heim'
>> Cc: 'Daniel Paikov'; users at ovirt.org
>> Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
>>
>> Sorry for my persistence, but I'm trying to mount gluster storage, not NFS storage.
>> Is there such a document for gluster storage?
>>
>>
>> How can I see which line oVirt considered invalid in the metadata file?
>>
>>
>>
>> -----Original Message-----
>> From: Itamar Heim [mailto:iheim at redhat.com]
>> Sent: Tuesday, June 19, 2012 7:55 PM
>> To: зоррыч
>> Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users at ovirt.org; 'Daniel Paikov'
>> Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
>>
>> On 06/19/2012 11:34 AM, зоррыч wrote:
>>> I do not understand you.
>>> The directory where the gluster storage is mounted is writable, and
>>> vdsm successfully creates the necessary files in it.
>> can you please try the NFS troubleshooting approach on this first to
>> try and diagnose the issue?
>> http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues
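For the gluster case specifically, the check from that page most relevant to the error below is a direct-I/O read of the domain metadata file: vdsm reads that file with O_DIRECT, and a mount that rejects O_DIRECT fails with exactly the "[Errno 22] Invalid argument" seen in the traceback. A minimal probe sketch (the path is the one from the log; run it on the host as root):

```shell
# Probe whether the mounted filesystem accepts O_DIRECT reads.
# An "Invalid argument" error from dd here reproduces the
# "[Errno 22] Invalid argument" vdsm raises on the metadata file.
META='/rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata'
dd if="$META" of=/dev/null iflag=direct bs=512 count=1 \
  && echo "O_DIRECT read OK" \
  || echo "O_DIRECT read failed (likely the same EINVAL vdsm hits)"
```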
>>
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: Itamar Heim [mailto:iheim at redhat.com]
>>> Sent: Monday, June 18, 2012 7:07 PM
>>> To: зоррыч
>>> Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users at ovirt.org; 'Daniel Paikov'
>>> Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
>>>
>>> On 06/18/2012 06:03 PM, зоррыч wrote:
>>>> Posix FS storage
>>> and you can mount this from vdsm via sudo with same mount options and use it?
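Spelled out, that manual check is roughly the following sketch; the mount point is illustrative, and the exact mount options should be copied from the mount command vdsm logs:

```shell
# Hypothetical manual reproduction of vdsm's mount (run as root).
# The server path 10.1.20.7:/sd2 is from this thread; /tmp/gluster-check is illustrative.
mkdir -p /tmp/gluster-check
mount -t glusterfs 10.1.20.7:/sd2 /tmp/gluster-check || echo "mount failed"

# Confirm the vdsm user can actually write there, as domain creation requires.
sudo -u vdsm touch /tmp/gluster-check/vdsm-write-test \
  && echo "vdsm can write" \
  || echo "vdsm cannot write"

umount /tmp/gluster-check 2>/dev/null || true
```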
>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Itamar Heim [mailto:iheim at redhat.com]
>>>> Sent: Monday, June 18, 2012 6:29 PM
>>>> To: зоррыч
>>>> Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users at ovirt.org; 'Daniel Paikov'
>>>> Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
>>>>
>>>> On 06/18/2012 04:50 PM, зоррыч wrote:
>>>>> Any ideas for solutions?
>>>>>
>>>>> Is this a bug?
>>>>>
>>>>> *From:*users-bounces at ovirt.org [mailto:users-bounces at ovirt.org]
>>>>> *On Behalf Of *зоррыч
>>>>> *Sent:* Sunday, June 17, 2012 12:04 AM
>>>>> *To:* 'Vijay Bellur'; 'Robert Middleswarth'
>>>>> *Cc:* users at ovirt.org; 'Daniel Paikov'
>>>>> *Subject:* Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
>>>>>
>>>>> I have updated GlusterFS and volume successfully created
>>>>>
>>>>> Thank you!
>>>>>
>>>>> But I was not able to mount a storage domain.
>>>> an NFS or Posix FS storage domain?
>>>>
>>>>> Vdsm.log:
>>>>>
>>>>> Thread-21025::DEBUG::2012-06-16
>>>>> 15:43:21,495::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]
>>>>>
>>>>> Thread-21025::DEBUG::2012-06-16
>>>>> 15:43:21,495::task::588::TaskManager.Task::(_updateState)
>>>>> Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state init
>>>>> -> state preparing
>>>>>
>>>>> Thread-21025::INFO::2012-06-16
>>>>> 15:43:21,503::logUtils::37::dispatcher::(wrapper) Run and protect:
>>>>> validateStorageServerConnection(domType=6,
>>>>> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
>>>>> 'connection': '10.1.20.7:/sd2', 'iqn': '', 'portal': '', 'user': '',
>>>>> 'vfs_type': 'glusterfs', 'password': '******', 'id':
>>>>> '00000000-0000-0000-0000-000000000000'}], options=None)
>>>>>
>>>>> Thread-21025::INFO::2012-06-16
>>>>> 15:43:21,503::logUtils::39::dispatcher::(wrapper) Run and protect:
>>>>> validateStorageServerConnection, Return response: {'statuslist':
>>>>> [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
>>>>>
>>>>> Thread-21025::DEBUG::2012-06-16
>>>>> 15:43:21,503::task::1172::TaskManager.Task::(prepare)
>>>>> Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::finished: {'statuslist':
>>>>> [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
>>>>>
>>>>> Thread-21025::DEBUG::2012-06-16
>>>>> 15:43:21,503::task::588::TaskManager.Task::(_updateState)
>>>>> Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state preparing
>>>>> -> state finished
>>>>>
>>>>> Thread-21025::DEBUG::2012-06-16
>>>>> 15:43:21,503::resourceManager::809::ResourceManager.Owner::(releaseAll)
>>>>> Owner.releaseAll requests {} resources {}
>>>>>
>>>>> Thread-21025::DEBUG::2012-06-16
>>>>> 15:43:21,504::resourceManager::844::ResourceManager.Owner::(cancelAll)
>>>>> Owner.cancelAll requests {}
>>>>>
>>>>> Thread-21025::DEBUG::2012-06-16
>>>>> 15:43:21,504::task::978::TaskManager.Task::(_decref)
>>>>> Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::ref 0 aborting False
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,526::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,526::task::588::TaskManager.Task::(_updateState)
>>>>> Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::moving from state init
>>>>> -> state preparing
>>>>>
>>>>> Thread-21026::INFO::2012-06-16
>>>>> 15:43:21,527::logUtils::37::dispatcher::(wrapper) Run and protect:
>>>>> connectStorageServer(domType=6,
>>>>> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
>>>>> 'connection': '10.1.20.7:/sd2', 'iqn': '', 'portal': '', 'user': '',
>>>>> 'vfs_type': 'glusterfs', 'password': '******', 'id':
>>>>> 'e7766e1d-f2c6-45ee-900e-00c6689649cd'}], options=None)
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,530::lvm::460::OperationMutex::(_invalidateAllPvs)
>>>>> Operation 'lvm invalidate operation' got the operation mutex
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,531::lvm::462::OperationMutex::(_invalidateAllPvs)
>>>>> Operation 'lvm invalidate operation' released the operation mutex
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,531::lvm::472::OperationMutex::(_invalidateAllVgs)
>>>>> Operation 'lvm invalidate operation' got the operation mutex
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,531::lvm::474::OperationMutex::(_invalidateAllVgs)
>>>>> Operation 'lvm invalidate operation' released the operation mutex
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,531::lvm::493::OperationMutex::(_invalidateAllLvs)
>>>>> Operation 'lvm invalidate operation' got the operation mutex
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,531::lvm::495::OperationMutex::(_invalidateAllLvs)
>>>>> Operation 'lvm invalidate operation' released the operation mutex
>>>>>
>>>>> Thread-21026::INFO::2012-06-16
>>>>> 15:43:21,532::logUtils::39::dispatcher::(wrapper) Run and protect:
>>>>> connectStorageServer, Return response: {'statuslist': [{'status':
>>>>> 0,
>>>>> 'id': 'e7766e1d-f2c6-45ee-900e-00c6689649cd'}]}
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,532::task::1172::TaskManager.Task::(prepare)
>>>>> Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::finished: {'statuslist':
>>>>> [{'status': 0, 'id': 'e7766e1d-f2c6-45ee-900e-00c6689649cd'}]}
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,532::task::588::TaskManager.Task::(_updateState)
>>>>> Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::moving from state preparing
>>>>> -> state finished
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,532::resourceManager::809::ResourceManager.Owner::(releaseAll)
>>>>> Owner.releaseAll requests {} resources {}
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,532::resourceManager::844::ResourceManager.Owner::(cancelAll)
>>>>> Owner.cancelAll requests {}
>>>>>
>>>>> Thread-21026::DEBUG::2012-06-16
>>>>> 15:43:21,532::task::978::TaskManager.Task::(_decref)
>>>>> Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::ref 0 aborting False
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,610::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,611::task::588::TaskManager.Task::(_updateState)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::moving from state init
>>>>> -> state preparing
>>>>>
>>>>> Thread-21027::INFO::2012-06-16
>>>>> 15:43:21,611::logUtils::37::dispatcher::(wrapper) Run and protect:
>>>>> createStorageDomain(storageType=6,
>>>>> sdUUID='711293b8-019c-4f41-8cab-df03dd843556', domainName='dfdf',
>>>>> typeSpecificArg='10.1.20.7:/sd2', domClass=1, domVersion='0',
>>>>> options=None)
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,611::misc::1053::SamplingMethod::(__call__) Trying to
>>>>> enter sampling method (storage.sdc.refreshStorage)
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,611::misc::1055::SamplingMethod::(__call__) Got in to
>>>>> sampling method
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,612::misc::1053::SamplingMethod::(__call__) Trying to
>>>>> enter sampling method (storage.iscsi.rescan)
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,612::misc::1055::SamplingMethod::(__call__) Got in to
>>>>> sampling method
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,612::__init__::1164::Storage.Misc.excCmd::(_log)
>>>>> '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,639::__init__::1164::Storage.Misc.excCmd::(_log) FAILED:
>>>>> <err> = 'iscsiadm: No session found.\n';<rc> = 21
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,639::misc::1063::SamplingMethod::(__call__) Returning
>>>>> last result
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,805::__init__::1164::Storage.Misc.excCmd::(_log)
>>>>> '/usr/bin/sudo -n /sbin/multipath' (cwd None)
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,863::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS:
>>>>> <err> = '';<rc> = 0
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,864::lvm::460::OperationMutex::(_invalidateAllPvs)
>>>>> Operation 'lvm invalidate operation' got the operation mutex
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,864::lvm::462::OperationMutex::(_invalidateAllPvs)
>>>>> Operation 'lvm invalidate operation' released the operation mutex
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,864::lvm::472::OperationMutex::(_invalidateAllVgs)
>>>>> Operation 'lvm invalidate operation' got the operation mutex
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,865::lvm::474::OperationMutex::(_invalidateAllVgs)
>>>>> Operation 'lvm invalidate operation' released the operation mutex
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,865::lvm::493::OperationMutex::(_invalidateAllLvs)
>>>>> Operation 'lvm invalidate operation' got the operation mutex
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,865::lvm::495::OperationMutex::(_invalidateAllLvs)
>>>>> Operation 'lvm invalidate operation' released the operation mutex
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,865::misc::1063::SamplingMethod::(__call__) Returning
>>>>> last result
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,866::lvm::352::OperationMutex::(_reloadvgs) Operation
>>>>> 'lvm reload operation' got the operation mutex
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:21,867::__init__::1164::Storage.Misc.excCmd::(_log)
>>>>> '/usr/bin/sudo -n /sbin/lvm vgs --config " devices {
>>>>> preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1
>>>>> write_cache_state=0 disable_after_error_count=3
>>>>> filter = [ \\"a%35000c50001770ea3%\\", \\"r%.*%\\" ] }
>>>>> global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 }
>>>>> backup { retain_min = 50 retain_days = 0 } "
>>>>> --noheadings --units b --nosuffix --separator | -o
>>>>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
>>>>> 711293b8-019c-4f41-8cab-df03dd843556' (cwd None)
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,013::__init__::1164::Storage.Misc.excCmd::(_log) FAILED:
>>>>> <err> = ' Volume group "711293b8-019c-4f41-8cab-df03dd843556" not
>>>>> found\n';<rc> = 5
>>>>>
>>>>> Thread-21027::WARNING::2012-06-16
>>>>> 15:43:22,014::lvm::356::Storage.LVM::(_reloadvgs) lvm vgs failed:
>>>>> 5 [] [' Volume group "711293b8-019c-4f41-8cab-df03dd843556" not
>>>>> found']
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,016::lvm::379::OperationMutex::(_reloadvgs) Operation
>>>>> 'lvm reload operation' released the operation mutex
>>>>>
>>>>> Thread-21027::INFO::2012-06-16
>>>>> 15:43:22,020::nfsSD::64::Storage.StorageDomain::(create)
>>>>> sdUUID=711293b8-019c-4f41-8cab-df03dd843556 domainName=dfdf
>>>>> remotePath=10.1.20.7:/sd2 domClass=1
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,037::persistentDict::185::Storage.PersistentDict::(__init__)
>>>>> Created a persistent dict with FileMetadataRW backend
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,038::persistentDict::226::Storage.PersistentDict::(refresh)
>>>>> read lines (FileMetadataRW)=[]
>>>>>
>>>>> Thread-21027::WARNING::2012-06-16
>>>>> 15:43:22,038::persistentDict::248::Storage.PersistentDict::(refresh)
>>>>> data has no embedded checksum - trust it as it is
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,038::persistentDict::162::Storage.PersistentDict::(transaction)
>>>>> Starting transaction
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,038::persistentDict::168::Storage.PersistentDict::(transaction)
>>>>> Flushing changes
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,039::persistentDict::287::Storage.PersistentDict::(flush)
>>>>> about to write lines (FileMetadataRW)=['CLASS=Data',
>>>>> 'DESCRIPTION=dfdf', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3',
>>>>> 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5',
>>>>> 'POOL_UUID=', 'REMOTE_PATH=10.1.20.7:/sd2', 'ROLE=Regular',
>>>>> 'SDUUID=711293b8-019c-4f41-8cab-df03dd843556',
>>>>> 'TYPE=SHAREDFS', 'VERSION=0',
>>>>> '_SHA_CKSUM=2e3c4fc88aa713dedbb0d708375966e158327797']
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,041::persistentDict::170::Storage.PersistentDict::(transaction)
>>>>> Finished transaction
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,042::fileSD::107::Storage.StorageDomain::(__init__)
>>>>> Reading domain in path
>>>>> /rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,042::persistentDict::185::Storage.PersistentDict::(__init__)
>>>>> Created a persistent dict with FileMetadataRW backend
>>>>>
>>>>> Thread-21027::ERROR::2012-06-16
>>>>> 15:43:22,043::task::853::TaskManager.Task::(_setError)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::Unexpected error
>>>>>
>>>>> Traceback (most recent call last):
>>>>>
>>>>> File "/usr/share/vdsm/storage/task.py", line 861, in _run
>>>>>
>>>>> return fn(*args, **kargs)
>>>>>
>>>>> File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
>>>>>
>>>>> res = f(*args, **kwargs)
>>>>>
>>>>> File "/usr/share/vdsm/storage/hsm.py", line 2136, in
>>>>> createStorageDomain
>>>>>
>>>>> typeSpecificArg, storageType, domVersion)
>>>>>
>>>>> File "/usr/share/vdsm/storage/nfsSD.py", line 90, in create
>>>>>
>>>>> fsd = cls(os.path.join(mntPoint, sdUUID))
>>>>>
>>>>> File "/usr/share/vdsm/storage/fileSD.py", line 113, in __init__
>>>>>
>>>>> sdUUID = metadata[sd.DMDK_SDUUID]
>>>>>
>>>>> File "/usr/share/vdsm/storage/persistentDict.py", line 85, in
>>>>> __getitem__
>>>>>
>>>>> return dec(self._dict[key])
>>>>>
>>>>> File "/usr/share/vdsm/storage/persistentDict.py", line 193, in
>>>>> __getitem__
>>>>>
>>>>> with self._accessWrapper():
>>>>>
>>>>> File "/usr/lib64/python2.6/contextlib.py", line 16, in __enter__
>>>>>
>>>>> return self.gen.next()
>>>>>
>>>>> File "/usr/share/vdsm/storage/persistentDict.py", line 147, in
>>>>> _accessWrapper
>>>>>
>>>>> self.refresh()
>>>>>
>>>>> File "/usr/share/vdsm/storage/persistentDict.py", line 224, in
>>>>> refresh
>>>>>
>>>>> lines = self._metaRW.readlines()
>>>>>
>>>>> File "/usr/share/vdsm/storage/fileSD.py", line 82, in readlines
>>>>>
>>>>> return
>>>>> misc.stripNewLines(self._oop.directReadLines(self._metafile))
>>>>>
>>>>> File "/usr/share/vdsm/storage/processPool.py", line 63, in wrapper
>>>>>
>>>>> return self.runExternally(func, *args, **kwds)
>>>>>
>>>>> File "/usr/share/vdsm/storage/processPool.py", line 74, in
>>>>> runExternally
>>>>>
>>>>> return self._procPool.runExternally(*args, **kwargs)
>>>>>
>>>>> File "/usr/share/vdsm/storage/processPool.py", line 170, in
>>>>> runExternally
>>>>>
>>>>> raise err
>>>>>
>>>>> OSError: [Errno 22] Invalid argument:
>>>>> '/rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata'
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,044::task::872::TaskManager.Task::(_run)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::Task._run:
>>>>> fdeab954-7958-4cf3-8dd3-09e06329bf92 (6,
>>>>> '711293b8-019c-4f41-8cab-df03dd843556', 'dfdf', '10.1.20.7:/sd2', 1,
>>>>> '0') {} failed - stopping task
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,044::task::1199::TaskManager.Task::(stop)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::stopping in state
>>>>> preparing (force False)
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,044::task::978::TaskManager.Task::(_decref)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::ref 1 aborting True
>>>>>
>>>>> Thread-21027::INFO::2012-06-16
>>>>> 15:43:22,045::task::1157::TaskManager.Task::(prepare)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::aborting: Task is aborted:
>>>>> u"[Errno 22] Invalid argument:
>>>>> '/rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata'"
>>>>> - code 100
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,045::task::1162::TaskManager.Task::(prepare)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::Prepare: aborted:
>>>>> [Errno 22] Invalid argument:
>>>>> '/rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata'
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,045::task::978::TaskManager.Task::(_decref)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::ref 0 aborting True
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,045::task::913::TaskManager.Task::(_doAbort)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::Task._doAbort: force False
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,045::resourceManager::844::ResourceManager.Owner::(cancelAll)
>>>>> Owner.cancelAll requests {}
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,046::task::588::TaskManager.Task::(_updateState)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::moving from state preparing
>>>>> -> state aborting
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,046::task::537::TaskManager.Task::(__state_aborting)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::_aborting: recover
>>>>> policy none
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,046::task::588::TaskManager.Task::(_updateState)
>>>>> Task=`fdeab954-7958-4cf3-8dd3-09e06329bf92`::moving from state aborting
>>>>> -> state failed
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,046::resourceManager::809::ResourceManager.Owner::(releaseAll)
>>>>> Owner.releaseAll requests {} resources {}
>>>>>
>>>>> Thread-21027::DEBUG::2012-06-16
>>>>> 15:43:22,047::resourceManager::844::ResourceManager.Owner::(cancelAll)
>>>>> Owner.cancelAll requests {}
>>>>>
>>>>> Thread-21027::ERROR::2012-06-16
>>>>> 15:43:22,047::dispatcher::69::Storage.Dispatcher.Protect::(run)
>>>>> [Errno 22] Invalid argument:
>>>>> '/rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata'
>>>>>
>>>>> Traceback (most recent call last):
>>>>>
>>>>> File "/usr/share/vdsm/storage/dispatcher.py", line 61, in run
>>>>>
>>>>> result = ctask.prepare(self.func, *args, **kwargs)
>>>>>
>>>>> File "/usr/share/vdsm/storage/task.py", line 1164, in prepare
>>>>>
>>>>> raise self.error
>>>>>
>>>>> OSError: [Errno 22] Invalid argument:
>>>>> '/rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata'
>>>>>
>>>>> Thread-21035::DEBUG::2012-06-16
>>>>> 15:43:26,510::task::588::TaskManager.Task::(_updateState)
>>>>> Task=`283d6276-7bdd-4c35-ad6f-6e8514c277f0`::moving from state init
>>>>> -> state preparing
>>>>>
>>>>> Thread-21035::INFO::2012-06-16
>>>>> 15:43:26,510::logUtils::37::dispatcher::(wrapper) Run and protect:
>>>>> repoStats(options=None)
>>>>>
>>>>> Thread-21035::INFO::2012-06-16
>>>>> 15:43:26,511::logUtils::39::dispatcher::(wrapper) Run and protect:
>>>>> repoStats, Return response: {}
>>>>>
>>>>> Thread-21035::DEBUG::2012-06-16
>>>>> 15:43:26,511::task::1172::TaskManager.Task::(prepare)
>>>>> Task=`283d6276-7bdd-4c35-ad6f-6e8514c277f0`::finished: {}
>>>>>
>>>>> Thread-21035::DEBUG::2012-06-16
>>>>> 15:43:26,511::task::588::TaskManager.Task::(_updateState)
>>>>> Task=`283d6276-7bdd-4c35-ad6f-6e8514c277f0`::moving from state preparing
>>>>> -> state finished
>>>>>
>>>>> Thread-21035::DEBUG::2012-06-16
>>>>> 15:43:26,511::resourceManager::809::ResourceManager.Owner::(releaseAll)
>>>>> Owner.releaseAll requests {} resources {}
>>>>>
>>>>> Thread-21035::DEBUG::2012-06-16
>>>>> 15:43:26,511::resourceManager::844::ResourceManager.Owner::(cancelAll)
>>>>> Owner.cancelAll requests {}
>>>>>
>>>>> Thread-21035::DEBUG::2012-06-16
>>>>> 15:43:26,511::task::978::TaskManager.Task::(_decref)
>>>>> Task=`283d6276-7bdd-4c35-ad6f-6e8514c277f0`::ref 0 aborting False
>>>>>
>>>>> ^C
>>>>>
>>>>> [root at noc-3-synt mnt]# cat
>>>>> /rhev/data-center/mnt/10.1.20.7:_sd2/711293b8-019c-4f41-8cab-df03dd843556/dom_md/metadata
>>>>>
>>>>> CLASS=Data
>>>>>
>>>>> DESCRIPTION=dfdf
>>>>>
>>>>> IOOPTIMEOUTSEC=1
>>>>>
>>>>> LEASERETRIES=3
>>>>>
>>>>> LEASETIMESEC=5
>>>>>
>>>>> LOCKPOLICY=
>>>>>
>>>>> LOCKRENEWALINTERVALSEC=5
>>>>>
>>>>> POOL_UUID=
>>>>>
>>>>> REMOTE_PATH=10.1.20.7:/sd2
>>>>>
>>>>> ROLE=Regular
>>>>>
>>>>> SDUUID=711293b8-019c-4f41-8cab-df03dd843556
>>>>>
>>>>> TYPE=SHAREDFS
>>>>>
>>>>> VERSION=0
>>>>>
>>>>> _SHA_CKSUM=2e3c4fc88aa713dedbb0d708375966e158327797
>>>>>
>>>>> *From:*Vijay Bellur [mailto:vbellur at redhat.com]
>>>>> *Sent:* Saturday, June 16, 2012 10:19 PM
>>>>> *To:* Robert Middleswarth
>>>>> *Cc:* зоррыч; users at ovirt.org; Daniel Paikov
>>>>> *Subject:* Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
>>>>>
>>>>>
>>>>>
>>>>> On 06/16/2012 11:08 AM, Robert Middleswarth wrote:
>>>>>
>>>>> I am seeing the same thing. I also notice that glusterfs seems to
>>>>> die every time I try. I wonder if this could be a glusterfs / f17 issue.
>>>>>
>>>>>
>>>>> Are you running GlusterFS 3.2.x in Fedora 17? For this volume
>>>>> creation to complete successfully from oVirt, you will need GlusterFS 3.3.0.
>>>>> You can download Fedora RPMs for 3.3.0 from:
>>>>>
>>>>> http://download.gluster.com/pub/gluster/glusterfs/3.3/LATEST/Fedora/
>>>>>
>>>>>
>>>>> -Vijay
>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users at ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>
>>
>>
>
>