This is what I got:
Error in GUI: Cannot add Storage. Internal error, Storage Connection
doesn't exist.
Permissions:
ls -la /media/ceva2/Ovirt/Storage/
total 8
drwxrwxr-x. 2 vdsm kvm 4096 Nov 6 09:04 .
drwxr-xr-x. 5 vdsm kvm 4096 Nov 2 10:55 ..
df | grep /media/ceva2
/dev/mapper/1ATA_WDC_WD2500BB-00GUA0_WD-WCAL73625324p2 144237688 124962468 11948376 92% /media/ceva2
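
Since the directory is owned by vdsm:kvm with rwx for owner and group, a quick
way to double-check what vdsm itself will see is to repeat the
read/write/execute probe as the vdsm user. A minimal sketch (not vdsm's own
code; run it with something like "sudo -u vdsm python check_access.py"):

import os

# check_access.py - roughly mirrors the R/W/X check that vdsm's
# fileSD.validateDirAccess delegates to its file utilities.
PATH = '/media/ceva2/Ovirt/Storage'

for name, mode in (('read', os.R_OK), ('write', os.W_OK), ('exec', os.X_OK)):
    print('%-5s %s' % (name, 'OK' if os.access(PATH, mode) else 'DENIED'))
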
This is with the latest nightly builds of ovirt and vdsm.
This is the error from vdsm:
Thread-366::DEBUG::2012-11-08
08:31:41,244::BindingXMLRPC::161::vds::(wrapper) [79.112.94.67]
Thread-366::DEBUG::2012-11-08
08:31:41,244::task::568::TaskManager.Task::(_updateState)
Task=`2b324a0a-96ef-4ecd-8ad3-b8588ed93fd5`::moving from state init ->
state preparing
Thread-366::INFO::2012-11-08
08:31:41,245::logUtils::37::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection(domType=4,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
'/media/ceva2/Ovirt/Storage', 'iqn': '', 'portal':
'', 'user': '',
'password': '******', 'id':
'00000000-0000-0000-0000-000000000000', 'port':
''}], options=None)
Thread-366::INFO::2012-11-08
08:31:41,245::logUtils::39::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection, Return response: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-366::DEBUG::2012-11-08
08:31:41,245::task::1151::TaskManager.Task::(prepare)
Task=`2b324a0a-96ef-4ecd-8ad3-b8588ed93fd5`::finished: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-366::DEBUG::2012-11-08
08:31:41,245::task::568::TaskManager.Task::(_updateState)
Task=`2b324a0a-96ef-4ecd-8ad3-b8588ed93fd5`::moving from state preparing ->
state finished
Thread-366::DEBUG::2012-11-08
08:31:41,245::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-366::DEBUG::2012-11-08
08:31:41,245::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-366::DEBUG::2012-11-08
08:31:41,246::task::957::TaskManager.Task::(_decref)
Task=`2b324a0a-96ef-4ecd-8ad3-b8588ed93fd5`::ref 0 aborting False
Thread-367::DEBUG::2012-11-08
08:31:41,304::BindingXMLRPC::161::vds::(wrapper) [79.112.94.67]
Thread-367::DEBUG::2012-11-08
08:31:41,305::task::568::TaskManager.Task::(_updateState)
Task=`2844a5a4-7148-4d44-858c-fc75abab1a5f`::moving from state init ->
state preparing
Thread-367::INFO::2012-11-08
08:31:41,305::logUtils::37::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=4,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
'/media/ceva2/Ovirt/Storage', 'iqn': '', 'portal':
'', 'user': '',
'password': '******', 'id':
'00000000-0000-0000-0000-000000000000', 'port':
''}], options=None)
Thread-367::ERROR::2012-11-08
08:31:41,430::hsm::2057::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 2054, in connectStorageServer
conObj.connect()
File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
if not self.checkTarget():
File "/usr/share/vdsm/storage/storageServer.py", line 449, in checkTarget
fileSD.validateDirAccess(self._path))
File "/usr/share/vdsm/storage/fileSD.py", line 51, in validateDirAccess
getProcPool().fileUtils.validateAccess(dirPath)
File "/usr/share/vdsm/storage/remoteFileHandler.py", line 277, in
callCrabRPCFunction
*args, **kwargs)
File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in
callCrabRPCFunction
rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in _recvAll
timeLeft):
File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
return GeneratorContextManager(func(*args, **kwds))
File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in _poll
raise Timeout()
Timeout
Thread-367::INFO::2012-11-08
08:31:41,432::logUtils::39::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 100,
'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-367::DEBUG::2012-11-08
08:31:41,433::task::1151::TaskManager.Task::(prepare)
Task=`2844a5a4-7148-4d44-858c-fc75abab1a5f`::finished: {'statuslist':
[{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-367::DEBUG::2012-11-08
08:31:41,433::task::568::TaskManager.Task::(_updateState)
Task=`2844a5a4-7148-4d44-858c-fc75abab1a5f`::moving from state preparing ->
state finished
Thread-367::DEBUG::2012-11-08
08:31:41,434::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-367::DEBUG::2012-11-08
08:31:41,434::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-367::DEBUG::2012-11-08
08:31:41,435::task::957::TaskManager.Task::(_decref)
Task=`2844a5a4-7148-4d44-858c-fc75abab1a5f`::ref 0 aborting False
Thread-370::DEBUG::2012-11-08
08:31:41,784::BindingXMLRPC::161::vds::(wrapper) [79.112.94.67]
Thread-370::DEBUG::2012-11-08
08:31:41,784::task::568::TaskManager.Task::(_updateState)
Task=`aa8526b6-0772-4909-bad3-b2cd7ad589cc`::moving from state init ->
state preparing
Thread-370::INFO::2012-11-08
08:31:41,784::logUtils::37::dispatcher::(wrapper) Run and protect:
disconnectStorageServer(domType=4,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
'/media/ceva2/Ovirt/Storage', 'iqn': '', 'portal':
'', 'user': '',
'password': '******', 'id':
'00000000-0000-0000-0000-000000000000', 'port':
''}], options=None)
Thread-370::DEBUG::2012-11-08
08:31:41,785::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)
Thread-370::DEBUG::2012-11-08
08:31:41,785::misc::1028::SamplingMethod::(__call__) Got in to sampling
method
Thread-370::DEBUG::2012-11-08
08:31:41,785::misc::1026::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)
Thread-370::DEBUG::2012-11-08
08:31:41,785::misc::1028::SamplingMethod::(__call__) Got in to sampling
method
Thread-370::DEBUG::2012-11-08
08:31:41,785::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/iscsiadm -m session -R' (cwd None)
Thread-370::DEBUG::2012-11-08
08:31:41,810::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> =
'iscsiadm: No session found.\n'; <rc> = 21
Thread-370::DEBUG::2012-11-08
08:31:41,811::misc::1036::SamplingMethod::(__call__) Returning last result
MainProcess|Thread-370::DEBUG::2012-11-08
08:31:41,814::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host0/scan' (cwd None)
MainProcess|Thread-370::DEBUG::2012-11-08
08:31:41,817::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host1/scan' (cwd None)
MainProcess|Thread-370::DEBUG::2012-11-08
08:31:41,820::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host2/scan' (cwd None)
MainProcess|Thread-370::DEBUG::2012-11-08
08:31:41,822::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host3/scan' (cwd None)
MainProcess|Thread-370::DEBUG::2012-11-08
08:31:42,827::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host4/scan' (cwd None)
MainProcess|Thread-370::DEBUG::2012-11-08
08:31:42,835::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
of=/sys/class/scsi_host/host5/scan' (cwd None)
MainProcess|Thread-370::DEBUG::2012-11-08
08:31:42,842::iscsi::388::Storage.ISCSI::(forceIScsiScan) Performing SCSI
scan, this will take up to 30 seconds
Thread-370::DEBUG::2012-11-08
08:31:44,846::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/sbin/multipath' (cwd None)
Thread-370::DEBUG::2012-11-08
08:31:44,902::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
Thread-370::DEBUG::2012-11-08
08:31:44,903::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-370::DEBUG::2012-11-08
08:31:44,904::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-370::DEBUG::2012-11-08
08:31:44,905::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-370::DEBUG::2012-11-08
08:31:44,905::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-370::DEBUG::2012-11-08
08:31:44,906::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-370::DEBUG::2012-11-08
08:31:44,906::lvm::510::OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-370::DEBUG::2012-11-08
08:31:44,907::misc::1036::SamplingMethod::(__call__) Returning last result
Thread-370::INFO::2012-11-08
08:31:44,907::logUtils::39::dispatcher::(wrapper) Run and protect:
disconnectStorageServer, Return response: {'statuslist': [{'status': 0,
'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-370::DEBUG::2012-11-08
08:31:44,908::task::1151::TaskManager.Task::(prepare)
Task=`aa8526b6-0772-4909-bad3-b2cd7ad589cc`::finished: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-370::DEBUG::2012-11-08
08:31:44,908::task::568::TaskManager.Task::(_updateState)
Task=`aa8526b6-0772-4909-bad3-b2cd7ad589cc`::moving from state preparing ->
state finished
Thread-370::DEBUG::2012-11-08
08:31:44,909::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-370::DEBUG::2012-11-08
08:31:44,909::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-370::DEBUG::2012-11-08
08:31:44,910::task::957::TaskManager.Task::(_decref)
Task=`aa8526b6-0772-4909-bad3-b2cd7ad589cc`::ref 0 aborting False
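
For what it's worth, the traceback above doesn't look like a plain permissions
failure: connectStorageServer hands the validateAccess call to a separate
file-handler process and polls it with a deadline, so the Timeout means the
helper never answered, not that access was denied. A rough sketch of that
poll-with-deadline pattern (my own illustration, not vdsm's code):

import os
import select
import time

class Timeout(Exception):
    pass

def recv_exact(fd, nbytes, timeout):
    # Read exactly nbytes from the pipe to the helper process,
    # giving up with Timeout when it stops answering.
    deadline = time.time() + timeout
    data = b''
    while len(data) < nbytes:
        left = deadline - time.time()
        if left <= 0 or not select.select([fd], [], [], left)[0]:
            raise Timeout()
        chunk = os.read(fd, nbytes - len(data))
        if not chunk:
            raise EOFError('helper closed the pipe')
        data += chunk
    return data

So the real question seems to be why the helper process never replied for a
plain local directory.
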
On Wed, Nov 7, 2012 at 10:00 PM, Itamar Heim <iheim@redhat.com> wrote:
On 11/07/2012 02:59 PM, Cristian Falcas wrote:
> Bummer.
>
> And the sdk from nightly is not working because it's too new, from what
> I saw on the mailing list?
>
> I have errors even when I'm trying to add the storage manually, from the
> engine. Should I give up on nightly and use the beta ones, or is there
> another solution for this?
>
Shouldn't fail from admin. Which vdsm version? Logs?
>
> On Wed, Nov 7, 2012 at 3:47 PM, Moran Goldboim <mgoldboi@redhat.com> wrote:
>
> you are using ovirt-sdk from the stable repo and the engine from the nightly
> repo; those don't work together.
>
>
>
> On 11/07/2012 01:40 PM, Cristian Falcas wrote:
>
> Hi all,
>
> Can someone help me with this error:
>
> AIO: Adding Local Datacenter and cluster...
> [ ERROR ]
> Error: could not create ovirtsdk API object
>
>
> trace from the log file
>
> 2012-11-07 13:34:44::DEBUG::all_in_one_100::220::root::
> Initiating the API object
> 2012-11-07 13:34:44::ERROR::all_in_one_100::231::root::
>
> Traceback (most recent call last):
>   File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 228, in initAPI
>     ca_file=basedefs.FILE_CA_CRT_SRC,
>
> TypeError: __init__() got an unexpected keyword argument 'ca_file'
>
> 2012-11-07 13:34:44::DEBUG::setup_sequences::62::root::
>
> Traceback (most recent call last):
>   File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
>     function()
>   File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 232, in initAPI
>     raise Exception(ERROR_CREATE_API_OBJECT)
>
> Exception: Error: could not create ovirtsdk API object
>
>
> Versions installed:
>
> ovirt-engine-3.1.0-3.20121106.git6891171.fc17.noarch
> ovirt-engine-backend-3.1.0-3.20121106.git6891171.fc17.noarch
> ovirt-engine-cli-3.1.0.6-1.fc17.noarch
> ovirt-engine-config-3.1.0-3.20121106.git6891171.fc17.noarch
> ovirt-engine-dbscripts-3.1.0-3.20121106.git6891171.fc17.noarch
> ovirt-engine-genericapi-3.1.0-3.20121106.git6891171.fc17.noarch
> ovirt-engine-notification-service-3.1.0-3.20121106.git6891171.fc17.noarch
> ovirt-engine-restapi-3.1.0-3.20121106.git6891171.fc17.noarch
> ovirt-engine-sdk-3.1.0.4-1.fc17.noarch
> ovirt-engine-setup-3.1.0-3.20121106.git6891171.fc17.noarch
> ovirt-engine-setup-plugin-allinone-3.1.0-3.20121106.git6891171.fc17.noarch
> ovirt-engine-tools-common-3.1.0-3.20121106.git6891171.fc17.noarch
> ovirt-engine-userportal-3.1.0-3.20121106.git6891171.fc17.noarch
> ovirt-engine-webadmin-portal-3.1.0-3.20121106.git6891171.fc17.noarch
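
Coming back to the ca_file TypeError quoted above: the all-in-one plugin passes
a ca_file keyword that the installed ovirt-engine-sdk (3.1.0.4) apparently does
not accept, which matches the stable-sdk-with-nightly-engine mismatch Moran
mentioned. A version-tolerant call could look roughly like this (just a sketch;
apart from ca_file, the keyword names are my assumption about the SDK
constructor):

from ovirtsdk.api import API

def init_api(url, user, password, ca_file):
    try:
        # Newer SDK builds accept ca_file for TLS verification.
        return API(url=url, username=user, password=password, ca_file=ca_file)
    except TypeError:
        # Older SDK (e.g. 3.1.0.4) has no ca_file parameter.
        return API(url=url, username=user, password=password)

The proper fix is still to take the SDK and the engine from the same repo.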