Attached is the vdsm log after trying once with the CLI and once from
the GUI.
On Sun, Nov 11, 2012 at 1:03 AM, Ayal Baron <abaron@redhat.com> wrote:
----- Original Message -----
>
> I installed only the 3.1 versions of all packages. I disabled the
> firewall and SELinux.
>
> I tried to add a new storage domain via the GUI and via the CLI:
> [oVirt shell (connected)]# create storagedomain --host-name
> local_host --storage-type localfs --storage-path
> /media/ceva2/Ovirt/Storage --name test --type "data"
>
> error:
> status: 400
> reason: Bad Request
> detail: Cannot add Storage. Internal error, Storage Connection
> doesn't exist.
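
For reference, a rough Python-SDK equivalent of that CLI call (a
sketch only: the engine URL and credentials are placeholders, and the
exact params layout may vary across 3.x SDK builds):

    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    api = API(url='https://localhost:8443/api',
              username='admin@internal', password='password')

    # Same request the CLI issues: a local 'data' domain backed by a
    # directory on the host.
    api.storagedomains.add(params.StorageDomain(
        name='test',
        type_='data',
        host=params.Host(name='local_host'),
        storage=params.Storage(type_='localfs',
                               path='/media/ceva2/Ovirt/Storage')))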
Can you attach the full vdsm.log?
>
>
> And I see the same exception:
>
> Thread-3280::DEBUG::2012-11-10
> 17:14:27,989::BindingXMLRPC::161::vds::(wrapper) [79.112.99.231]
> Thread-3280::DEBUG::2012-11-10
> 17:14:27,990::task::568::TaskManager.Task::(_updateState)
> Task=`f9dba1a1-365d-4e00-8a54-3dbc70604460`::moving from state init
> -> state preparing
> Thread-3280::INFO::2012-11-10
> 17:14:27,991::logUtils::37::dispatcher::(wrapper) Run and protect:
> validateStorageServerConnection(domType=4,
> spUUID='00000000-0000-0000-0000-000000000000',
> conList=[{'connection': '/media/ceva2/Ovirt/Storage/', 'iqn': '',
> 'portal': '', 'user': '', 'password': '******', 'id':
> '00000000-0000-0000-0000-000000000000', 'port': ''}],
> options=None)
> Thread-3280::INFO::2012-11-10
> 17:14:27,991::logUtils::39::dispatcher::(wrapper) Run and protect:
> validateStorageServerConnection, Return response: {'statuslist':
> [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
> Thread-3280::DEBUG::2012-11-10
> 17:14:27,991::task::1151::TaskManager.Task::(prepare)
> Task=`f9dba1a1-365d-4e00-8a54-3dbc70604460`::finished:
> {'statuslist': [{'status': 0, 'id':
> '00000000-0000-0000-0000-000000000000'}]}
> Thread-3280::DEBUG::2012-11-10
> 17:14:27,991::task::568::TaskManager.Task::(_updateState)
> Task=`f9dba1a1-365d-4e00-8a54-3dbc70604460`::moving from state
> preparing -> state finished
> Thread-3280::DEBUG::2012-11-10
> 17:14:27,991::resourceManager::809::ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> Thread-3280::DEBUG::2012-11-10
> 17:14:27,991::resourceManager::844::ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-3280::DEBUG::2012-11-10
> 17:14:27,991::task::957::TaskManager.Task::(_decref)
> Task=`f9dba1a1-365d-4e00-8a54-3dbc70604460`::ref 0 aborting False
> Thread-3281::DEBUG::2012-11-10
> 17:14:28,049::BindingXMLRPC::161::vds::(wrapper) [79.112.99.231]
> Thread-3281::DEBUG::2012-11-10
> 17:14:28,049::task::568::TaskManager.Task::(_updateState)
> Task=`430ededc-db77-43cb-a21e-c89cd70772f6`::moving from state init
> -> state preparing
> Thread-3281::INFO::2012-11-10
> 17:14:28,049::logUtils::37::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=4,
> spUUID='00000000-0000-0000-0000-000000000000',
> conList=[{'connection': '/media/ceva2/Ovirt/Storage/', 'iqn': '',
> 'portal': '', 'user': '', 'password': '******', 'id':
> '00000000-0000-0000-0000-000000000000', 'port': ''}],
> options=None)
> Thread-3281::ERROR::2012-11-10
> 17:14:28,164::hsm::2057::Storage.HSM::(connectStorageServer) Could
> not connect to storageServer
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/hsm.py", line 2054, in
> connectStorageServer
> conObj.connect()
> File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
> if not self.checkTarget():
> File "/usr/share/vdsm/storage/storageServer.py", line 449, in
> checkTarget
> fileSD.validateDirAccess(self._path))
> File "/usr/share/vdsm/storage/fileSD.py", line 51, in
> validateDirAccess
> getProcPool().fileUtils.validateAccess(dirPath)
> File "/usr/share/vdsm/storage/remoteFileHandler.py", line 277, in
> callCrabRPCFunction
> *args, **kwargs)
> File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in
> callCrabRPCFunction
> rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
> File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in
> _recvAll
> timeLeft):
> File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
> return GeneratorContextManager(func(*args, **kwds))
> File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in
> _poll
> raise Timeout()
> Timeout
> Thread-3281::INFO::2012-11-10
> 17:14:28,165::logUtils::39::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status':
> 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
> Thread-3281::DEBUG::2012-11-10
> 17:14:28,165::task::1151::TaskManager.Task::(prepare)
> Task=`430ededc-db77-43cb-a21e-c89cd70772f6`::finished:
> {'statuslist': [{'status': 100, 'id':
> '00000000-0000-0000-0000-000000000000'}]}
> Thread-3281::DEBUG::2012-11-10
> 17:14:28,165::task::568::TaskManager.Task::(_updateState)
> Task=`430ededc-db77-43cb-a21e-c89cd70772f6`::moving from state
> preparing -> state finished
> Thread-3281::DEBUG::2012-11-10
> 17:14:28,165::resourceManager::809::ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> Thread-3281::DEBUG::2012-11-10
> 17:14:28,166::resourceManager::844::ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-3281::DEBUG::2012-11-10
> 17:14:28,166::task::957::TaskManager.Task::(_decref)
> Task=`430ededc-db77-43cb-a21e-c89cd70772f6`::ref 0 aborting False
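
The Timeout above comes out of vdsm's out-of-process file handler:
validateDirAccess hands the access check to a helper process, and the
caller polls the helper's pipe with a deadline. A minimal sketch of
that receive loop (illustrative only; names and signatures here are
not vdsm's actual ones):

    import os
    import select
    import time

    class Timeout(Exception):
        pass

    def recv_all(fd, length, timeout):
        # Poll the helper's pipe until `length` bytes arrive or the
        # deadline passes, mirroring the remoteFileHandler
        # _recvAll/_poll frames in the traceback above.
        poller = select.poll()
        poller.register(fd, select.POLLIN)
        deadline = time.time() + timeout
        data = b''
        while len(data) < length:
            time_left_ms = (deadline - time.time()) * 1000  # poll() wants ms
            if time_left_ms <= 0 or not poller.poll(time_left_ms):
                raise Timeout()
            chunk = os.read(fd, length - len(data))
            if not chunk:
                raise EOFError('helper process closed the pipe')
            data += chunk
        return data

If the helper never answers within the deadline (for example because
the access check on /media/ceva2/Ovirt/Storage/ hangs), the caller
raises Timeout, which is why connectStorageServer returns status 100
in the log above.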
>
>
> Packages installed:
>
> [root@localhost vdsm]# rpm -qa | egrep vdsm\|ovirt-engine | sort
> ovirt-engine-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> ovirt-engine-backend-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> ovirt-engine-cli-3.1.0.6-1.fc17.noarch
> ovirt-engine-config-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> ovirt-engine-dbscripts-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> ovirt-engine-genericapi-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> ovirt-engine-notification-service-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> ovirt-engine-restapi-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> ovirt-engine-sdk-3.1.0.4-1.fc17.noarch
> ovirt-engine-setup-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> ovirt-engine-setup-plugin-allinone-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> ovirt-engine-tools-common-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> ovirt-engine-userportal-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> ovirt-engine-webadmin-portal-3.1.0-3.20121109.git2dc9b51.fc17.noarch
> vdsm-4.10.1-0.119.git60d7e63.fc17.x86_64
> vdsm-bootstrap-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-cli-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-debuginfo-4.10.1-0.119.git60d7e63.fc17.x86_64
> vdsm-debug-plugin-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-gluster-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-directlun-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-faqemu-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-fileinject-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-floppy-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-hostusb-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-hugepages-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-isolatedprivatevlan-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-numa-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-pincpu-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-promisc-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-qemucmdline-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-qos-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-scratchpad-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-smartcard-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-smbios-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-sriov-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-vhostmd-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-hook-vmdisk-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-python-4.10.1-0.119.git60d7e63.fc17.x86_64
> vdsm-reg-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-rest-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-tests-4.10.1-0.119.git60d7e63.fc17.noarch
> vdsm-xmlrpc-4.10.1-0.119.git60d7e63.fc17.noarch
>
>
>
>
>
> On Fri, Nov 9, 2012 at 6:16 PM, Steve Gordon <sgordon@redhat.com>
> wrote:
>
>
>
>
>
> ----- Original Message -----
> > From: "Steve Gordon" < sgordon(a)redhat.com >
> > To: "Moran Goldboim" < mgoldboi(a)redhat.com >
> > Cc: users(a)ovirt.org , "Cristian Falcas" < cristi.falcas(a)gmail.com
>
> > Sent: Friday, November 9, 2012 11:11:23 AM
> > Subject: Re: [Users] allinone setup can't add storage
> >
> > ----- Original Message -----
> > > From: "Moran Goldboim" < mgoldboi(a)redhat.com >
> > > To: "Cristian Falcas" < cristi.falcas(a)gmail.com >
> > > Cc: users(a)ovirt.org
> > > Sent: Wednesday, November 7, 2012 8:47:08 AM
> > > Subject: Re: [Users] allinone setup can't add storage
> > >
> > > you are using ovirt-sdk from the stable repo and engine from the
> > > nightly repo; those don't work together.
> >
> > Funnily enough, you will find it is actually in Fedora's stable
> > updates repository, and you can tell from the dist tag that that is
> > where this user (and many others) picked it up from. Refer to:
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=851893
> Correction, this is the BZ:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=869457
>
>
>
> >
> > https://admin.fedoraproject.org/updates/FEDORA-2012-14464/ovirt-engine-sd...
> >
> > This has been causing a variety of issues for users since it was
> > pushed in early October. Here are some others from my logs:
> >
> > Oct 16 06:13:04 <gestahlt> AIO: Error: could not create ovirtsdk
> > API object
> > Oct 23 19:58:03 <Rudde> okey here is my Error "Exception: Error:
> > could not create ovirtsdk API object
> > Oct 27 08:13:32 <yaro014> after yum update i'm getting error
> > "ovirt Error: could not create ovirtsdk API object" while
> > engine-setup, could anyone advice ? cannot find anything in google
> >
> > It breaks installation of oVirt 3.1 AIO on F17 and also, I believe,
> > some of the tools for "normal" installs. In future it would be
> > better if we did not push incompatible versions to stable Fedora
> > releases; it directly contravenes Fedora's updates policy.
> >
> > Thanks,
> >
> > Steve
> >
> > > On 11/07/2012 01:40 PM, Cristian Falcas wrote:
> > > > Hi all,
> > > >
> > > > Can someone help me with this error:
> > > >
> > > > AIO: Adding Local Datacenter and cluster... [
> > > > ERROR ]
> > > > Error: could not create ovirtsdk API object
> > > >
> > > >
> > > > trace from the log file
> > > >
> > > > 2012-11-07 13:34:44::DEBUG::all_in_one_100::220::root::
> > > > Initiating
> > > > the
> > > > API object
> > > > 2012-11-07 13:34:44::ERROR::all_in_one_100::231::root::
> > > > Traceback
> > > > (most recent call last):
> > > > File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py",
> > > > line 228, in initAPI
> > > > ca_file=basedefs.FILE_CA_CRT_SRC,
> > > > TypeError: __init__() got an unexpected keyword argument
> > > > 'ca_file'
> > > >
> > > > 2012-11-07 13:34:44::DEBUG::setup_sequences::62::root::
> > > > Traceback
> > > > (most recent call last):
> > > > File "/usr/share/ovirt-engine/scripts/setup_sequences.py",
line
> > > > 60,
> > > > in run
> > > > function()
> > > > File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py",
> > > > line 232, in initAPI
> > > > raise Exception(ERROR_CREATE_API_OBJECT)
> > > > Exception: Error: could not create ovirtsdk API object
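
The TypeError above is the version mismatch in a nutshell: the nightly
all-in-one plugin passes ca_file to API.__init__(), but the stable
ovirt-engine-sdk 3.1.0.4 predates that argument. A defensive sketch of
what a caller could do (illustrative only; the URL, credentials, and
CA path are placeholders):

    from ovirtsdk.api import API

    kwargs = dict(url='https://localhost:8443/api',
                  username='admin@internal', password='password')
    try:
        # Newer SDK builds accept ca_file for TLS verification
        # (placeholder path; the plugin uses basedefs.FILE_CA_CRT_SRC).
        api = API(ca_file='/etc/pki/ovirt-engine/ca.pem', **kwargs)
    except TypeError:
        # ovirt-engine-sdk 3.1.0.4 (Fedora stable) has no ca_file
        # argument -- the exact mismatch in the traceback above.
        api = API(**kwargs)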
> > > >
> > > >
> > > > Versions installed:
> > > >
> > > > ovirt-engine-3.1.0-3.20121106.git6891171.fc17.noarch
> > > > ovirt-engine-backend-3.1.0-3.20121106.git6891171.fc17.noarch
> > > > ovirt-engine-cli-3.1.0.6-1.fc17.noarch
> > > > ovirt-engine-config-3.1.0-3.20121106.git6891171.fc17.noarch
> > > > ovirt-engine-dbscripts-3.1.0-3.20121106.git6891171.fc17.noarch
> > > > ovirt-engine-genericapi-3.1.0-3.20121106.git6891171.fc17.noarch
> > > > ovirt-engine-notification-service-3.1.0-3.20121106.git6891171.fc17.noarch
> > > > ovirt-engine-restapi-3.1.0-3.20121106.git6891171.fc17.noarch
> > > > ovirt-engine-sdk-3.1.0.4-1.fc17.noarch
> > > > ovirt-engine-setup-3.1.0-3.20121106.git6891171.fc17.noarch
> > > > ovirt-engine-setup-plugin-allinone-3.1.0-3.20121106.git6891171.fc17.noarch
> > > > ovirt-engine-tools-common-3.1.0-3.20121106.git6891171.fc17.noarch
> > > > ovirt-engine-userportal-3.1.0-3.20121106.git6891171.fc17.noarch
> > > > ovirt-engine-webadmin-portal-3.1.0-3.20121106.git6891171.fc17.noarch
> > > >
> > > >
> > >
> > >
> >
>
>