Re: [Users] Change locale for VNC console
by Itamar Heim
On 01/03/2013 01:38 PM, Frank Wall wrote:
> On Thu, Jan 03, 2013 at 11:20:34AM +0000, Alexandre Santos wrote:
>> Did you shut down the VM and start it again, or just restart the VNC connection?
>
> It was a complete shutdown; I have tried this multiple times.
Actually, changing a config value requires an engine restart.
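For example (just a sketch; I'm assuming the VncKeyboardLayout key is the
one you changed, and 'de' is only a sample value):
# on the engine host: set the value, then restart the engine service
engine-config -s VncKeyboardLayout=de
service ovirt-engine restart
# verify the stored value
engine-config -g VncKeyboardLayout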
[Users] Attaching export domain to dc fails
by Patrick Hurrelmann
Hi list,
in one datacenter I'm facing problems with my export storage. The DC is
of type single host with local storage. On the host I can see that the NFS
export domain is still connected, but the engine does not show this, and
therefore it cannot be used for exports or detached.
Trying to attach the export domain again fails. The following is
logged in vdsm:
Thread-1902159::ERROR::2013-01-24
17:11:45,474::task::853::TaskManager.Task::(_setError)
Task=`4bc15024-7917-4599-988f-2784ce43fbe7`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 861, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 960, in attachStorageDomain
pool.attachSD(sdUUID)
File "/usr/share/vdsm/storage/securable.py", line 63, in wrapper
return f(self, *args, **kwargs)
File "/usr/share/vdsm/storage/sp.py", line 924, in attachSD
dom.attach(self.spUUID)
File "/usr/share/vdsm/storage/sd.py", line 442, in attach
raise se.StorageDomainAlreadyAttached(pools[0], self.sdUUID)
StorageDomainAlreadyAttached: Storage domain already attached to pool:
'domain=cd23808b-136a-4b33-a80c-f2581eab022d,
pool=d95c53ca-9cef-4db2-8858-bf4937bd8c14'
It won't let me attach the export domain, saying that it is already
attached. Manually unmounting the export domain on the host results in
the same error on a subsequent attach.
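For reference, if the stale pool association is kept in the domain metadata
on the export itself (I'm assuming the usual NFS-domain layout here; the
mount path is illustrative), it should show up with something like:
# inspect the export domain metadata while it is still mounted on the host
grep POOL_UUID /rhev/data-center/mnt/<nfs-server>:_export/cd23808b-136a-4b33-a80c-f2581eab022d/dom_md/metadata
# presumably still listing pool d95c53ca-9cef-4db2-8858-bf4937bd8c14 from the error above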
This is on CentOS 6.3 using Dreyou's rpms. Installed versions on host:
vdsm.x86_64 4.10.0-0.44.14.el6
vdsm-cli.noarch 4.10.0-0.44.14.el6
vdsm-python.x86_64 4.10.0-0.44.14.el6
vdsm-xmlrpc.noarch 4.10.0-0.44.14.el6
Engine:
ovirt-engine.noarch 3.1.0-3.19.el6
ovirt-engine-backend.noarch 3.1.0-3.19.el6
ovirt-engine-cli.noarch 3.1.0.7-1.el6
ovirt-engine-config.noarch 3.1.0-3.19.el6
ovirt-engine-dbscripts.noarch 3.1.0-3.19.el6
ovirt-engine-genericapi.noarch 3.1.0-3.19.el6
ovirt-engine-jbossas711.x86_64 1-0
ovirt-engine-notification-service.noarch 3.1.0-3.19.el6
ovirt-engine-restapi.noarch 3.1.0-3.19.el6
ovirt-engine-sdk.noarch 3.1.0.5-1.el6
ovirt-engine-setup.noarch 3.1.0-3.19.el6
ovirt-engine-tools-common.noarch 3.1.0-3.19.el6
ovirt-engine-userportal.noarch 3.1.0-3.19.el6
ovirt-engine-webadmin-portal.noarch 3.1.0-3.19.el6
ovirt-image-uploader.noarch 3.1.0-16.el6
ovirt-iso-uploader.noarch 3.1.0-16.el6
ovirt-log-collector.noarch 3.1.0-16.el6
How can this be recovered to a sane state? If more information is
needed, please do not hesitate to request it.
Thanks and regards
Patrick
--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
[Users] storage domain auto re-cover
by Alex Leonhardt
Hi,
Is it possible to set a storage domain to auto-recover / auto-reactivate?
E.g. after I restart a host that runs a storage domain, I want oVirt Engine
to make that storage domain active again after the host has come up.
Thanks,
Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
[Users] VM migration failed on oVirt Node Hypervisor release 2.5.5 (0.1.fc17) : empty cacert.pem
by Kevin Maziere Aubry
Hi
My issue is with oVirt Node Hypervisor release 2.5.5 (0.1.fc17), using the
ISO downloaded from the oVirt site.
I've installed and connected 4 nodes to a manager, and tried to migrate a VM
between hypervisors.
It always fails with the error:
libvirtError: operation failed: Failed to connect to remote libvirt URI
qemu+tls://172.16.6.3/system
where 172.16.6.3 is the IP of a node.
I've checked on the node, and port 16514 is open.
I also tested the virsh command to get a better error message:
virsh -c tls://172.16.6.3/system
error: Unable to import client certificate /etc/pki/CA/cacert.pem
I've checked the cert file on the oVirt node and found it was empty; it is
empty on all nodes installed from the oVirt ISO.
I also checked /config/etc/pki/CA/cacert.pem, which is also empty.
On a vdsm host installed from packages on Fedora 17, it works:
ls -al /etc/pki/CA/cacert.pem
lrwxrwxrwx. 1 root root 30 18 janv. 14:30 /etc/pki/CA/cacert.pem ->
/etc/pki/vdsm/certs/cacert.pem
And the cert is good.
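As a possible workaround (untested, and assuming /etc/pki/vdsm/certs/cacert.pem
is populated on the node once it has been registered to the engine), I guess
the node could be given the same file by hand:
# copy the vdsm CA cert over the empty file and keep the change across reboots
cp /etc/pki/vdsm/certs/cacert.pem /etc/pki/CA/cacert.pem
persist /etc/pki/CA/cacert.pem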
I've seen no bug report regarding this issue...
Kevin
--
Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
http://www.alterway.fr
Re: [Users] Problems when trying to delete a snapshot
by Eduardo Warszawski
----- Original Message -----
>
> Hi,
> I recovered from this error by importing my base-image into a new machine
> and restoring from the backups.
>
> But is it possible to merge the latest snapshot into the base-image "by
> hand" to get a new VM up and running with the old disk image?
>
Looking at your vdsm logs, the snapshot should be intact, so it can be
manually restored to the previous state. Please restore the image dirs,
removing the "old" and "orig" dirs you have.
You need to change the engine DB accordingly too.
Later you can retry the merge.
Regards.
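If you do end up merging by hand after that, the usual qemu-img approach (a
rough sketch only, using the volume names from your listing quoted below;
back up both files first) would be:
# show the backing chain of the snapshot volume
qemu-img info 8ede8e53-1323-442b-84f2-3c94114c64cf
# fold the snapshot back into its base volume (b4a43421-728b-4204-a389-607221d945b7)
qemu-img commit 8ede8e53-1323-442b-84f2-3c94114c64cf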
> I have tried with qemu-img but have no go with it.
>
> Regards //Ricky
>
>
> On 2012-12-30 16:57, Haim Ateya wrote:
> > Hi Ricky,
> >
> > From going over your logs, it seems like the create snapshot operation
> > failed; it's logged clearly in both the engine and vdsm logs [1]. Did
> > you try to delete this snapshot, or was it a different one? If so, I'm
> > not sure it's worth debugging.
> >
> > bee7-78e7d1cbc201, vmId=d41b4ebe-3631-4bc1-805c-d762c636ca5a), log
> > id: 46d21393 2012-12-13 10:40:24,372 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Failed in SnapshotVDS method
> > 2012-12-13 10:40:24,372 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Error code SNAPSHOT_FAILED and error
> > message VDSGenericException: VDSErrorException: Failed to
> > SnapshotVDS, error = Snapshot failed 2012-12-13 10:40:24,372 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Command
> > org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return
> > value Class Name:
> > org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
> >
> >
> mStatus Class Name:
> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
> > mCode 48 mMessage
> > Snapshot failed
> >
> >
> >
> > enter/6d91788c-99d9-11e1-b913-78e7d1cbc201/mastersd/master/tasks/21cbcc25-7672-4704-a414-a44f5e9944ed
> > temp
> > /rhev/data-center/6d91788c-99d9-11e1-b913-78e7d1cbc201/mastersd/maste
> >
> >
> r/tasks/21cbcc25-7672-4704-a414-a44f5e9944ed.temp
> > 21cbcc25-7672-4704-a414-a44f5e9944ed::ERROR::2012-12-14
> > 10:48:41,189::volume::492::Storage.Volume::(create) Unexpected
> > error Traceback (most recent call last): File
> > "/usr/share/vdsm/storage/volume.py", line 475, in create
> > srcVolUUID, imgPath, volPath) File
> > "/usr/share/vdsm/storage/fileVolume.py", line 138, in _create
> > oop.getProcessPool(dom.sdUUID).createSparseFile(volPath,
> > sizeBytes) File "/usr/share/vdsm/storage/remoteFileHandler.py",
> > line 277, in callCrabRPCFunction *args, **kwargs) File
> > "/usr/share/vdsm/storage/remoteFileHandler.py", line 195, in
> > callCrabRPCFunction raise err IOError: [Errno 27] File too large
> >
> > 2012-12-13 10:40:24,372 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Vds: virthost01 2012-12-13
> > 10:40:24,372 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase]
> > (pool-5-thread-50) [12561529] Command SnapshotVDS execution failed.
> > Exception: VDSErrorException: VDSGenericException:
> > VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed
> > 2012-12-13 10:40:24,373 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
> > (pool-5-thread-50) [12561529] FINISH, SnapshotVDSCommand, log id:
> > 46d21393 2012-12-13 10:40:24,373 ERROR
> > [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
> > (pool-5-thread-50) [12561529] Wasnt able to live snpashot due to
> > error: VdcBLLException: VdcBLLException:
> > org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> > VDSGenericException: VDSErrorException: Failed to SnapshotVDS,
> > error = Snapshot failed, rolling back. 2012-12-13 10:40:24,376
> > ERROR [org.ovirt.engine.core.bll.CreateSnapshotCommand]
> > (pool-5-thread-50) [4fd6c4e4] Ending command with failure:
> > org.ovirt.engine.core.bll.CreateSnapshotCommand 2012-12-13 1
> >
> > 21cbcc25-7672-4704-a414-a44f5e9944ed::ERROR::2012-12-14
> > 10:48:41,196::task::833::TaskManager.Task::(_setError)
> > Task=`21cbcc25-7672-4704-a414-a44f5e9944ed`::Unexpected error
> > Traceback (most recent call last): File
> > "/usr/share/vdsm/storage/task.py", line 840, in _run return
> > fn(*args, **kargs) File "/usr/share/vdsm/storage/task.py", line
> > 307, in run return self.cmd(*self.argslist, **self.argsdict) File
> > "/usr/share/vdsm/storage/securable.py", line 68, in wrapper return
> > f(self, *args, **kwargs) File "/usr/share/vdsm/storage/sp.py", line
> > 1903, in createVolume srcImgUUID=srcImgUUID,
> > srcVolUUID=srcVolUUID) File "/usr/share/vdsm/storage/fileSD.py",
> > line 258, in createVolume volUUID, desc, srcImgUUID, srcVolUUID)
> > File "/usr/share/vdsm/storage/volume.py", line 494, in create
> > (volUUID, e)) VolumeCreationError: Error creating a new volume:
> > ('Volume creation 6da02c1e-5ef5-4fab-9ab2-bb081b35e7b3 failed:
> > [Errno 27] File too large',)
> >
> >
> >
> > ----- Original Message -----
> >> From: "Ricky Schneberger" <ricky(a)schneberger.se> To: "Haim Ateya"
> >> <hateya(a)redhat.com> Cc: users(a)ovirt.org Sent: Thursday, December
> >> 20, 2012 5:52:10 PM Subject: Re: [Users] Problems when trying to
> >> delete a snapshot
> >>
> > Hi, the task did not finish, but it broke my VM. What I have right
> > now is a VM with a base-image and a snapshot that I need to merge
> > together so I can import the disk into a new VM.
> >
> > I have attached the logs and even the output from the
> > tree-command.
> >
> > Regards //
> >
> > Ricky
> >
> >
> >
> > On 2012-12-16 08:35, Haim Ateya wrote:
> >>>> Please attach the full engine and vdsm logs from the SPM machine.
> >>>> Also, did the task finish? Please run the tree command for
> >>>> /rhev/data-center/.
> >>>>
> >>>> ----- Original Message -----
> >>>>> From: "Ricky Schneberger" <ricky(a)schneberger.se> To:
> >>>>> users(a)ovirt.org Sent: Friday, December 14, 2012 3:16:58 PM
> >>>>> Subject: [Users] Problems when trying to delete a snapshot
> >>>>>
> >>>> I was trying to delete a snapshot from one of my VMs and
> >>>> everything started fine.
> >>>>
> >>>> The disk image is a thin-provisioned 100GB disk with 8GB of
> >>>> data. I had just one snapshot, and it was that one I started
> >>>> to delete. After more than two hours I looked in the folder
> >>>> with that VM's disk images and found a newly created file
> >>>> with a size of around 650GB, and it was still growing.
> >>>>
> >>>> -rw-rw----. 1 vdsm kvm 8789950464 14 dec 12.23
> >>>> 8ede8e53-1323-442b-84f2-3c94114c64cf -rw-r--r--. 1 vdsm kvm
> >>>> 681499951104 14 dec 14.10
> >>>> 8ede8e53-1323-442b-84f2-3c94114c64cf_MERGE -rw-r--r--. 1 vdsm
> >>>> kvm 272 14 dec 12.24
> >>>> 8ede8e53-1323-442b-84f2-3c94114c64cf.meta -rw-rw----. 1 vdsm
> >>>> kvm 107382439936 6 jun 2012
> >>>> b4a43421-728b-4204-a389-607221d945b7 -rw-r--r--. 1 vdsm kvm
> >>>> 282 14 dec 12.24 b4a43421-728b-4204-a389-607221d945b7.meta
> >>>>
> >>>> Any idea what is happening?
> >>>>
> >>>> Regards
> >>>>>
> >>>>>
> >>
>
> - --
> Ricky Schneberger
>
> - ------------------------------------
> "Not using free exhaust energy to help your engine breathe is
> downright
> criminal"
[Users] Installing a lab setup from scratch using F18
by Joop
As promised on IRC (jvandewege), I'll post my findings on setting up an
oVirt lab environment from scratch using F18.
First some background:
- 2 hosts for testing storage cluster with replicated gluster data and
iso domains (HP ML110 G5)
- 2 hosts for VMs (HP DL360 G5?)
- 1 management server (HP ML110 G5)
All physical servers have at least a 1Gb connection, and they also have two
10Gb Ethernet ports connected to two Arista switches.
The complete setup (except for the management server) is redundant. I'm using
the F18-x64 DVD with a minimal server install plus extra tools; after install,
the ovirt.repo and the beta gluster repo are activated.
This serves as a proof of concept for a bigger setup.
Problems so far:
- looks like F18 uses a different path to access video, since using the
defaults leads to garbled video; I need to use nomodeset as a kernel option
- upgrading the minimal install (yum upgrade) gives me kernel-3.7.2-204,
and the boot process halts with soft lockups on different CPUs; reverting
to 3.6.10-4.fc18.x86_64 fixes that. The management server is using the
3.7.2 kernel without problems, BUT it doesn't use libvirt/qemu-kvm/vdsm,
so my guess is that it's related.
- need to disable NetworkManager and enable network (and fix the ifcfg-xxx
files) to get networking going (see the sketch after this list)
- adding the storage hosts from the webui works, but after a reboot vdsm
does not start; the reason seems to be that the network isn't initialised
until all interfaces are done with their DHCP requests. There are 4
interfaces which use DHCP; setting those to BOOTPROTO=none seems to help.
- during deploy there is a warning about 'cannot set tuned profile'; it
seems harmless, but I hadn't seen that one until now.
- the deployment script discovers during deployment that the host ID of
the second storage server is identical to the first one and aborts the
deployment (blame HP!). Shouldn't it generate a unique one using uuidgen?
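For the record, the network-related items above come down to something like
this (a rough sketch of what I did; interface names and exact settings will
differ per host):
# switch from NetworkManager to the classic network service (F18/systemd)
systemctl disable NetworkManager.service
systemctl enable network.service
# and per interface, in /etc/sysconfig/network-scripts/ifcfg-em1 (name is just an example):
#   BOOTPROTO=none
#   NM_CONTROLLED=no
#   ONBOOT=yes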
Things that are OK so far:
- ovirt-engine setup (no problems with postgresql)
- creating/activating gluster volumes (no more deadlocks)
Adding the virt hosts has to wait till tomorrow; I had problems getting the
DVD ISO onto a USB stick, so I will probably burn a DVD to keep going.
Joop
[Users] cannot add gluster domain
by T-Sinjon
Hi everyone,
Recently I did a fresh install of oVirt 3.1 from http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/,
and the nodes use http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1...
When I add a gluster domain via NFS, a mount error occurs.
I have tried the mount manually on the node, but it fails without the -o nolock option:
# /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 my-gluster-ip:/gvol02/GlusterDomain /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
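I suppose the missing piece is rpc.statd; something like this ought to bring
it up (untested, and the unit names are my assumption for this node image):
# start rpcbind and the NFS lock service so rpc.statd is available
systemctl start rpcbind.service nfs-lock.service
systemctl enable rpcbind.service nfs-lock.service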
Below are the vdsm.log from the node and the engine.log; any help would be appreciated:
vdsm.log
Thread-12717::DEBUG::2013-01-22 09:19:02,261::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
Thread-12717::DEBUG::2013-01-22 09:19:02,261::task::588::TaskManager.Task::(_updateState) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state init -> state preparing
Thread-12717::INFO::2013-01-22 09:19:02,262::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-12717::INFO::2013-01-22 09:19:02,262::logUtils::39::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-12717::DEBUG::2013-01-22 09:19:02,262::task::1172::TaskManager.Task::(prepare) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-12717::DEBUG::2013-01-22 09:19:02,262::task::588::TaskManager.Task::(_updateState) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state preparing -> state finished
Thread-12717::DEBUG::2013-01-22 09:19:02,262::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-12717::DEBUG::2013-01-22 09:19:02,262::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-12717::DEBUG::2013-01-22 09:19:02,263::task::978::TaskManager.Task::(_decref) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::ref 0 aborting False
Thread-12718::DEBUG::2013-01-22 09:19:02,307::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
Thread-12718::DEBUG::2013-01-22 09:19:02,307::task::588::TaskManager.Task::(_updateState) Task=`c07a075a-a910-4bc3-9a33-b957d05ea270`::moving from state init -> state preparing
Thread-12718::INFO::2013-01-22 09:19:02,307::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '6463ca53-6c57-45f6-bb5c-45505891cae9', 'port': ''}], options=None)
Thread-12718::DEBUG::2013-01-22 09:19:02,467::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 my-gluster-ip:/gvol02/GlusterDomain /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain' (cwd None)
Thread-12718::ERROR::2013-01-22 09:19:02,486::hsm::1932::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 1929, in connectStorageServer
File "/usr/share/vdsm/storage/storageServer.py", line 256, in connect
File "/usr/share/vdsm/storage/storageServer.py", line 179, in connect
File "/usr/share/vdsm/storage/mount.py", line 190, in mount
File "/usr/share/vdsm/storage/mount.py", line 206, in _runcmd
MountError: (32, ";mount.nfs: rpc.statd is not running but is required for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or start statd.\nmount.nfs: an incorrect mount option was specified\n")
engine.log:
2013-01-22 17:19:20,073 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (ajp--0.0.0.0-8009-7) [25932203] START, ValidateStorageServerConnectionVDSCommand(vdsId = 626e37f4-5ee3-11e2-96fa-0030487c133e, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: null, connection: my-gluster-ip:/gvol02/GlusterDomain };]), log id: 303f4753
2013-01-22 17:19:20,095 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (ajp--0.0.0.0-8009-7) [25932203] FINISH, ValidateStorageServerConnectionVDSCommand, return: {00000000-0000-0000-0000-000000000000=0}, log id: 303f4753
2013-01-22 17:19:20,115 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp--0.0.0.0-8009-7) [25932203] Running command: AddStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2013-01-22 17:19:20,117 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-7) [25932203] START, ConnectStorageServerVDSCommand(vdsId = 626e37f4-5ee3-11e2-96fa-0030487c133e, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: 6463ca53-6c57-45f6-bb5c-45505891cae9, connection: my-gluster-ip:/gvol02/GlusterDomain };]), log id: 198f3eb4
2013-01-22 17:19:20,323 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-7) [25932203] FINISH, ConnectStorageServerVDSCommand, return: {6463ca53-6c57-45f6-bb5c-45505891cae9=477}, log id: 198f3eb4
2013-01-22 17:19:20,325 ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (ajp--0.0.0.0-8009-7) [25932203] The connection with details my-gluster-ip:/gvol02/GlusterDomain failed because of error code 477 and error message is: 477
2013-01-22 17:19:20,415 INFO [org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand] (ajp--0.0.0.0-8009-6) [6641b9e1] Running command: AddNFSStorageDomainCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2013-01-22 17:19:20,425 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--0.0.0.0-8009-6) [6641b9e1] START, CreateStorageDomainVDSCommand(vdsId = 626e37f4-5ee3-11e2-96fa-0030487c133e, storageDomain=org.ovirt.engine.core.common.businessentities.storage_domain_static@8e25c6bc, args=my-gluster-ip:/gvol02/GlusterDomain), log id: 675539c4
2013-01-22 17:19:21,064 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-6) [6641b9e1] Failed in CreateStorageDomainVDS method
2013-01-22 17:19:21,065 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-6) [6641b9e1] Error code StorageDomainFSNotMounted and error message VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Storage domain remote path not mounted: ('/rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain',)
2013-01-22 17:19:21,066 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-6) [6641b9e1] Command org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand return value
Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 360
mMessage Storage domain remote path not mounted: ('/rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain',)
[Users] KVM version not showing in Ovirt Manager
by Tom Brown
Hi
I have just added another HV to a cluster and it's up and running fine. I can run VMs on it and migrate VMs from other HVs onto it. I do note, however, that in the manager there is no KVM version listed as installed, whereas on other HVs in the cluster a version is present.
I see that the installed KVM package is slightly different on this new host, but as I said, apart from this visual issue everything appears to be running fine. These HVs are CentOS 6.3 using Dreyou's 3.1 packages.
Node where the KVM version is not showing in the manager:
node003 ~]# rpm -qa | grep kvm
qemu-kvm-rhev-0.12.1.2-2.295.el6.10.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.295.el6.10.x86_64
Node where the KVM version is showing in the manager:
node002 ~]# rpm -qa | grep kvm
qemu-kvm-tools-0.12.1.2-2.295.el6_3.8.x86_64
qemu-kvm-0.12.1.2-2.295.el6_3.8.x86_64
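In case it helps narrow it down, I assume the value the manager displays is
whatever vdsm reports in its capabilities; something like this (vdsClient
comes with the vdsm-cli package, which I'm assuming is installed on the
nodes) should show what each node reports:
# compare the qemu-kvm version vdsm reports to the engine on each node
vdsClient -s 0 getVdsCaps | grep -i -E 'qemu|kvm'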
thanks