[Users] oVirt/RHEV Android client (Opaque) available for beta testing
by i iordanov
Hello,
We invite any interested oVirt/RHEV developers and administrators to
beta-test Opaque, a new Android oVirt/RHEV client application.
To opt in, please reply to this message with an email address associated
with a Google Account, since membership in the beta-test group is managed
through a Google+ community. If you don't want that email address posted
to the mailing list, don't include it in your reply!
Itamar or I will add you to the community and let you know that you can
proceed to the following two steps:
1) Please visit this page here to accept the invitation:
https://plus.google.com/communities/116099119712127782216
2) Once you've become a member of the Google+ community, opt in by visiting:
https://play.google.com/apps/testing/com.undatech.opaquebeta
You will be able to download Opaque from Google Play by following the link
at the bottom of the opt-in page.
Please share your experiences with Opaque to the mailing list!
Cheers,
iordan
--
The conscious mind has only one thread of execution.
10 years, 8 months
[Users] Nodes lose storage at random
by Johan Kooijman
Hi All,
We're seeing some weird issues in our ovirt setup. We have 4 nodes
connected and an NFS (v3) filestore (FreeBSD/ZFS).
Once in a while, seemingly at random, a node loses its connection to
storage and recovers it a minute later. The other nodes usually don't lose
their storage at that moment; it's just one, or two at a time.
We've set up extra tooling to verify the storage performance at those
moments and its availability to other systems. The storage is always
online; the nodes just don't think so.
The engine tells me this:
2014-02-18 11:48:03,598 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(pool-6-thread-48) domain d88764c8-ecc3-4f22-967e-2ce225ac4498:Export in
problem. vds: hv5
2014-02-18 11:48:18,909 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(pool-6-thread-48) domain e9f70496-f181-4c9b-9ecb-d7f780772b04:Data in
problem. vds: hv5
2014-02-18 11:48:45,021 WARN [org.ovirt.engine.core.vdsbroker.VdsManager]
(DefaultQuartzScheduler_Worker-18) [46683672] Failed to refresh VDS , vds =
66e6aace-e51d-4006-bb2f-d85c2f1fd8d2 : hv5, VDS Network Error, continuing.
2014-02-18 11:48:45,070 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-41) [2ef1a894] Correlation ID: 2ef1a894,
Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data
Center GS. Setting Data Center status to Non Responsive (On host hv5,
Error: Network error during communication with the Host.).
The export and data domain live over NFS. There's another domain, ISO, that
lives on the engine machine, also shared over NFS. That domain doesn't have
any issue at all.
Attached are the logfiles for the relevant time period for both the engine
server and the node. The node, by the way, is a deployment of the node ISO,
not a full-blown installation.
Any clues on where to begin searching? The NFS server shows no issues and
nothing in its logs. I did notice that the statd and lockd daemons were
not running, but I wonder whether that could have anything to do with the issue.
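[Editorial note: for anyone wanting to reproduce this kind of monitoring outside oVirt, here is a minimal sketch of a latency probe against a mounted domain. The path and the warning threshold are placeholders, not values vdsm actually uses.]

```python
import os
import time

def probe_mount(path, warn_after=5.0):
    """Time a cheap metadata call on the mount; a stall here is
    roughly what vdsm's own domain monitor would also experience."""
    start = time.monotonic()
    os.statvfs(path)  # blocks while the NFS server is unresponsive
    elapsed = time.monotonic() - start
    return elapsed, elapsed > warn_after
```

Running this in a loop on each node (against the mounted NFS domain) and logging the elapsed times would show whether the stalls the engine reports are visible from the host side at all.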
--
Met vriendelijke groeten / With kind regards,
Johan Kooijman
mail(a)johankooijman.com
10 years, 9 months
[Users] Disk Migration Permissions
by Maurice James
What permissions do I need to allow a user to be able to live migrate a
disk from one storage domain to another? I have a group with the Power
User and Super User permissions on the cluster and they get the following
error when attempting to migrate a disk:
Error while executing action: User is not authorized to perform this action.
3.3.3-2.el6
10 years, 9 months
[Users] Foreman and oVirt 3.4 fails
by Andrew Lau
Hi,
Is anyone using Foreman with oVirt 3.4?
Post-upgrade, I appear to be having some issues, with Foreman reporting
"undefined method `text' for nil:NilClass".
I'm not sure whether this will turn out to be a Foreman issue or an oVirt
one, or it could just be my install gone bad. However, last time it was an
oVirt API update that caused issues with Foreman, so I thought I'd post
here first.
Thanks,
Andrew
10 years, 9 months
[Users] Confused about Gluster usage
by Martijn Grendelman
Hi,
I have been running oVirt 3.3.3 with a single node in an NFS-type data
center. Now I would like to set up a second node. Both nodes have
plenty of storage, but they're only connected to each other over 1 Gbit.
I'm running the nodes on CentOS 6.5.
What I would like to accomplish is:
* use a Gluster-backed DATA domain on my existing NFS datacenter
* load balancing by even spread of VMs over the two nodes
* leveraging the speed of local storage, so running a VM over NFS to the
other node is undesirable
So I was thinking I want the storage to be replicated, so that I can
take a node down for maintenance without having to migrate all the
storage to another node.
I was thinking: GlusterFS.
But I am confused on how to set it up. I understand I cannot use the
libgfapi native integration due to dependency problems on CentOS. I have
set up a replicated Gluster volume manually.
How can I use my two nodes with this Gluster volume? What are the
necessary steps?
I did try a couple of things; most notably I was able to create a 2nd
data center with POSIX storage, and mount the Gluster volume there, but
that doesn't work for the first node.
Alternatively, it would also be fine to migrate all existing VMs to the
POSIX datacenter and then move the existing node from the old NFS data
center to the new POSIX data center. Is that possible without
exporting/importing all the VMs?
Cheers,
Martijn.
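[Editorial note: not an authoritative answer, but one concrete detail that may help with the POSIX-domain route: the three fields in the New Domain dialog (Path, VFS Type, Mount Options) map to an ordinary mount invocation. A sketch of that mapping follows; the server/volume names and the mountpoint naming scheme are assumptions for illustration, not taken from vdsm's code.]

```python
def posix_domain_mount_cmd(spec, vfs_type, options=""):
    """Build the mount command implied by the POSIX-compliant FS
    domain fields: Path, VFS Type, and Mount Options."""
    # vdsm-style mountpoint name: slashes in the spec become underscores
    mountpoint = "/rhev/data-center/mnt/" + spec.replace("/", "_")
    cmd = ["mount", "-t", vfs_type]
    if options:
        cmd += ["-o", options]
    cmd += [spec, mountpoint]
    return cmd
```

For a replicated Gluster volume, something like `posix_domain_mount_cmd("node1:/datavol", "glusterfs", "backup-volfile-servers=node2")` illustrates the idea: the backup-volfile-servers option is what lets the mount survive one node being down.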
10 years, 9 months
[Users] Problem with DWH installation
by Michael Wagenknecht
Hi,
I cannot install the Ovirt DWH.
Here is the logfile:
2013-11-05 15:00:12::DEBUG::ovirt-engine-dwh-setup::250::root:: starting
main()
2013-11-05 15:00:12::DEBUG::common_utils::415::root:: found existing
pgpass file, fetching DB host value
2013-11-05 15:00:12::DEBUG::common_utils::415::root:: found existing
pgpass file, fetching DB port value
2013-11-05 15:00:12::DEBUG::common_utils::415::root:: found existing
pgpass file, fetching DB admin value
2013-11-05 15:00:12::DEBUG::common_utils::415::root:: found existing
pgpass file, fetching DB admin value
2013-11-05 15:00:12::DEBUG::common_utils::448::root:: getting DB
password for postgres
2013-11-05 15:00:12::DEBUG::common_utils::457::root:: found password for
username postgres
2013-11-05 15:00:12::DEBUG::common_utils::58::root:: getting vdc option
MinimalETLVersion
2013-11-05 15:00:12::DEBUG::common_utils::512::root:: Executing command
--> '['/usr/bin/engine-config', '-g', 'MinimalETLVersion',
'--cver=general', '-p',
'/usr/share/ovirt-engine/conf/engine-config-install.properties']'
2013-11-05 15:00:13::DEBUG::common_utils::551::root:: output =
2013-11-05 15:00:13::DEBUG::common_utils::552::root:: stderr = Files
/usr/share/ovirt-engine/conf/engine-config-install.properties does not exist
2013-11-05 15:00:13::DEBUG::common_utils::553::root:: retcode = 1
2013-11-05 15:00:13::ERROR::ovirt-engine-dwh-setup::294::root::
Exception caught!
2013-11-05 15:00:13::ERROR::ovirt-engine-dwh-setup::295::root::
Traceback (most recent call last):
File "/usr/bin/ovirt-engine-dwh-setup", line 255, in main
minimalVersion = utils.getVDCOption("MinimalETLVersion")
File "/usr/share/ovirt-engine-dwh/common_utils.py", line 60, in
getVDCOption
output, rc = execCmd(cmdList=cmd, failOnError=True, msg="Error:
failed fetching configuration field %s" % key)
File "/usr/share/ovirt-engine-dwh/common_utils.py", line 556, in execCmd
raise Exception(msg)
Exception: Error: failed fetching configuration field MinimalETLVersion
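[Editorial note: the traceback boils down to engine-config exiting non-zero because /usr/share/ovirt-engine/conf/engine-config-install.properties is missing. The failing helper behaves roughly like this simplified sketch; it is not the actual common_utils.py code, just the failure mode it exhibits.]

```python
import subprocess

def get_vdc_option(key, cmd):
    """Run a config lookup and fail loudly on a non-zero exit,
    mirroring execCmd(failOnError=True) in common_utils.py."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise Exception(
            "Error: failed fetching configuration field %s" % key)
    return proc.stdout, proc.returncode
```

With the properties file missing, engine-config returns retcode 1, and the setup aborts with exactly the exception seen in the log; the underlying question is why that file is absent (a version mismatch between the dwh and engine packages looks like a plausible suspect).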
These are the installed packages:
ovirt-engine-dwh-3.2.0-1.el6.noarch
ovirt-release-el6-8-1.noarch
ovirt-engine-sdk-python-3.3.0.6-1.el6.noarch
ovirt-host-deploy-java-1.1.1-1.el6.noarch
ovirt-engine-dbscripts-3.3.0.1-1.el6.noarch
ovirt-engine-reports-3.2.0-2.el6.noarch
ovirt-engine-lib-3.3.0.1-1.el6.noarch
ovirt-engine-setup-3.3.0.1-1.el6.noarch
ovirt-log-collector-3.3.1-1.el6.noarch
ovirt-image-uploader-3.3.1-1.el6.noarch
ovirt-host-deploy-1.1.1-1.el6.noarch
ovirt-engine-webadmin-portal-3.3.0.1-1.el6.noarch
ovirt-engine-restapi-3.3.0.1-1.el6.noarch
ovirt-engine-tools-3.3.0.1-1.el6.noarch
ovirt-engine-backend-3.3.0.1-1.el6.noarch
ovirt-engine-cli-3.3.0.4-1.el6.noarch
ovirt-iso-uploader-3.3.1-1.el6.noarch
ovirt-engine-userportal-3.3.0.1-1.el6.noarch
ovirt-engine-3.3.0.1-1.el6.noarch
Can you help me?
Best regards,
Michael
10 years, 9 months
[Users] SD Disk's Logical Volume not visible/activated on some nodes
by Boyan Tabakov
Hello,
I have ovirt 3.3 installed on FC 19 hosts with vdsm 4.13.3-2.fc19.
One of the hosts (host1) is engine + node + SPM, and the other, host2, is
just a node. I have an iSCSI storage domain configured and accessible
from both nodes.
When creating a new disk in the SD, the underlying logical volume gets
properly created (seen in vgdisplay output on host1), but doesn't seem
to be automatically picked up by host2. Consequently, when creating/booting
a VM with the said disk attached, the VM fails to start on host2,
because host2 can't see the LV. Similarly, if the VM is started on
host1, it fails to migrate to host2. An extract from the host2 log is at
the end. The LV in question is 6b35673e-7062-4716-a6c8-d5bf72fe3280.
From a quick look through the vdsm code, it only calls lvs, and never
lvscan or lvchange, so LVM on host2 doesn't fully refresh its metadata.
The only workaround so far has been to restart VDSM on host2, which
makes it refresh all LVM data properly.
When is host2 supposed to pick up any newly created LVs in the SD VG?
Any suggestions where the problem might be?
Thanks!
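[Editorial note: a sketch of the per-LV refresh step that appears to be missing, assuming `lvchange --refresh` is the right primitive here (an assumption worth verifying before trying it on production storage; restarting vdsm works because it rescans everything as a side effect).]

```python
def lv_refresh_cmd(vg_name, lv_name):
    """Build the LVM command that re-reads device-mapper state for a
    single LV on a host that didn't create it."""
    return ["lvchange", "--refresh", "%s/%s" % (vg_name, lv_name)]
```

For the LV in question that would be `lv_refresh_cmd("3307f6fa-dd58-43db-ab23-b1fb299006c7", "6b35673e-7062-4716-a6c8-d5bf72fe3280")`, run on host2; if that alone makes the volume visible, it narrows the problem to host2 never activating newly created LVs.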
Thread-998::DEBUG::2014-02-18
14:49:15,399::BindingXMLRPC::965::vds::(wrapper) client
[10.10.0.10]::call vmCreate with ({'acpiEnable': 'true',
'emulatedMachine': 'pc-1.0', 'tabletEnable': 'true', 'vmId':
'4669e4ad-7b76-4531-a16b-2b85345593a3', 'memGuaranteedSize': 1024,
'spiceSslCipherSuite': 'DEFAULT', 'timeOffset': '0', 'cpuType':
'Penryn', 'custom': {}, 'smp': '1', 'vmType': 'kvm', 'memSize': 1024,
'smpCoresPerSocket': '1', 'vmName': 'testvm', 'nice': '0',
'smartcardEnable': 'false', 'keyboardLayout': 'en-us', 'kvmEnable':
'true', 'pitReinjection': 'false', 'transparentHugePages': 'true',
'devices': [{'device': 'cirrus', 'specParams': {'vram': '32768',
'heads': '1'}, 'type': 'video', 'deviceId':
'9b769f44-db37-4e42-a343-408222e1422f'}, {'index': '2', 'iface': 'ide',
'specParams': {'path': ''}, 'readonly': 'true', 'deviceId':
'a61b291c-94de-4fd4-922e-1d51f4d2760d', 'path': '', 'device': 'cdrom',
'shared': 'false', 'type': 'disk'}, {'index': 0, 'iface': 'virtio',
'format': 'raw', 'bootOrder': '1', 'volumeID':
'6b35673e-7062-4716-a6c8-d5bf72fe3280', 'imageID':
'3738d400-a62e-4ded-b97f-1b4028e5f45b', 'specParams': {}, 'readonly':
'false', 'domainID': '3307f6fa-dd58-43db-ab23-b1fb299006c7', 'optional':
'false', 'deviceId': '3738d400-a62e-4ded-b97f-1b4028e5f45b', 'poolID':
'61f15cc0-8bba-482d-8a81-cd636a581b58', 'device': 'disk', 'shared':
'false', 'propagateErrors': 'off', 'type': 'disk'}, {'device':
'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon',
'deviceId': '4e05c1d1-2ac3-4885-8c1d-92ccd1388f0d'}, {'device': 'scsi',
'specParams': {}, 'model': 'virtio-scsi', 'type': 'controller',
'deviceId': 'fa3c223a-0cfc-4adf-90bc-cb6073a4b212'}],
'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard',
'display': 'vnc'},) {} flowID [20468793]
Thread-998::INFO::2014-02-18
14:49:15,406::API::642::vds::(_getNetworkIp) network None: using 0
Thread-998::INFO::2014-02-18
14:49:15,406::clientIF::394::vds::(createVm) vmContainerLock acquired by
vm 4669e4ad-7b76-4531-a16b-2b85345593a3
Thread-999::DEBUG::2014-02-18
14:49:15,409::vm::2091::vm.Vm::(_startUnderlyingVm)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::Start
Thread-998::DEBUG::2014-02-18
14:49:15,409::clientIF::407::vds::(createVm) Total desktops after
creation of 4669e4ad-7b76-4531-a16b-2b85345593a3 is 1
Thread-999::DEBUG::2014-02-18
14:49:15,409::vm::2095::vm.Vm::(_startUnderlyingVm)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::_ongoingCreations acquired
Thread-998::DEBUG::2014-02-18
14:49:15,410::BindingXMLRPC::972::vds::(wrapper) return vmCreate with
{'status': {'message': 'Done', 'code': 0}, 'vmList': {'status':
'WaitForLaunch', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0',
'tabletEnable': 'true', 'pid': '0', 'memGuaranteedSize': 1024,
'timeOffset': '0', 'keyboardLayout': 'en-us', 'displayPort': '-1',
'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType':
'Penryn', 'smp': '1', 'clientIp': '', 'nicModel': 'rtl8139,pv',
'smartcardEnable': 'false', 'kvmEnable': 'true', 'pitReinjection':
'false', 'vmId': '4669e4ad-7b76-4531-a16b-2b85345593a3',
'transparentHugePages': 'true', 'devices': [{'device': 'cirrus',
'specParams': {'vram': '32768', 'heads': '1'}, 'type': 'video',
'deviceId': '9b769f44-db37-4e42-a343-408222e1422f'}, {'index': '2',
'iface': 'ide', 'specParams': {'path': ''}, 'readonly': 'true',
'deviceId': 'a61b291c-94de-4fd4-922e-1d51f4d2760d', 'path': '',
'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'index': 0,
'iface': 'virtio', 'format': 'raw', 'bootOrder': '1', 'volumeID':
'6b35673e-7062-4716-a6c8-d5bf72fe3280', 'imageID':
'3738d400-a62e-4ded-b97f-1b4028e5f45b', 'specParams': {}, 'readonly':
'false', 'domainID': '3307f6fa-dd58-43db-ab23-b1fb299006c7', 'optional':
'false', 'deviceId': '3738d400-a62e-4ded-b97f-1b4028e5f45b', 'poolID':
'61f15cc0-8bba-482d-8a81-cd636a581b58', 'device': 'disk', 'shared':
'false', 'propagateErrors': 'off', 'type': 'disk'}, {'device':
'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon',
'deviceId': '4e05c1d1-2ac3-4885-8c1d-92ccd1388f0d'}, {'device': 'scsi',
'specParams': {}, 'model': 'virtio-scsi', 'type': 'controller',
'deviceId': 'fa3c223a-0cfc-4adf-90bc-cb6073a4b212'}], 'custom': {},
'vmType': 'kvm', 'memSize': 1024, 'displayIp': '0',
'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard',
'smpCoresPerSocket': '1', 'vmName': 'testvm', 'display': 'vnc', 'nice':
'0'}}
Thread-999::INFO::2014-02-18 14:49:15,410::vm::2926::vm.Vm::(_run)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::VM wrapper has started
Thread-999::DEBUG::2014-02-18
14:49:15,412::task::579::TaskManager.Task::(_updateState)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::moving from state init ->
state preparing
Thread-999::INFO::2014-02-18
14:49:15,413::logUtils::44::dispatcher::(wrapper) Run and protect:
getVolumeSize(sdUUID='3307f6fa-dd58-43db-ab23-b1fb299006c7',
spUUID='61f15cc0-8bba-482d-8a81-cd636a581b58',
imgUUID='3738d400-a62e-4ded-b97f-1b4028e5f45b',
volUUID='6b35673e-7062-4716-a6c8-d5bf72fe3280', options=None)
Thread-999::DEBUG::2014-02-18
14:49:15,413::lvm::440::OperationMutex::(_reloadlvs) Operation 'lvm
reload operation' got the operation mutex
Thread-999::DEBUG::2014-02-18
14:49:15,413::lvm::309::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
/sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
\'a|/dev/mapper/36090a098103b1821532d057a5a0120d4|/dev/mapper/36090a09810bb99fefb2da57b94332027|\',
\'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
--noheadings --units b --nosuffix --separator | -o
uuid,name,vg_name,attr,size,seg_start_pe,devices,tags
3307f6fa-dd58-43db-ab23-b1fb299006c7' (cwd None)
Thread-999::DEBUG::2014-02-18
14:49:15,466::lvm::309::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '';
<rc> = 0
Thread-999::DEBUG::2014-02-18
14:49:15,481::lvm::475::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-999::DEBUG::2014-02-18
14:49:15,481::lvm::475::OperationMutex::(_reloadlvs) Operation 'lvm
reload operation' released the operation mutex
Thread-999::WARNING::2014-02-18
14:49:15,482::lvm::621::Storage.LVM::(getLv) lv:
6b35673e-7062-4716-a6c8-d5bf72fe3280 not found in lvs vg:
3307f6fa-dd58-43db-ab23-b1fb299006c7 response
Thread-999::ERROR::2014-02-18
14:49:15,482::task::850::TaskManager.Task::(_setError)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 857, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 3059, in getVolumeSize
apparentsize = str(volClass.getVSize(dom, imgUUID, volUUID, bs=1))
File "/usr/share/vdsm/storage/blockVolume.py", line 111, in getVSize
size = int(int(lvm.getLV(sdobj.sdUUID, volUUID).size) / bs)
File "/usr/share/vdsm/storage/lvm.py", line 914, in getLV
raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
LogicalVolumeDoesNotExistError: Logical volume does not exist:
('3307f6fa-dd58-43db-ab23-b1fb299006c7/6b35673e-7062-4716-a6c8-d5bf72fe3280',)
Thread-999::DEBUG::2014-02-18
14:49:15,485::task::869::TaskManager.Task::(_run)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::Task._run:
07adf25e-b9fe-44d0-adf4-9159ac0f1f4d
('3307f6fa-dd58-43db-ab23-b1fb299006c7',
'61f15cc0-8bba-482d-8a81-cd636a581b58',
'3738d400-a62e-4ded-b97f-1b4028e5f45b',
'6b35673e-7062-4716-a6c8-d5bf72fe3280') {} failed - stopping task
Thread-999::DEBUG::2014-02-18
14:49:15,485::task::1194::TaskManager.Task::(stop)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::stopping in state preparing
(force False)
Thread-999::DEBUG::2014-02-18
14:49:15,486::task::974::TaskManager.Task::(_decref)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::ref 1 aborting True
Thread-999::INFO::2014-02-18
14:49:15,486::task::1151::TaskManager.Task::(prepare)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::aborting: Task is aborted:
'Logical volume does not exist' - code 610
Thread-999::DEBUG::2014-02-18
14:49:15,486::task::1156::TaskManager.Task::(prepare)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::Prepare: aborted: Logical
volume does not exist
Thread-999::DEBUG::2014-02-18
14:49:15,486::task::974::TaskManager.Task::(_decref)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::ref 0 aborting True
Thread-999::DEBUG::2014-02-18
14:49:15,486::task::909::TaskManager.Task::(_doAbort)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::Task._doAbort: force False
Thread-999::DEBUG::2014-02-18
14:49:15,487::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-999::DEBUG::2014-02-18
14:49:15,487::task::579::TaskManager.Task::(_updateState)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::moving from state preparing
-> state aborting
Thread-999::DEBUG::2014-02-18
14:49:15,487::task::534::TaskManager.Task::(__state_aborting)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::_aborting: recover policy none
Thread-999::DEBUG::2014-02-18
14:49:15,487::task::579::TaskManager.Task::(_updateState)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::moving from state aborting
-> state failed
Thread-999::DEBUG::2014-02-18
14:49:15,487::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-999::DEBUG::2014-02-18
14:49:15,488::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-999::ERROR::2014-02-18
14:49:15,488::dispatcher::67::Storage.Dispatcher.Protect::(run)
{'status': {'message': "Logical volume does not exist:
('3307f6fa-dd58-43db-ab23-b1fb299006c7/6b35673e-7062-4716-a6c8-d5bf72fe3280',)",
'code': 610}}
Thread-999::ERROR::2014-02-18
14:49:15,488::vm::1826::vm.Vm::(_normalizeVdsmImg)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::Unable to get volume size
for 6b35673e-7062-4716-a6c8-d5bf72fe3280
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 1822, in _normalizeVdsmImg
drv['truesize'] = res['truesize']
KeyError: 'truesize'
Thread-999::DEBUG::2014-02-18
14:49:15,489::vm::2112::vm.Vm::(_startUnderlyingVm)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::_ongoingCreations released
Thread-999::ERROR::2014-02-18
14:49:15,489::vm::2138::vm.Vm::(_startUnderlyingVm)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::The vm start process failed
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 2098, in _startUnderlyingVm
self._run()
File "/usr/share/vdsm/vm.py", line 2930, in _run
devices = self.buildConfDevices()
File "/usr/share/vdsm/vm.py", line 1935, in buildConfDevices
self._normalizeVdsmImg(drv)
File "/usr/share/vdsm/vm.py", line 1828, in _normalizeVdsmImg
drv['volumeID'])
RuntimeError: Volume 6b35673e-7062-4716-a6c8-d5bf72fe3280 is corrupted
or missing
Best regards,
Boyan Tabakov
10 years, 9 months
Re: [Users] oVirt 3.5 planning
by Juan Pablo Lorier
Hi,
I'm kind of out of date at this time, but I'd like to propose something
that was meant for 3.4, and I don't know if it made it in: let any NFS
share be used as either an ISO or an export domain, so you can just copy
into the share and then update the DB in some way.
Also, make the export domain shareable among data centers, as the ISO
domain is; that's an RFE from a long time ago and a useful one.
Attaching and detaching domains is both time-consuming and boring.
Also, support tagged and untagged networks on top of the same NIC.
Everybody does that except oVirt.
I'd also like to say that though I have huge enthusiasm for oVirt's fast
evolution, I think you may need to slow down on adding new features until
most of the RFEs that are over a year old are done, because otherwise it's
kind of disappointing to open an RFE just to see it sleeping for so long.
Don't take this wrong: I've been listened to and helped by the team every
time I needed it, and I'm thankful for that.
Regards,
10 years, 9 months
[Users] [ANN] oVirt 3.4.0 Release Candidate is now available
by Sandro Bonazzola
The oVirt team is pleased to announce that the 3.4.0 Release Candidate is now available for testing.
Release notes and information on the changes in this update are still being worked on and will be available soon on the wiki [1].
Please be sure to follow the install instructions in the release notes if you're going to test it.
The existing ovirt-3.4.0-prerelease repository has been updated to deliver this release candidate and future refreshes until the final release.
An oVirt Node ISO is already available, unchanged from the third beta.
You're welcome to join us in testing this release candidate on next week's test day [2], scheduled for 2014-03-06!
[1] http://www.ovirt.org/OVirt_3.4.0_release_notes
[2] http://www.ovirt.org/OVirt_3.4_Test_Day
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
10 years, 9 months
[Users] Host requirements for 3.4 compatibility
by Darren Evenson
I have updated my engine to 3.4 rc.
I created a new cluster with 3.4 compatibility version, and then I moved a
host I had in maintenance mode to the new cluster.
When I activate it, I get the error "Host kvmhost2 is compatible with
versions (3.0,3.1,3.2,3.3) and cannot join Cluster Cluster_new which is
set to version 3.4."
My host was Fedora 20 with the latest updates:
Kernel Version: 3.13.4 - 200.fc20.x86_64
KVM Version: 1.6.1 - 3.fc20
LIBVIRT Version: libvirt-1.1.3.3-5.fc20
VDSM Version: vdsm-4.13.3-3.fc20
So I enabled fedora-virt-preview and updated, but I still get the same
error, even now with libvirt 1.2.1:
Kernel Version: 3.13.4 - 200.fc20.x86_64
KVM Version: 1.7.0 - 5.fc20
LIBVIRT Version: libvirt-1.2.1-3.fc20
VDSM Version: vdsm-4.13.3-3.fc20
What am I missing?
- Darren
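[Editorial note: a hint at what the error actually checks: the engine compares the cluster's compatibility version against the list of levels the host's vdsm advertises in its capabilities report, not the kernel/libvirt/qemu versions, which is why updating those packages changes nothing. A toy version of the check follows; the function and variable names are illustrative, not the engine's actual code.]

```python
def can_join_cluster(cluster_level, host_levels):
    """The engine admits a host only if the cluster's compatibility
    version appears among the host's supported cluster levels."""
    return cluster_level in host_levels

# A vdsm 4.13 host advertises levels up to 3.3, matching the error text:
host_levels = ["3.0", "3.1", "3.2", "3.3"]
```

Under that assumption, the missing piece is a vdsm build that advertises level 3.4, not newer virtualization packages.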
10 years, 9 months