[Users] [QE] 3.3.2 Release tracker
by Sandro Bonazzola
Hi,
as you may know, we're planning to build oVirt 3.3.2 beta really soon and release 3.3.2 by early December.
I have created a tracker bug (https://bugzilla.redhat.com/show_bug.cgi?id=1027349) for this release.
The beta build will be composed in two weeks.
The following is a list of the bugs with target 3.3.2 or 3.3:
Whiteboard Bug ID Summary
infra 987982 When adding a host through the REST API, the error message says that "rootPassword" is required...
infra 1017267 Plaintext user passwords in async_tasks database
infra 992883 VdsManager.OnTimer loads VDS instead of VdsDynamic
infra 1020344 Power Managent with cisco_ucs problem
infra 1009899 exportDbSchema scripts generates output file with wrong name
integration 1022440 AIO - configure the AIO host to be a gluster cluster/host
integration 1026926 engine-cleanup (and possible engine-setup) does not affect runtime value of shmmax
integration 902979 ovirt-live - firefox doesn't trust the installed engine
integration 1021805 oVirt Live - use motd to show the admin password
network 1019818 Support OpenStack Havana layer 2 agent integration
network 988002 [oVirt] [network] Add button shouldn't appear on specific network
network 987916 [oVirt] [provider] Dialog doesn't update unless focus lost
network 987999 [oVirt] [provider] Add button shouldn't appear on specific provider
storage 915753 Deadlock detected during creation vms in pool
storage 987917 [oVirt] [glance] API version not specified in provider dialog
storage 1024449 Check the performance with lvm-2.02.100-7
virt 1007940 Cannot clone from snapshot while using GlusterFS as POSIX Storage Domain
? 991267 [RFE] Add TUI information to log file.
Please set the target to 3.3.2 and add the bug to the tracker if you think that 3.3.2 should not be released without it being fixed.
Please also update the target to 3.3.3 or a later release for any bugs that won't make it into 3.3.2: that will make it easier to gather the blocking bugs for the next releases.
For those who want to help test the bugs, I suggest adding yourself as QA contact on the bug.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[Users] Live storage migration fails on CentOS 6.4 + ovirt3.3 cluster
by Sander Grendelman
Moving the storage of a (running) VM to a different (FC) storage domain fails.
Steps to reproduce:
1) Create new VM
2) Start VM
3) Start a move of the VM's disk to a different storage domain
When I look at the logs, it seems that vdsm/libvirt tries to use an
option that is not supported by the libvirt or qemu-kvm version on
CentOS 6.4:
"libvirtError: unsupported configuration: reuse is not supported with
this QEMU binary"
Information in the "Events" section of the oVirt engine manager:
2013-Nov-04, 14:45 VM migratest powered off by grendelmans (Host: gnkvm01).
2013-Nov-04, 14:05 User grendelmans moving disk migratest_Disk1 to
domain gneva03_vmdisk02.
2013-Nov-04, 14:04 Snapshot 'Auto-generated for Live Storage
Migration' creation for VM 'migratest' has been completed.
2013-Nov-04, 14:04 Failed to create live snapshot 'Auto-generated for
Live Storage Migration' for VM 'migratest'. VM restart is recommended.
2013-Nov-04, 14:04 Snapshot 'Auto-generated for Live Storage
Migration' creation for VM 'migratest' was initiated by grendelmans.
2013-Nov-04, 14:04 VM migratest started on Host gnkvm01
2013-Nov-04, 14:03 VM migratest was started by grendelmans (Host: gnkvm01).
Information from the vdsm log:
Thread-100903::DEBUG::2013-11-04
14:04:56,548::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> =
''; <rc> = 0
Thread-100903::DEBUG::2013-11-04
14:04:56,615::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm
reload operation' released the operation mutex
Thread-100903::DEBUG::2013-11-04
14:04:56,622::blockVolume::588::Storage.Misc.excCmd::(getMetadata)
'/bin/dd iflag=direct skip=38 bs=512
if=/dev/dfbbc8dd-bfae-44e1-8876-2bb82921565a/metadata count=1' (cwd
None)
Thread-100903::DEBUG::2013-11-04
14:04:56,642::blockVolume::588::Storage.Misc.excCmd::(getMetadata)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B)
copied, 0.000208694 s, 2.5 MB/s\n'; <rc> = 0
Thread-100903::DEBUG::2013-11-04
14:04:56,643::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0
records in', '1+0 records out', '512 bytes (512 B) copied, 0.000208694
s, 2.5 MB/s'], size: 512
Thread-100903::INFO::2013-11-04
14:04:56,644::logUtils::47::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'volType': 'path'}, 'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'chain': [{'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo':
{'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'volType': 'path'}, 'volumeID':
'7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'imageID':
'57ff3040-0cbd-4659-bd21-f07036d84dd8'}, {'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo':
{'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'volType': 'path'}, 'volumeID':
'4d05730d-433c-40d9-8600-6fb0eb5af821', 'imageID':
'57ff3040-0cbd-4659-bd21-f07036d84dd8'}]}
Thread-100903::DEBUG::2013-11-04
14:04:56,644::task::1168::TaskManager.Task::(prepare)
Task=`0f953aa3-e2b9-4008-84ad-f271136d8d23`::finished: {'info':
{'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'volType': 'path'}, 'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'chain': [{'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo':
{'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'volType': 'path'}, 'volumeID':
'7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'imageID':
'57ff3040-0cbd-4659-bd21-f07036d84dd8'}, {'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo':
{'path': '/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'volType': 'path'}, 'volumeID':
'4d05730d-433c-40d9-8600-6fb0eb5af821', 'imageID':
'57ff3040-0cbd-4659-bd21-f07036d84dd8'}]}
Thread-100903::DEBUG::2013-11-04
14:04:56,644::task::579::TaskManager.Task::(_updateState)
Task=`0f953aa3-e2b9-4008-84ad-f271136d8d23`::moving from state
preparing -> state finished
Thread-100903::DEBUG::2013-11-04
14:04:56,645::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a': < ResourceRef
'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a', isValid: 'True' obj:
'None'>}
Thread-100903::DEBUG::2013-11-04
14:04:56,645::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-100903::DEBUG::2013-11-04
14:04:56,646::resourceManager::615::ResourceManager::(releaseResource)
Trying to release resource
'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a'
Thread-100903::DEBUG::2013-11-04
14:04:56,646::resourceManager::634::ResourceManager::(releaseResource)
Released resource 'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a' (0
active users)
Thread-100903::DEBUG::2013-11-04
14:04:56,647::resourceManager::640::ResourceManager::(releaseResource)
Resource 'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a' is free,
finding out if anyone is waiting for it.
Thread-100903::DEBUG::2013-11-04
14:04:56,647::resourceManager::648::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.dfbbc8dd-bfae-44e1-8876-2bb82921565a', Clearing records.
Thread-100903::DEBUG::2013-11-04
14:04:56,648::task::974::TaskManager.Task::(_decref)
Task=`0f953aa3-e2b9-4008-84ad-f271136d8d23`::ref 0 aborting False
Thread-100903::INFO::2013-11-04
14:04:56,648::clientIF::325::vds::(prepareVolumePath) prepared volume
path: /rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821
Thread-100903::DEBUG::2013-11-04
14:04:56,649::vm::3619::vm.Vm::(snapshot)
vmId=`2147dd59-6794-4be6-98b9-948636a31159`::<domainsnapshot>
<disks>
<disk name="vda" snapshot="external">
<source
file="/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821"/>
</disk>
</disks>
</domainsnapshot>
Thread-100903::DEBUG::2013-11-04
14:04:56,659::libvirtconnection::101::libvirtconnection::(wrapper)
Unknown libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported
configuration: reuse is not supported with this QEMU binary
Thread-100903::DEBUG::2013-11-04
14:04:56,659::vm::3640::vm.Vm::(snapshot)
vmId=`2147dd59-6794-4be6-98b9-948636a31159`::Snapshot failed using the
quiesce flag, trying again without it (unsupported configuration:
reuse is not supported with this QEMU binary)
Thread-100903::DEBUG::2013-11-04
14:04:56,668::libvirtconnection::101::libvirtconnection::(wrapper)
Unknown libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported
configuration: reuse is not supported with this QEMU binary
Thread-100903::ERROR::2013-11-04
14:04:56,668::vm::3644::vm.Vm::(snapshot)
vmId=`2147dd59-6794-4be6-98b9-948636a31159`::Unable to take snapshot
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 3642, in snapshot
self._dom.snapshotCreateXML(snapxml, snapFlags)
File "/usr/share/vdsm/vm.py", line 826, in f
ret = attr(*args, **kwargs)
File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py",
line 76, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1623, in
snapshotCreateXML
if ret is None:raise libvirtError('virDomainSnapshotCreateXML()
failed', dom=self)
libvirtError: unsupported configuration: reuse is not supported with
this QEMU binary
Thread-100903::DEBUG::2013-11-04
14:04:56,670::BindingXMLRPC::986::vds::(wrapper) return vmSnapshot
with {'status': {'message': 'Snapshot failed', 'code': 48}}
Version information:
oVirt Engine server (CentOS 6.4 + updates, ovirt 3.3 stable):
[root@gnovirt01 ~]# rpm -qa '*ovirt*' '*vdsm*' '*libvirt*' '*kvm*'
ovirt-host-deploy-1.1.1-1.el6.noarch
ovirt-engine-lib-3.3.0.1-1.el6.noarch
ovirt-engine-webadmin-portal-3.3.0.1-1.el6.noarch
ovirt-engine-dbscripts-3.3.0.1-1.el6.noarch
libvirt-client-0.10.2-18.el6_4.14.x86_64
qemu-kvm-0.12.1.2-2.355.0.1.el6_4.9.x86_64
ovirt-engine-sdk-python-3.3.0.6-1.el6.noarch
ovirt-release-el6-8-1.noarch
ovirt-host-deploy-java-1.1.1-1.el6.noarch
ovirt-engine-websocket-proxy-3.3.0.1-1.el6.noarch
ovirt-image-uploader-3.3.1-1.el6.noarch
ovirt-log-collector-3.3.1-1.el6.noarch
ovirt-engine-userportal-3.3.0.1-1.el6.noarch
ovirt-engine-restapi-3.3.0.1-1.el6.noarch
ovirt-engine-backend-3.3.0.1-1.el6.noarch
ovirt-engine-setup-3.3.0.1-1.el6.noarch
libvirt-0.10.2-18.el6_4.14.x86_64
ovirt-engine-cli-3.3.0.4-1.el6.noarch
ovirt-node-iso-3.0.1-1.0.2.vdsm.el6.noarch
ovirt-iso-uploader-3.3.1-1.el6.noarch
ovirt-engine-tools-3.3.0.1-1.el6.noarch
ovirt-engine-3.3.0.1-1.el6.noarch
oVirt hypervisor servers (CentOS 6.4 + updates, ovirt 3.3 stable):
[root@gnkvm01 vdsm]# rpm -qa '*ovirt*' '*vdsm*' '*libvirt*' '*kvm*'
libvirt-client-0.10.2-18.el6_4.14.x86_64
libvirt-lock-sanlock-0.10.2-18.el6_4.14.x86_64
qemu-kvm-tools-0.12.1.2-2.355.0.1.el6_4.9.x86_64
libvirt-python-0.10.2-18.el6_4.14.x86_64
libvirt-0.10.2-18.el6_4.14.x86_64
vdsm-python-4.12.1-4.el6.x86_64
vdsm-xmlrpc-4.12.1-4.el6.noarch
vdsm-4.12.1-4.el6.x86_64
qemu-kvm-0.12.1.2-2.355.0.1.el6_4.9.x86_64
vdsm-python-cpopen-4.12.1-4.el6.x86_64
vdsm-cli-4.12.1-4.el6.noarch
[root@gnkvm01 vdsm]#
[Users] VM Disk corruption
by Usman Aslam
I have a few VMs whose root LVM file systems are corrupted because of an
improper shutdown. I experienced the issue described here, which caused
a node to restart:
http://lists.ovirt.org/pipermail/users/2013-July/015247.html
But now I'm experiencing the same problem with other VMs. A DB VM was in
single-user mode when it was powered off. I created a snapshot to clone it,
powered the CentOS 6 VM back on, and it could not execute the mount command
on boot and dropped me to a root maintenance prompt.
Running fsck comes back with way too many errors, many of them about files
that shouldn't even have been open: countless inode issues and
multiply-claimed blocks being cloned. Running fsck -y seems to fix the file
system and it comes back as clean, but upon restart the VM mounts the file
system as read-only, and when I try to remount it as rw the mount command
throws a segmentation fault.
I don't know if it's the new version of the oVirt engine. I'm afraid to shut
down any VM using the oVirt UI as they don't always come back up. I have
tried repairing the file system with the live CD and such, to no avail.
Given that a lot of the corrupt files are plain static HTML or archived tar
files, I assume it has something to do with oVirt; the only data that should
be at risk of corruption is live application data (open files).
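For completeness, the repair attempts from the live CD were roughly along these lines (the VG/LV names are placeholders, I'd have to double-check the exact ones used by these CentOS 6 installs):

# vgchange -ay
# e2fsck -f -y /dev/VolGroup/lv_root
# mount /dev/VolGroup/lv_root /mnt

and even after e2fsck reports the file system clean, the read-only mount comes back on the next boot.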
Please advise on how to proceed. I can provide whatever logs you may
require.
Thanks,
--
Usman Aslam
401.6.99.66.55
usman(a)linkshift.com
[Users] Problem in 3.3 with shmmax configuration
by Bob Doolittle
Hi,
I'm setting up Engine for the 2nd time - the first time I answered a
configuration question wrong. So I did:
engine-setup
engine-cleanup
engine-setup
Things worked until I rebooted the system. I found that postgresql
would not start up and was failing with "could not create shared memory
segment: Invalid Argument".
I resolved this issue by creating a file /etc/sysctl.d/10-shmmax.conf,
containing the line:
kernel.shmmax = 1000000000
(I read somewhere that postgresql recommends setting shmmax to 1/4 of
physical memory, and I have 4GB)
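For reference, a quarter of 4 GB is 4294967296 / 4 = 1073741824 bytes, so 1000000000 is just that, rounded down a little. I believe the current and new values can also be checked and applied without a reboot, e.g.:

# cat /proc/sys/kernel/shmmax
# sysctl -w kernel.shmmax=1000000000

but the sysctl.d file is what makes it survive reboots.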
1. Is this a known bug? If not, should I file one? If so, how do I do
that? :)
2. Is there a better fix than the one I settled on? Does the normal
configuration wind up increasing shmmax, or reducing postgresql's
limits? What are the default values for this in a normal engine
configuration?
Thanks,
Bob
[Users] ISO_DOMAIN won't attach
by Bob Doolittle
I have a fresh, default oVirt 3.3 setup, with F19 on the engine and the
node (after applying the shmmax workaround discussed in a separate
thread). I followed the Quick Start Guide.
I've added the node, and a storage domain on the node, so the Datacenter
is now initialized.
However, my ISO_DOMAIN remains unattached. If I select it, select its
Data Center tab, and click on Attach, it times out.
The following messages are seen in engine.log when the operation is
attempted:
http://pastebin.com/WguQJFRu
I can loopback mount the ISO_DOMAIN directory manually, so I'm not sure
why anything is timing out.
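For reference, the manual test was just mounting the NFS export locally on the engine (the export path is whatever engine-setup picked as the default, so treat it as approximate):

# mount -t nfs localhost:/var/lib/exports/iso /mnt
# ls /mnt && umount /mnt

though I realize it's the node/SPM, not the engine, that actually has to mount the export when the domain is attached.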
-Bob
P.S. I also note that I have to do a "Shift-Refresh" on the Storage page
before ISO_DOMAIN shows up. This is consistently reproducible. Firefox 25.
[Users] [3.2-el6] corrupted disk after snapshot removal
by Yuriy Demchenko
Hi,
I've run into a disturbing situation recently with my production
ovirt-3.2-el6.
I planned an upgrade of an application on one of my VMs, so I took a live
snapshot and did the application upgrade. The upgrade went fine, so I no
longer needed the snapshot - I shut down the VM and deleted the snapshot
via the admin interface.
When the task completed (it took about 40 minutes) I tried to start the VM,
but its OS didn't boot - "no boot disk found". The Disks tab in the admin
interface shows 'actual size' = 1 GB, and fdisk from a live CD shows no
partitions at all.
It seems the snapshot removal somehow corrupted the VM's disk.
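If it helps, I can also collect info about the underlying LV from the SPM host, e.g. something along these lines (the domain and volume UUIDs are placeholders here, the real ones are in the attached logs):

# lvs -o lv_name,lv_size,lv_tags <storage_domain_uuid>
# qemu-img info /dev/<storage_domain_uuid>/<volume_uuid>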
What went wrong, and how can I avoid this in the future? Logs are attached.
ovirt-engine.noarch 3.2.2-1.1.el6
vdsm.x86_64 4.10.3-16.el6
Storage domain type is FC
--
Yuriy Demchenko
[Users] Host local_host running without virtualization hardware acceleration
by Sandro Bonazzola
Hi,
I had to reinstall ovirt yesterday and now it seems that it doesn't work anymore.
I'm running nightly on Fedora 18.
kernel-3.11.4-101.fc18.x86_64
sanlock-2.8-1.fc18.x86_64
libvirt-1.1.4-1.fc18.x86_64
qemu-1.5.1-1.fc18.x86_64
vdsm-4.13.0-93.gitea8c8f0.fc18.x86_64
ovirt-engine-3.4.0-0.2.master.20131104192919.git3b65870.fc18.noarch
engine-setup with all-in-one detects hardware virtualization and allows me to configure the system.
(It fails to detect the engine health status, probably due to recent changes in its URL; I'm already looking into it.)
Once I added localhost to the engine, it was moved to non-operational mode, saying that
I no longer have virtualization hardware acceleration.
I've found that:
# modinfo kvm
filename: /lib/modules/3.11.4-101.fc18.x86_64/kernel/arch/x86/kvm/kvm.ko
license: GPL
author: Qumranet
depends:
intree: Y
vermagic: 3.11.4-101.fc18.x86_64 SMP mod_unload
parm: min_timer_period_us:uint
parm: ignore_msrs:bool
parm: tsc_tolerance_ppm:uint
parm: allow_unsafe_assigned_interrupts:Enable device assignment on platforms without interrupt remapping support. (bool)
# /usr/bin/qemu-kvm
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
looking at strace:
open("/dev/kvm", O_RDWR|O_CLOEXEC) = -1 ENOENT (No such file or directory)
Any clue on what may have happened?
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com