[Users] [QE] oVirt 3.3.4 beta status
by Sandro Bonazzola
Hi,
we're going to branch and build the oVirt 3.3.4 beta on Tue 2014-02-18 at 09:00 UTC.
Repository composition will follow as soon as all packages are built.
A bug tracker is available at [1]; it currently shows no bugs blocking the release.
The following non-blocking bugs are still open with target 3.3.4:
Whiteboard Bug ID Summary
infra 1053576 [abrt] vdsm-python: libvirt.py:102:openAuth:libvirtError: Failed to connect socket to '/var/run/libvirt/libvirt-sock'...
integration 1026930 Package virtio-win and put it in ovirt repositories
integration 1026933 pre-populate ISO domain with virtio-win ISO
network 997197 Some AppErrors messages are grammatically incorrect (singular vs plural)
network 1064489 do not report negative rx/tx rate when Linux counters wrap
node 923049 ovirt-node fails to boot from local disk under UEFI mode
node 965583 [RFE] add shortcut key on TUI
node 976675 [wiki] Update contribution page
node 979350 Changes admin password in the first time when log in is failed while finished auto-install
node 979390 [RFE] Split defaults.py into smaller pieces
node 982232 performance page takes >1sec to load (on first load)
node 984441 kdump page crashed before configuring the network after ovirt-node installed
node 986285 UI crashes when no bond name is given
node 991267 [RFE] Add TUI information to log file.
node 1018374 ovirt-node-iso-3.0.1-1.0.2.vdsm.fc19: Failed on Auto-install
node 1018710 [RFE] Enhance API documentation
node 1032035 [RFE] re-write auto install function for the cim plugin
node 1033286 ovirt-node-plugin-vdsm can not be added to ovirt node el6 base image
storage 987917 [oVirt] [glance] API version not specified in provider dialog
virt 1007940 Cannot clone from snapshot while using GlusterFS as POSIX Storage Domain
Maintainers / Assignee:
Please add bugs to the tracker if you think 3.3.4 should not be released without them being fixed.
Please re-target any bugs you think should not block 3.3.4.
If you would like to help test these bugs, please add yourself to the testing page [2].
Maintainers are welcome to start filling in the release notes; the page has been created here [3].
[1] http://bugzilla.redhat.com/1064462
[2] http://www.ovirt.org/Testing/Ovirt_3.3.4_testing
[3] http://www.ovirt.org/OVirt_3.3.4_release_notes
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[Users] virt-v2v fail - GuestfsHandle.pm error + Redhat.pm error
by Nicolas Ecarnot
Hi,
After having successfully migrated Debian, XP, 2003 and 2008 VMs, I'm stuck
with a migration I was expecting to be easy: RHAS3.
Here is the error log I get:
> # virt-v2v -i libvirt -ic qemu+ssh://xxxx@xxxx/system -o rhev -os xxxx:/data/vmexport -of qcow2 -oa sparse -n ovirtmgmt serv-rhas3-vm1
> serv-rhas3-vm1_copy.raw: 100% [=====================================================================================================]D 0h10m14s
> virt-v2v: No capability in the configuration matching os='linux' name='virtio' distro='rhel' major='3' minor='0'
> virt-v2v: No capability in the configuration matching os='linux' name='cirrus' distro='rhel' major='3' minor='0'
> virt-v2v: WARNING: The display driver was changed to cirrus, but the cirrus driver could not be installed. X may not work correctly
> virt-v2v: WARNING: /boot/grub/device.map references an unknown device /dev/fd0. This entry must be fixed manually after the conversion.
> virt-v2v: WARNING: /boot/grub/device.map references an unknown device /dev/sda. This entry must be fixed manually after the conversion.
> sh: sh: at /usr/share/perl5/vendor_perl/Sys/VirtConvert/GuestfsHandle.pm line 200.
> at /usr/share/perl5/vendor_perl/Sys/VirtConvert/Converter/RedHat.pm line 2321
I don't mind the warnings; I have had similar errors before and was able to
correct them manually.
But here, the last two lines are fatal.
It seems oVirt tries to guess which OS is being imported and then performs
OS-specific actions, and it does them badly.
Either there's a way to prevent oVirt from guessing, or there's a way to
correct the actions oVirt is failing to perform...
Googling was not very helpful on this issue.
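If it helps, I can re-run the conversion with libguestfs tracing and debugging
enabled (these are the standard libguestfs environment variables; the command
line is otherwise unchanged from what I used above):

# Re-run with libguestfs trace/debug output for more detail on the
# guest-side command that fails (same placeholders as in my command):
LIBGUESTFS_TRACE=1 LIBGUESTFS_DEBUG=1 \
  virt-v2v -i libvirt -ic qemu+ssh://xxxx@xxxx/system -o rhev \
  -os xxxx:/data/vmexport -of qcow2 -oa sparse -n ovirtmgmt serv-rhas3-vm1

I can post the resulting debug log if that would be useful.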
Regards.
--
Nicolas Ecarnot
[Users] issues with live snapshot
by andreas.ewert@cbc.de
Hi,
I want to create a live snapshot, but it fails at the finalizing task. There are 3 events:
- Snapshot 'test' creation for VM 'snaptest' was initiated by EwertA
- Failed to create live snapshot 'test' for VM 'snaptest'. VM restart is recommended.
- Failed to complete snapshot 'test' creation for VM 'snaptest'.
Thread-338209::DEBUG::2014-02-13 08:40:19,672::BindingXMLRPC::965::vds::(wrapper) client [10.98.229.5]::call vmSnapshot with ('31c185ce-cc2e-4246-bf46-fcd96cd30050', [{'baseVolumeID': 'b9448428-b787-4286-b54e-aa54a8f8bb17', 'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volumeID': 'c677d01e-dc50-486b-a532-f88a71666d2c', 'imageID': 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}], '') {}
Thread-338209::DEBUG::2014-02-13 08:40:19,672::task::579::TaskManager.Task::(_updateState) Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::moving from state init -> state preparing
Thread-338209::INFO::2014-02-13 08:40:19,672::logUtils::44::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='54f86ad7-2c12-4322-b2d1-f129f3d20e57', spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', imgUUID='db6faf9e-2cc8-4106-954b-fef7e4b1bd1b', leafUUID='c677d01e-dc50-486b-a532-f88a71666d2c')
Thread-338209::DEBUG::2014-02-13 08:40:19,673::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57`ReqID=`630a701e-bd44-49ef-8a14-f657b8653a33`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3236' at 'prepareImage'
Thread-338209::DEBUG::2014-02-13 08:40:19,673::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' for lock type 'shared'
Thread-338209::DEBUG::2014-02-13 08:40:19,673::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' is free. Now locking as 'shared' (1 active user)
Thread-338209::DEBUG::2014-02-13 08:40:19,673::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57`ReqID=`630a701e-bd44-49ef-8a14-f657b8653a33`::Granted request
Thread-338209::DEBUG::2014-02-13 08:40:19,674::task::811::TaskManager.Task::(resourceAcquired) Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::_resourcesAcquired: Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57 (shared)
Thread-338209::DEBUG::2014-02-13 08:40:19,675::task::974::TaskManager.Task::(_decref) Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::ref 1 aborting False
Thread-338209::DEBUG::2014-02-13 08:40:19,675::lvm::440::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-338209::DEBUG::2014-02-13 08:40:19,675::lvm::309::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ \'a|/dev/mapper/36000d7710000ec7c7d5beda78691839c|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 54f86ad7-2c12-4322-b2d1-f129f3d20e57' (cwd None)
Thread-338209::DEBUG::2014-02-13 08:40:19,715::lvm::309::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
Thread-338209::DEBUG::2014-02-13 08:40:19,739::lvm::475::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-338209::DEBUG::2014-02-13 08:40:19,740::lvm::475::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-338209::DEBUG::2014-02-13 08:40:19,741::lvm::309::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ \'a|/dev/mapper/36000d7710000ec7c7d5beda78691839c|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --available y 54f86ad7-2c12-4322-b2d1-f129f3d20e57/c677d01e-dc50-486b-a532-f88a71666d2c' (cwd None)
Thread-338209::DEBUG::2014-02-13 08:40:19,800::lvm::309::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
Thread-338209::DEBUG::2014-02-13 08:40:19,801::lvm::526::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-338209::DEBUG::2014-02-13 08:40:19,801::lvm::538::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-338209::WARNING::2014-02-13 08:40:19,801::fileUtils::167::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/54f86ad7-2c12-4322-b2d1-f129f3d20e57/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b already exists
Thread-338209::DEBUG::2014-02-13 08:40:19,801::blockSD::1068::Storage.StorageDomain::(createImageLinks) img run vol already exists: /var/run/vdsm/storage/54f86ad7-2c12-4322-b2d1-f129f3d20e57/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/b9448428-b787-4286-b54e-aa54a8f8bb17
Thread-338209::DEBUG::2014-02-13 08:40:19,802::blockSD::1068::Storage.StorageDomain::(createImageLinks) img run vol already exists: /var/run/vdsm/storage/54f86ad7-2c12-4322-b2d1-f129f3d20e57/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/bac74b7e-94f0-48d2-a5e5-8b2e846411e8
Thread-338209::DEBUG::2014-02-13 08:40:19,802::blockSD::1040::Storage.StorageDomain::(linkBCImage) path to image directory already exists: /rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b
Thread-338209::DEBUG::2014-02-13 08:40:19,802::lvm::440::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-338209::DEBUG::2014-02-13 08:40:19,803::lvm::309::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ \'a|/dev/mapper/36000d7710000ec7c7d5beda78691839c|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 54f86ad7-2c12-4322-b2d1-f129f3d20e57' (cwd None)
Thread-338209::DEBUG::2014-02-13 08:40:19,831::lvm::309::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
Thread-338209::DEBUG::2014-02-13 08:40:19,849::lvm::475::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-338209::DEBUG::2014-02-13 08:40:19,849::lvm::475::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-338209::INFO::2014-02-13 08:40:19,850::logUtils::47::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset': 128974848, 'path': '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c', 'volumeID': 'c677d01e-dc50-486b-a532-f88a71666d2c', 'leasePath': '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, 'path': '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c', 'imgVolumesInfo': [{'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset': 128974848, 'path': '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c', 'volumeID': 'c677d01e-dc50-486b-a532-f88a71666d2c', 'leasePath': '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, {'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset': 127926272, 'path': '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/b9448428-b787-4286-b54e-aa54a8f8bb17', 'volumeID': 'b9448428-b787-4286-b54e-aa54a8f8bb17', 'leasePath': '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, {'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset': 111149056, 'path': '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/bac74b7e-94f0-48d2-a5e5-8b2e846411e8', 'volumeID': 'bac74b7e-94f0-48d2-a5e5-8b2e846411e8', 'leasePath': '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}]}
Thread-338209::DEBUG::2014-02-13 08:40:19,850::task::1168::TaskManager.Task::(prepare) Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::finished: {'info': {'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset': 128974848, 'path': '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c', 'volumeID': 'c677d01e-dc50-486b-a532-f88a71666d2c', 'leasePath': '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, 'path': '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c', 'imgVolumesInfo': [{'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset': 128974848, 'path': '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c', 'volumeID': 'c677d01e-dc50-486b-a532-f88a71666d2c', 'leasePath': '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, {'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset': 127926272, 'path': '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/b9448428-b787-4286-b54e-aa54a8f8bb17', 'volumeID': 'b9448428-b787-4286-b54e-aa54a8f8bb17', 'leasePath': '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}, {'domainID': '54f86ad7-2c12-4322-b2d1-f129f3d20e57', 'volType': 'path', 'leaseOffset': 111149056, 'path': '/rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/bac74b7e-94f0-48d2-a5e5-8b2e846411e8', 'volumeID': 'bac74b7e-94f0-48d2-a5e5-8b2e846411e8', 'leasePath': '/dev/54f86ad7-2c12-4322-b2d1-f129f3d20e57/leases', 'imageID': 'db6faf9e-2cc8-4106-954b-fef7e4b1bd1b'}]}
Thread-338209::DEBUG::2014-02-13 08:40:19,850::task::579::TaskManager.Task::(_updateState) Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::moving from state preparing -> state finished
Thread-338209::DEBUG::2014-02-13 08:40:19,850::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57': < ResourceRef 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57', isValid: 'True' obj: 'None'>}
Thread-338209::DEBUG::2014-02-13 08:40:19,851::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-338209::DEBUG::2014-02-13 08:40:19,851::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57'
Thread-338209::DEBUG::2014-02-13 08:40:19,851::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' (0 active users)
Thread-338209::DEBUG::2014-02-13 08:40:19,851::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57' is free, finding out if anyone is waiting for it.
Thread-338209::DEBUG::2014-02-13 08:40:19,851::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.54f86ad7-2c12-4322-b2d1-f129f3d20e57', Clearing records.
Thread-338209::DEBUG::2014-02-13 08:40:19,852::task::974::TaskManager.Task::(_decref) Task=`8675b6b0-3216-46a8-8d9a-d0feb02d5b49`::ref 0 aborting False
Thread-338209::INFO::2014-02-13 08:40:19,852::clientIF::353::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/mnt/blockSD/54f86ad7-2c12-4322-b2d1-f129f3d20e57/images/db6faf9e-2cc8-4106-954b-fef7e4b1bd1b/c677d01e-dc50-486b-a532-f88a71666d2c
Thread-338209::DEBUG::2014-02-13 08:40:19,852::vm::3743::vm.Vm::(snapshot) vmId=`31c185ce-cc2e-4246-bf46-fcd96cd30050`::<domainsnapshot>
Thread-338209::DEBUG::2014-02-13 08:40:19,865::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported configuration: reuse is not supported with this QEMU binary
Thread-338209::DEBUG::2014-02-13 08:40:19,865::vm::3764::vm.Vm::(snapshot) vmId=`31c185ce-cc2e-4246-bf46-fcd96cd30050`::Snapshot failed using the quiesce flag, trying again without it (unsupported configuration: reuse is not supported with this QEMU binary)
Thread-338209::DEBUG::2014-02-13 08:40:19,869::libvirtconnection::108::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 67 edom: 10 level: 2 message: unsupported configuration: reuse is not supported with this QEMU binary
Thread-338209::ERROR::2014-02-13 08:40:19,869::vm::3768::vm.Vm::(snapshot) vmId=`31c185ce-cc2e-4246-bf46-fcd96cd30050`::Unable to take snapshot
Thread-338209::DEBUG::2014-02-13 08:40:19,870::BindingXMLRPC::972::vds::(wrapper) return vmSnapshot with {'status': {'message': 'Snapshot failed', 'code': 48}}
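If it helps in narrowing this down, I believe the failing libvirt call can be
reproduced outside VDSM with virsh along these lines (the domain name is mine,
the target volume path is a placeholder, and --reuse-external is, as far as I
understand, the flag the "reuse is not supported with this QEMU binary" error
refers to):

# Sketch of the equivalent external disk-only snapshot request via virsh
# (vdsm pre-creates the target volume and asks libvirt to reuse it,
#  which this QEMU build rejects; disk target and path are placeholders):
virsh snapshot-create-as snaptest test --disk-only --no-metadata \
      --reuse-external \
      --diskspec vda,snapshot=external,file=/path/to/precreated/volume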
What can I do to fix this?
best regards
Andreas
[Users] Opaque project on wiki?
by i iordanov
Hey guys,
I was wondering if you're interested in featuring the Opaque project
somewhere on the wiki? If so, where?
It's now in production, and the source code (GPLv3) is available in my
repository, which houses all my remote desktop client software:
https://github.com/iiordanov/remote-desktop-clients
Thanks!
iordan
--
The conscious mind has only one thread of execution.
[Users] Call for Participation: FISL
by Brian Proffitt
Leonardo Vaz is organizing the participation of Red Hat-sponsored open source projects at FISL, the largest FOSS conference in Latin America, which will take place from
May 7th to 10th in Porto Alegre, Brazil:
http://softwarelivre.org/fisl15
The call for papers for the conference ends next Monday, February 17th, and talks can be submitted at the URL below:
http://papers.softwarelivre.org/papers_ng
If you have any questions about the conference, feel free to contact Leo at lvaz(a)redhat.com.
Peace,
Brian
--
Brian Proffitt - oVirt Community Manager
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 312 477 4320 / Cell: +1 574 383 9BKP
IRC: bkp
[Users] Ovirt Gluster problems
by Juan Pablo Lorier
Hi,
I had some issues with a Gluster cluster, and after some time trying to
get the storage domain up or delete it (I opened a BZ about a deadlock
in the process of removing the domain) I gave up and destroyed the DC.
The thing is that I want to add the hosts that were part of that DC, and
now I find that I can't, as they still have the volume. I try to stop the
volume, but I can't, as no host is running in the deleted cluster and for
some reason oVirt needs that.
I can't delete the hosts either, as they have the volume... so I'm back
in another chicken-and-egg problem.
Any hints??
PS: I can't nuke the whole oVirt platform, as I have another DC in
production... otherwise I would :-)
Regards,
[Users] possible to attach cloud-init data to vm started from cd rom?
by Sven Kieske
Hi,
I would like to know if the following scenario is supported in oVirt
(currently testing 3.3.2):
You have a VM with an attached disk.
You want to start this VM with an ISO image attached via CD-ROM, and
you want to pass cloud-init metadata to this started system via
"run once" or REST.
I know that oVirt passes the metadata by creating an ISO and
attaching that ISO to the VM as a CD-ROM itself.
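For the REST variant, what I have in mind is roughly the following run-once
call. This is only a sketch from memory of the 3.3-era v3 API; the element
names (use_cloud_init, os/boot, cdroms, initialization/cloud_init), the ISO
file id and the engine/host names are assumptions that should be checked
against https://<engine>/api?rsdl:

# Hypothetical run-once request: boot from an attached ISO and pass
# cloud-init data in one start action (names and ids are placeholders):
curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
  -X POST 'https://engine.example.com/api/vms/<vm_id>/start' -d '
<action>
  <use_cloud_init>true</use_cloud_init>
  <vm>
    <os>
      <boot dev="cdrom"/>
    </os>
    <cdroms>
      <cdrom>
        <file id="my-boot-image.iso"/>
      </cdrom>
    </cdroms>
    <initialization>
      <cloud_init>
        <host>
          <address>testvm.example.com</address>
        </host>
      </cloud_init>
    </initialization>
  </vm>
</action>'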
My tests so far included the following:
First test:
1. shut down the VM
2. click Run Once
3. switch the boot order to boot from CD-ROM, attach an ISO
4. activate cloud-init metadata and pass some data through it
Actual result: the system boots from the hard disk, not from
the attached ISO.
Second test:
1. shut down the VM
2. edit the VM, change the boot order to boot from CD-ROM, attach an ISO
3. click Run Once
4. activate cloud-init metadata and pass some data through it
Actual result: the system boots from the hard disk, not from
the attached ISO.
Is this not possible or am I doing it wrong?
Any hints would be appreciated!
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
[Users] VNC support in Opaque
by i iordanov
Hi Sven,
I would like to separate the discussion of features in Opaque from my
question about the Wiki, so I'm sending a separate email regarding your
question.
> Does it support the novnc connection to the vm too, or just spice?
> This would be _huge_ !
Do you mean supporting VMs which have VNC set as the console type? That is
a planned feature, but I wasn't sure how to prioritize it. A very fast
and capable Android VNC client (bVNC) is also part of my
remote-desktop-clients repository, and I was planning to make use of it
to give Opaque that capability.
Do we have a consensus on how important such a feature would be? Can I hear
from others too?
Thanks!
iordan
--
The conscious mind has only one thread of execution.
[Users] oVirt 3.4 2nd test day statistics
by Doron Fediuck
Hi all,
thanks for joining the oVirt 3.4 2nd test day!
Thanks to your efforts we 'earned' 66 new BZs ;)
Some specific data below:
1. BZ list by group:
https://hurl.corp.redhat.com/6de0f3a
""Whiteboard" "Number of bugs"
" " 5
"gluster" 1
"i18n" 1
"infra" 14
"integration" 7
"network" 12
"storage" 16
"storage virt" 1
"ux" 2
"virt" 7
2. BZ list by reporter:
https://hurl.corp.redhat.com/121d9b8
Top 5 reporters
"Reporter" "Number of bugs"
"dron(a)redhat.com" 8
"mpavlik(a)redhat.com" 8
"ahadas(a)redhat.com" 4
"jbelka(a)redhat.com" 3
"myakove(a)redhat.com" 3
"sbonazzo(a)redhat.com" 3
"tjelinek(a)redhat.com" 3
3. IRC activity (top 5)
112 jbrooks
91 fabiand
79 didi
78 brad_mssw
73 OaaSvc
Join us next week for the 3rd (and final!) 3.4 test day.
Doron