oVirt 3.5 Test Day 1 Results
by Martin Perina
Hi,
I tested these features:
1073453 - OVIRT35 - [RFE] add Debian 7 to the list of operating systems when creating a new vm
Info: Debian 7 is listed in the OS list in the New VM dialog
Result: success
1047624 - OVIRT35 - [RFE] support BIOS boot device menu
Info: The boot menu has to be enabled in the Edit VM dialog (Boot Options tab, Enable boot menu). Once enabled,
the user can press F12 and select a boot device in the same way as in a standard BIOS.
Result: success
During testing I found these issues:
1) Engine installation problem on CentOS 6.5
Package ovirt-engine-userportal-3.5.0-0.0.master.20140629172257.git0b16ed7.el6.noarch.rpm is not signed
After disabling the GPG signature check in /etc/yum.repos.d/ovirt-3.5.repo, the installation continued fine.
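For reference, the workaround is just flipping the gpgcheck flag in that repo file; the section name and baseurl below are illustrative, not the exact contents:

# /etc/yum.repos.d/ovirt-3.5.repo
[ovirt-3.5]
baseurl=...
enabled=1
# changed from gpgcheck=1 to skip the signature check for the unsigned beta packages
gpgcheck=0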
2) Engine installation problem on CentOS 6.5
The engine indirectly depends on the batik package, but xmlgraphics-batik is installed instead of it.
I filed a bug [1].
3) Packages ioprocess and python-ioprocess are not available in the oVirt repository for the 3.5 beta (even though
they are available in the master-snapshot-static repository).
Created a ticket for infra: https://fedorahosted.org/ovirt/ticket/205
Martin
[1] https://bugzilla.redhat.com/1114921
ovirt-engine 3.5 branched
by Yedidyah Bar David
Hi all,
ovirt-engine-3.5 was branched from master.
The commit used was 0b16ed7a76d3fbe106e15263211f1a64f075df0c:
core: validation error on edit instance type
This is the same commit that was used for the beta build being used in today's test day.
Developers: Note that new changes have been committed to master since this commit.
Please cherry-pick/push to the 3.5 branch any changes that should be there.
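For example, assuming the usual gerrit workflow with an 'origin' remote (the branch name is the one mentioned above; the commit sha is a placeholder), a backport looks roughly like:

git fetch origin
git checkout -b my-backport origin/ovirt-engine-3.5
git cherry-pick <commit-sha>
git push origin HEAD:refs/for/ovirt-engine-3.5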
Best regards,
--
Didi
ovirt-engine-3.5 branch is way too old
by Alon Bar-Lev
Hi,
The following backlog is post-branching, since the branch was cut at a random point in the effort.
As far as I can see, all of these should go into 3.5 anyway; if someone could do us the service and simply re-create the branch on top of current master, it would save each individual developer the effort of backporting.
Next time the branch should be created only after at least one test day is over and the major issues found have been fixed.
A beta is a tag in time, not a branch in time.
Thanks,
Alon
549d9e6 engine: NetworkValidator uses new validation syntax
9f0310b engine: Clear syntax for writing validations
52c6b35 host-deploy: appropriate message for kdump detection
375c554 core: Use force detach only on Data SD
8f02a74 engine: no need to save vm_static on run once
c6851e4 ui: remove Escape characters for TextBoxLabel
5e37215 ui: improve hot plug cpu wording
028c175 engine: Rename providerId to networkProviderId in add/update host actions
5b4d20c engine: Configure unique host name on neutron.conf
90eb1d2 extapi: aaa: add auth result to credential change
994996b backend: Add richer formatting of migration duration
98e293b core: handle fence agent power wait param on stop
bb9ecfb engine: Clear eclipse warning in AddVdsCommand
36dd138 aaa: always use engine context for queries
24f0cf8 restapi: rsdl_metadata - quota.id in add disk
7161ac0 tools: Expose VmGracefulShutdownTimeout option to engine-config
8255f44 aaa: more fixes to command context propgation
b8feb57 restapi: missing vms link under affinity groups
f056835 core, engine: Fix HotPlugCpuSupported config value
4492ef7 core, engine: Avoid migration in ppc64
2710b07 ui: avoid casting warnings on findbugs
bcb156c core: adding missing command constructor
92c1522 core: Changing Host free space threshold
a0d000b webadmin: column sorting support for Disks sub-tabs
5a0c76f webadmin: column sorting support for Storage sub-tabs
14a625e webadmin: column sorting support for Disks tabs
a32d199 core: DiskConditionField - extract verbs to constants
48cc09d core: fixed searching disks by creation date
[vdsm] Infrastructure design for node (host) devices
by Martin Polednik
Hello,
I'm actively working on getting host device passthrough (PCI, USB and SCSI)
exposed in VDSM, but I've run into the growing complexity of this feature.
The devices are currently created in the same manner as virtual devices, and
they are reported via the hostDevices list in getCaps. As I implemented
USB and SCSI devices, the size of this list nearly doubled - and that is
on a laptop.
There is a similar problem with the devices themselves: they are closely tied to the
host, and currently the engine would have to keep their mapping to VMs, reattach
loose devices back to the host, and handle all of this in case of migration.
I would like to hear your opinion on building something like a host device pool
in VDSM. The pool would be populated and periodically updated (to handle
hot(un)plugs), and VMs/the engine could query it for free/assigned/possibly problematic
devices (which could be reattached by the pool). This has the added benefit of
requiring fewer libvirt calls, at the cost of a bit more complexity and possibly one extra thread.
Persistence of the pool across VDSM restarts could be kept in config or reconstructed
from XML.
I'd need new API verbs to allow the engine to communicate with the pool,
possibly leaving caps as they are; the engine could then detect a newer
VDSM by the presence of these API verbs. The vmCreate call would remain almost the
same, only with the addition of a new device type for VMs (whose detach and tracking
routine would be coordinated with the pool).
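To make the idea more concrete, here is a rough sketch of the kind of pool I have in
mind. All names (HostDevicePool, listFreeDevices, assignDevice, releaseDevice) are made
up for illustration, and enumeration is stubbed out where the real code would go
through libvirt's node device APIs:

import threading


class HostDevice(object):
    """A single host device tracked by the pool."""

    def __init__(self, name, capability):
        self.name = name              # e.g. "pci_0000_00_1b_0"
        self.capability = capability  # "pci", "usb_device", "scsi", ...
        self.vmId = None              # id of the VM it is assigned to, or None


class HostDevicePool(object):
    """Keeps an up-to-date view of host devices and their assignments."""

    def __init__(self):
        self._lock = threading.Lock()
        self._devices = {}            # name -> HostDevice

    def refresh(self):
        """Re-enumerate host devices; run periodically to catch hot(un)plug."""
        with self._lock:
            for dev in self._enumerateHostDevices():
                # keep existing assignments, pick up newly plugged devices
                self._devices.setdefault(dev.name, dev)

    def listFreeDevices(self):
        with self._lock:
            return [d.name for d in self._devices.values() if d.vmId is None]

    def assignDevice(self, name, vmId):
        with self._lock:
            self._devices[name].vmId = vmId

    def releaseDevice(self, name):
        """Mark the device free; this is also where it would be reattached
        back to its host driver."""
        with self._lock:
            self._devices[name].vmId = None

    def _enumerateHostDevices(self):
        # Stub: the real implementation would list node devices via libvirt
        # instead of returning static sample data.
        return [HostDevice('pci_0000_00_1b_0', 'pci'),
                HostDevice('usb_usb1', 'usb_device')]

The new API verbs would then be thin wrappers around listFreeDevices/assignDevice/releaseDevice,
and vmCreate would call something like assignDevice for each passed-through device.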