I remember that previously we exposed both used and total size for VM
images. Right now the API seems to expose only total, not used, while
the GUI does expose used. Do I remember this correctly?
A second question on snapshots: here I think we never showed used/total,
and the GUI does not show this either. Do we track this data in the
backend, or would this also require backend changes?
Please let me know and I'll file the right BZs.
I have updated the ovf uploader in response to some comments in
gerrit and to handle some serious Python 2.6 issues. The 2.6 version
of ElementTree (i.e. the XML parser) was producing invalid XML.
Fortunately, lxml is a drop-in replacement.
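For context, the usual way to make lxml a drop-in replacement is a guarded import; a minimal sketch (the OVF-like element names below are illustrative, not the uploader's actual schema):

```python
# Prefer lxml's etree, which emits well-formed XML even on Python 2.6;
# fall back to the stdlib ElementTree when lxml is not installed.
try:
    from lxml import etree
except ImportError:
    import xml.etree.ElementTree as etree

# Both backends expose the same Element/SubElement/tostring API,
# so the document-building code is identical either way.
envelope = etree.Element('Envelope')
disk = etree.SubElement(envelope, 'Disk')
disk.set('capacity', '1073741824')

xml_bytes = etree.tostring(envelope)
```

Because the two modules share the Element API, swapping the import is the only change the rest of the code needs.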
After taking a look at ovirt-engine, I happened to notice that only one
type of data domain can be attached to a DC (storage pool). As far as I
know, on the VDSM side different data storage types can be managed by
one storage pool. So why is there such a limitation? Is it possible to
remove this rule from the front-end?
Maybe there are reasons for this rule that I don't know about; I hope
someone can shed some light on it.
I ran into some JAXB-annotated beans in engine, and the annotations seemed to be a bit unsettled.
- Some annotations are on the field, some on the getter.
- I also found a case where the setter is private; I was puzzled for a minute :-) It turns out JAXB sets the value through reflection in that case, so the setter really is dead code.
- Some annotations state a name, others just rely on the defaults.
Is there an agreement on the annotations?
It would be great if we always annotated the getter and used an explicit name in the @XmlElement annotation, even when it equals the default, so it is less likely to break during refactoring.
Hi, I have preliminary (WIP) patches for shared FS up on gerrit. There
is a lot of work to be done reorganizing the patches but I just wanted
all the TLV guys to have a chance to look at it on Sunday.
I did some testing and it should work as expected for most cases.
To test, just call connectStorageServer with storageType=6 (sharedfs).
connection params are
to check with an existing NFS domain you can just
I only tested NFS but I am going to test more exotic stuff on Monday.
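Not having the actual parameters quoted here, a hypothetical sketch of what a conList entry for connectStorageServer might look like with storageType=6 — the key names ('connection', 'vfs_type', 'mnt_options') and the placeholder UUID are assumptions for illustration, not taken from the patches:

```python
SHAREDFS_DOM_TYPE = 6  # the storageType value mentioned above

def build_connection(connection, vfs_type, mnt_options=''):
    """Build one hypothetical conList entry for connectStorageServer.

    Every key name here is an assumption for illustration,
    not the real API from the patches.
    """
    return {
        'connection': connection,    # e.g. 'server:/export/path'
        'vfs_type': vfs_type,        # filesystem type to mount, e.g. 'nfs'
        'mnt_options': mnt_options,  # extra options passed through to mount
        'id': '00000000-0000-0000-0000-000000000000',  # placeholder UUID
    }

con_list = [build_connection('server:/export/data', 'nfs', mnt_options='vers=3')]
```

The point of a generic shared-FS type is that the same dict shape should cover NFS and the more exotic filesystems alike, with only vfs_type and mnt_options varying.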
This is the patch to build the RPM from.
Have a good weekend
I looked into Mike's database patch ( http://gerrit.ovirt.org/#change,500 ) today and read Yaniv's comment on it. I have seen other patches related to enums and how they are stored in the database. I made a quick test comparing varchar and enum; the results are here: http://dummywarhead.blogspot.com/2011/12/postgresql-enums-vs-varchar.html
IMO enums could be a good solution, but changing an enum can be a pain before PostgreSQL 9.1. So what if we used varchar now and migrated to enum once PostgreSQL 9.1 replaces the older installations? :)
I read the design here, and I'd like to make sure that the future road
map will expand beyond the current scope.
The current design relies entirely on libvirt and does not parse the
content of the PCI addressing. That's really basic. The user should be
able to specify the PCI slot allocation of his devices through the GUI;
I guess you won't be able to do that with the current scheme.
Also, what about devices that can't be hot-plugged (like qxl)? You need
to reveal this info to the user. Currently the KVM BIOS (SeaBIOS) can
automatically disable hot plug for some critical devices, like the VGA
device (qxl) and others. The user should be allowed to hot plug/unplug
only permitted devices.
You have to make your design work with PCI bridges, since we'll add them
to qemu, and once a VM has such a bridge (management should enable it)
there will be more PCI slots available to it.
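For reference, explicit slot allocation in a libvirt domain XML is expressed with a per-device <address> element; the sketch below just builds one with ElementTree (the domain/bus/slot/function values are illustrative):

```python
import xml.etree.ElementTree as ET

# Build a libvirt-style PCI <address> element that pins a device to a
# fixed slot; the address values here are illustrative only.
address = ET.Element('address', {
    'type': 'pci',
    'domain': '0x0000',
    'bus': '0x00',
    'slot': '0x05',
    'function': '0x0',
})
xml_text = ET.tostring(address, encoding='unicode')
```

A GUI that lets the user pick slot allocations would ultimately need to emit (and parse back) elements of this shape rather than leaving addressing entirely to libvirt.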