[Users] so, what do you want next in oVirt?

earlier in the year we did a survey for feature requests / improvements / etc.

since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?

below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).

Thanks,
Itamar

[1] from the top 12:
V Allow disk resize
V Integrate Nagios/Zabbix monitoring - via a UI plugin
V Highly available engine - via hosted engine[2]
V Open vSwitch integration - via Neutron integration
X Allow cloning VMs without template
? Enable hypervisor upgrade/updates through engine[3]
V Allow engine on an oVirt hosted VM - via hosted engine[2]
V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
X Integrate v2v into engine
? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4]
X Integrate scheduling of snapshots and VM export for backups in engine[5]
V Spice - support Google Chrome - via MIME-based launch

Other items mentioned in the previous survey which should be covered by now:
- Fix timeout when adding local host during all-in-one configuration
- Fix engine set-up when SELinux is disabled
- Provide packages for el6 (CentOS, Red Hat Enterprise Linux)
- Allow multiple VMs to be deployed from the same template at the same time
- ISO domains on local/GlusterFS storage
- Show IP addresses in Virtual Machines -> Network Interfaces
- OpenStack Quantum support (now called Neutron)
- noVNC support
- Support spice.html5 and websocket proxy
- Add other guest OSes to list
- Port oVirt guest agent to Ubuntu[6]
- SLA - allow resource time-sharing
- Spice - Mac client (via MIME-based launch)
- Spice - port XPI plug-in to Windows (not sure this will happen, but MIME-based launch allows using Firefox now)
- Spice - client for Ubuntu/Debian (should be covered via MIME-based launch)

[2] Hosted engine is in active development, but not released yet.
[3] Host update is supported, but not for general yum update.
[4] A lot of improvements were done in this space, but I'm not sure if they cover this exact use case.
[5] The backup API is now being pushed to master, and orchestration of backups should probably happen via 3rd-party backup vendors?
[6] I'm not sure packaging exists yet, but Ubuntu is covered for the basic functionality of the guest agent.
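One item in the list, guest configuration (root password, SSH keys, network) via cloud-init, can be illustrated with a minimal cloud-config user-data file; this is a generic sketch, not oVirt's exact payload, and the hostname, password, and key values are made up:

```yaml
#cloud-config
# Illustrative cloud-init user-data covering the survey item:
# root password, SSH keys, and basic identity for a new guest.
hostname: guest01
chpasswd:
  list: |
    root:changeme
  expire: false
ssh_authorized_keys:
  - ssh-rsa AAAA...example user@workstation
```

The engine would hand a file like this to the guest on first boot, and cloud-init inside the guest applies it.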

Joop van de Wege replied:

What I would like to see in the next version is PXE boot of the nodes. Probably not easy to achieve because of the dependency on DHCP.

Joop

On 08/21/2013 02:40 AM, Joop van de Wege wrote:
What I would like to see in the next version is PXE boot of the nodes. Probably not easy to achieve because of the dependency on DHCP.
Hi Joop,

Can you please give a bit more information on the use case / how you envision this?

Current thinking around bare-metal provisioning of hosts is to extend the functionality around the Foreman provider for this, but you may have other suggestions?

Thanks,
Itamar

On 08/21/2013 12:11 PM, Itamar Heim wrote:
On 08/21/2013 02:40 AM, Joop van de Wege wrote:
What I would like to see in the next version is PXE boot of the nodes. Probably not easy to achieve because of the dependency on DHCP.
Hi Joop,
Can you please give a bit more information on the use case / how you envision this?

Current thinking around bare-metal provisioning of hosts is to extend the functionality around the Foreman provider for this, but you may have other suggestions?
I think Joop means being able to add hosts (nodes) to a cluster by adding their MAC address to the DHCP list for PXE boot into ovirt-node, and thus join the cluster. This would make it easy to add new physical nodes without any spinning disks or other local storage requirements.

I suppose this may not be easy with complex network connections (bonds on the mgmt network, mgmt network on a tagged VLAN, etc.), but it should be possible if the management network interface is plain and physical.

/Simon

PS: Perhaps Joop can confirm this idea; we've talked about it IRL.
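The "MAC address on the DHCP list" idea could look roughly like the following ISC dhcpd fragment; all addresses, hostnames, and filenames here are illustrative, not part of any oVirt tooling:

```
# /etc/dhcp/dhcpd.conf on the management network (illustrative values only)
subnet 192.168.100.0 netmask 255.255.255.0 {
  next-server 192.168.100.1;     # TFTP server carrying the ovirt-node image
  filename "pxelinux.0";         # PXE bootloader served over TFTP

  # New physical node, registered by its MAC address;
  # it PXE-boots into ovirt-node and joins the cluster.
  host node01 {
    hardware ethernet 52:54:00:aa:bb:01;
    fixed-address 192.168.100.51;
  }
}
```

Adding a host entry per node is what keeps the setup diskless: the node's identity lives in DHCP, not on local storage.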

On 09/02/2013 05:29 PM, NOC wrote:
On 08/21/2013 12:11 PM, Itamar Heim wrote:
On 08/21/2013 02:40 AM, Joop van de Wege wrote:
What I would like to see in the next version is PXE boot of the nodes. Probably not easy to achieve because of the dependency on DHCP.
Hi Joop,
Can you please give a bit more information on the use case / how you envision this?

Current thinking around bare-metal provisioning of hosts is to extend the functionality around the Foreman provider for this, but you may have other suggestions?
I think Joop means being able to add hosts (nodes) to a cluster by adding their MAC address to the DHCP list for PXE boot into ovirt-node, and thus join the cluster. This would make it easy to add new physical nodes without any spinning disks or other local storage requirements.
We started adding Foreman integration in 3.3: http://www.ovirt.org/Features/ForemanIntegration

Adding Ohad and Oved for their thoughts on this.
I suppose this may not be easy with complex network connections (bonds on the mgmt network, mgmt network on a tagged VLAN, etc.), but it should be possible if the management network interface is plain and physical.
/Simon
PS: Perhaps Joop can confirm this idea; we've talked about it IRL.

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On 2-9-2013 16:32, Itamar Heim wrote:
On 09/02/2013 05:29 PM, NOC wrote:
On 08/21/2013 12:11 PM, Itamar Heim wrote:
On 08/21/2013 02:40 AM, Joop van de Wege wrote:
What I would like to see in the next version is PXE boot of the nodes. Probably not easy to achieve because of the dependency on DHCP.
Hi Joop,
Can you please give a bit more information on the use case / how you envision this?

Current thinking around bare-metal provisioning of hosts is to extend the functionality around the Foreman provider for this, but you may have other suggestions?
I think Joop means being able to add hosts (nodes) to a cluster by adding their MAC address to the DHCP list for PXE boot into ovirt-node, and thus join the cluster. This would make it easy to add new physical nodes without any spinning disks or other local storage requirements.
We started adding Foreman integration in 3.3: http://www.ovirt.org/Features/ForemanIntegration
Adding Ohad and Oved for their thoughts on this.
I suppose this may not be easy with complex network connections (bonds on the mgmt network, mgmt network on a tagged VLAN, etc.), but it should be possible if the management network interface is plain and physical.
/Simon
PS: Perhaps Joop can confirm this idea; we've talked about it IRL.
This isn't about provisioning with Foreman. It's about having the compute nodes NOT have any spinning disks. So the only way to start a node is to PXE-boot it and then let it (re)connect with the engine. It will then be identified by the engine as either a new node or a reconnecting node, and it will get its configuration from the engine.

For reference: that's how VirtualIron works. It has a management network, just like oVirt, and on that it runs a TFTP and DHCP server. Nodes are plugged into the management network, without disks, and then PXE-booted, after which they appear in the web UI as new unconfigured nodes. You can then set various settings, and upon rebooting the node will receive these settings because it is recognised by its MAC address.

The advantage of this construct is that you can place a new server into a rack, cable it, power it on, and go back to your office, where you'll find the new node waiting to be configured. No messing around with CDs to install an OS, no being in the datacenter for hours on end; just in and out.

Yes, disks are cheap, but they break down, need maintenance, mean downtime, and in general cost more admin time than when you don't have them. (It's a shame to have a RAID1 of two 1 TB disks just to install an OS of less than 10 GB.)

Regards,

Joop
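The boot side of Joop's diskless workflow could be sketched as a PXELINUX menu entry that loads an ovirt-node image over TFTP; the paths, the image name, and the `management_server` argument here are illustrative assumptions, not a documented oVirt interface:

```
# /var/lib/tftpboot/pxelinux.cfg/default (illustrative values only)
DEFAULT ovirt-node
TIMEOUT 30

LABEL ovirt-node
  # Kernel and initrd of the diskless node image, served over TFTP
  KERNEL ovirt-node/vmlinuz
  APPEND initrd=ovirt-node/initrd.img ro \
      management_server=engine.example.com:443
```

Once a node boots this image, registration against the engine by MAC address would happen in the node image itself, which is the part VirtualIron bundles and oVirt would still need to provide.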

On Thu 2013-09-05 09:30 +0200, noc wrote:
Yes, disks are cheap, but they break down, need maintenance, mean downtime, and in general cost more admin time than when you don't have them. (It's a shame to have a RAID1 of two 1 TB disks just to install an OS of less than 10 GB.)
What about swap? Don't they have any swap file or partition at all? Swap-file over NFS must suck quite badly, no? We have bought semi-cheap SSDs for our hosts to use as swap, to get more out of them from memory overprovisioning.

--
Best regards,
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg@slu.se

On 09/06/2013 08:11 AM, Karli Sjöberg wrote:
What about swap? Don't they have any swap file or partition at all? Swap-file over NFS must suck quite badly, no? We have bought semi-cheap SSDs for our hosts to use as swap, to get more out of them from memory overprovisioning.
I don't quite see the point, unless you have mirrored SSDs. Data corruption on SSDs is more frequent than on spinning disks, and disastrous when it happens in a swapped-out page... (And why not add more RAM?)

But why use swap at all? I'm assuming you dimension a node for its intended use, so I guess lots of memory and CPU cores (in our case anyway), and you can allocate the available memory to the VMs until it runs out. I don't see much need for swap there.

Perhaps my view is different than usual: I see swap as the gravel escape roads on steep hills for trucks/cars with failing brakes. It just reduces the damage in case of unexpected failures. Swap isn't something to use on a regular basis as cheap memory, as RAM is not that expensive anymore and is magnitudes faster.

/Simon
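Simon's "escape road" view of swap, keep it configured but discourage routine use, translates to a kernel tunable; a minimal sketch, with the value chosen only for illustration:

```
# /etc/sysctl.d/99-swap.conf (illustrative value)
# Low swappiness makes the kernel prefer reclaiming page cache
# over swapping out anonymous pages, so swap is touched mainly
# under real memory pressure rather than as routine cheap memory.
vm.swappiness = 10
```

Hosts that overprovision memory on purpose, as Karli describes, would instead leave swappiness higher and accept routine swap traffic.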

On Fri 2013-09-06 09:46 +0200, NOC wrote:
I don't quite see the point, unless you have mirrored SSDs. Data corruption on SSDs is more frequent than on spinning disks, and disastrous when it happens in a swapped-out page...

Yes, they absolutely need to be mirrored. I disagree, though, that modern SSDs are more failure-prone than spinning disks. I would rather say that they are about as failure-prone as any other disk:
http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923-9.html

And if you know which ones to buy, and which ones to watch out for, they may even last you longer:
http://www.hardware-revolution.com/best-ssd-best-hdd-for-your-money-may-2013/

(And why not add more RAM?)

Well, for us, it's mainly because we are extremely budget-constrained, using old, worn-out servers that can't handle putting in that much RAM. Also, older servers require older RAM, which in turn is harder, and more expensive, to come by.

But why use swap at all? ... It just reduces the damage in case of unexpected failures.

Is there a downside to reducing damage in case of unexpected failures? :)

--
Best regards,
Karli Sjöberg
Swedish University of Agricultural Sciences

I thought Foreman already integrates with oVirt / RHEV?

Alex

On 09/05/2013 08:30 AM, noc wrote:
we started adding foreman integration in 3.3: http://www.ovirt.org/Features/ForemanIntegration
adding ohad and oved for their thoughts on this.

On 09/05/2013 10:30 AM, noc wrote:
On 08/21/2013 12:11 PM, Itamar Heim wrote:
On 08/21/2013 02:40 AM, Joop van de Wege wrote:
What I would like to see in the next version is PXE boot of the nodes. Probably not easy to achieve because of the dependency on DHCP.
Hi Joop,
can you please give a bit more information on the use case / how you envision this?
current thinking around bare metal provisioning of hosts is to extend the functionality around the foreman provider for this, but you may have other suggestions?
I think Joop means to be able to add hosts (nodes) to a cluster by adding their MAC address to the dhcp list for PXE boot into ovirt-node and thus join the cluster. This would make it easy to add new physical nodes without any spinning disks or other local storage requirements.
we started adding foreman integration in 3.3: http://www.ovirt.org/Features/ForemanIntegration
adding ohad and oved for their thoughts on this.
I suppose this may not be easy with complex network connections (bonds on mgmt network, mgmt network on a tagged vlan, etc), but it should be possible if the management network interface is plain and physical.
/Simon
PS, Perhaps Joop can confirm this idea, we've talked about it IRL. _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
This isn't about provisioning with Foreman. It's about the compute nodes NOT having any spinning disks, so the only way to start a node is to PXE boot it and then let it (re)connect with the engine. The engine then identifies it as either a new node or a reconnecting node, and it gets its configuration from the engine. For reference, that's how VirtualIron works. It has a management network, just like oVirt, and on that it runs a TFTP and DHCP server. Nodes are plugged into the management network, without disks, and then PXE booted, after which they appear in the web UI as new, unconfigured nodes. You can then set various settings, and upon rebooting the nodes will receive those settings, because each node is recognised by its MAC address. The advantage of this construct is that you can place a new server into a rack, cable it, power it on, and go back to your office, where you'll find the new node waiting to be configured. No messing around with CDs to install an OS, no being in the datacenter for hours on end; just in and out.
Yes, disks are cheap, but they break down, need maintenance, mean downtime, and in general take more admin time than when you don't have them. (It's a shame to have a RAID1 of two 1 TB disks just to install an OS of less than 10 GB.)
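The DHCP/TFTP side of the workflow Joop describes can be sketched as a dnsmasq configuration; the interface name, MAC address, addresses, and paths below are illustrative, not taken from any oVirt setup:

```ini
# dnsmasq acting as combined DHCP + TFTP server on the management network
interface=mgmt0

# Hand out addresses to booting nodes
dhcp-range=192.168.100.50,192.168.100.150,12h

# Pin a known node to a fixed address by its MAC address
dhcp-host=52:54:00:aa:bb:cc,192.168.100.51

# Point PXE clients at the boot loader and serve it over TFTP
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp
```

With this in place, adding a node's MAC address to the configuration is all that's needed for it to PXE boot into the node image and show up for approval.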
Just wondering: how do they prevent a rogue node/guest from masquerading as such a host and getting access/secrets/VMs launched on an untrusted node? (They could easily report a different MAC address if layer 2 isn't hardened against that.) Other than that, yes. We actually used to have this via the AutoApprovePatterns config option, which would have the engine approve a pending node as it registers (I admit I don't think anyone has used this in the last several years, and it may be totally broken by now). Please note this doesn't solve the need for a disk, just the auto-registration part (if it still works).
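The pattern-based auto-approval Itamar mentions could look roughly like the sketch below. This is a hypothetical illustration of the idea, paraphrased from the description above, not actual engine code; the function name and glob-style matching are assumptions:

```python
import fnmatch


def should_auto_approve(host_name: str, patterns: list[str]) -> bool:
    """Return True if a registering host matches any approval pattern.

    Hypothetical sketch of AutoApprovePatterns-style behaviour:
    patterns are shell-style globs matched against the name the
    host registers with.
    """
    return any(fnmatch.fnmatch(host_name, p) for p in patterns)


# A node registering as "node-07.mgmt.example.com" would be approved
# automatically under a pattern like "node-*.mgmt.example.com".
print(should_auto_approve("node-07.mgmt.example.com",
                          ["node-*.mgmt.example.com"]))  # → True
```

As the thread notes, this only automates approval; it does nothing to verify that the registering host is the machine it claims to be.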

On 6-9-2013 10:12, Itamar Heim wrote:
just wondering, how do they prevent a rogue node/guest from masquerading as such a host, getting access/secrets/VMs to be launched on such an untrusted node (they could easily report a different mac address if the layer 2 isn't hardened against that)?
They would need physical access to your rack, which of course is locked; you would need to power down/up, which would trigger an alert; a switch port going down/up would trigger an alert; so you would probably be notified that something not quite right is happening. I haven't gone through the source to see if there is more than just the MAC address check.
other than that, yes. we actually used to have this via the AutoApprovePatterns config option, which would have the engine approve a pending node as it registers (I admit i don't think anyone used this last several years, and it may be totally broken by now).
please note this doesn't solve the need for a disk, just the auto-registration part (if it still works)

What I would like is to have oVirt Node PXE booting and getting its config from the engine, or auto-registering. I know there is a script which converts the ISO into a huge PXE-boot kernel, but I don't know how to solve the configuration part, or whether it's already solved.
@karli: If you run your cluster with Memory Optimization=None then you won't need swap. I have been doing that for years and haven't had a single problem attributed to it. I would just like to have the choice: PXE boot the node and know that you don't have swap; run with disks if you really need overprovisioning. Regards, Joop
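Joop's point about overprovisioning comes down to simple arithmetic: without swap, the memory you can promise to VMs is bounded by physical RAM, so no overcommit means no need for swap. A small illustrative calculation (the function and numbers are made up for this sketch):

```python
def max_vm_memory_gb(physical_ram_gb: float, host_reserved_gb: float,
                     overcommit_ratio: float = 1.0) -> float:
    """Memory that can be promised to VMs on a host.

    With overcommit_ratio == 1.0 (no overcommit, i.e. "Memory
    Optimization = None"), the combined guest allocation always fits
    in physical RAM, so the host never has to swap guest pages out.
    """
    return (physical_ram_gb - host_reserved_gb) * overcommit_ratio


# 64 GB host, 4 GB reserved for the hypervisor:
print(max_vm_memory_gb(64, 4))        # → 60.0  (no overcommit, no swap needed)
print(max_vm_memory_gb(64, 4, 1.5))   # → 90.0  (150% overcommit needs swap headroom)
```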

It would be great to have on the Engine:
- An upload option for ISO files
- A backup and restore option
- High availability for the engine: install the engine on two platforms (hardware?), then integrate them for synchronization

Jose

----- Original Message ----- From: "noc" <noc@nieuwland.nl> Cc: users@ovirt.org Sent: Friday, 6 September 2013 10:28:09 Subject: Re: [Users] so, what do you want next in oVirt? On 6-9-2013 10:12, Itamar Heim wrote:

A lot of work, but has any consideration been given to redesigning the web UI? Right now it's complex and heavy, takes a while to load, and too many features are hidden. I've been using oVirt for half a year and I'm still finding small useful things hidden here and there. In comparison, I like how OpenStack has a lovely interface which hides its thousands of complexities behind a clean front end. Just a thought. On Fri, Sep 6, 2013 at 8:43 PM, <suporte@logicworks.pt> wrote:
Could be great o have on the Engine: - An upload option for the ISO files - A backup and restore option - An high availability for the engine: install the engine on 2 platforms (hardware?), than integrate them for synchronization
Jose

On 09/06/2013 02:19 PM, Andrew Lau wrote:
A lot of work.. but has there been any consideration into redesigning the web UI? Right now it's so complex and heavy, takes a while to load and too many features are hidden! I've been using oVirt for half a year.. and I'm still finding small useful things hidden here and there.
Can you provide more details on issues you're seeing, and suggestions for a future approach?

On Fri, Sep 6, 2013 at 10:33 PM, Itamar Heim <iheim@redhat.com> wrote:
On 09/06/2013 02:19 PM, Andrew Lau wrote:
A lot of work.. but has there been any consideration into redesigning the web UI? Right now it's so complex and heavy, takes a while to load and too many features are hidden! I've been using oVirt for half a year.. and I'm still finding small useful things hidden here and there.
Can you provide more details on issues you're seeing, and suggestions for a future approach?
It's not really an issue, more of a suggestion. If you look at the Foreman UI, for example, everything is categorized, and if you want to modify X it's right there under the obvious menu option. With oVirt, the documentation isn't as readily available, and if you want to modify the network configuration, for example, it takes about ten mouse clicks to get where you want; sometimes it errors and you need to refresh the page.

As for the general feel, maybe it's just my setups, but the login process always takes between 30 and 90 seconds, probably loading all the JavaScript. A simple HTML + jQuery approach similar to OpenStack / Foreman would be a much cleaner experience. Whenever I've shown oVirt to other people, they've always mentioned how slow the UI feels, which clashes with the stereotype of being able to do things more efficiently on Linux than on Windows.

Obviously I'm looking at this purely from the Administration Portal point of view, and my suggestion isn't critical since it doesn't affect what matters most (VMs, features, etc.), but it would really bring oVirt to a new level visually and make it a little more appealing. Apologies for the poorly formatted ramble.

Hi Jose, the latter is available via the hosted engine, which is a highly available VM that is migrated / restarted on a different host if something goes wrong. ----- Original Message -----
From: suporte@logicworks.pt To: users@ovirt.org Sent: Friday, September 6, 2013 1:43:04 PM Subject: Re: [Users] so, what do you want next in oVirt?
Could be great o have on the Engine: - An upload option for the ISO files - A backup and restore option - An high availability for the engine: install the engine on 2 platforms (hardware?), than integrate them for synchronization
Jose
From: "noc" <noc@nieuwland.nl> Cc: users@ovirt.org Sent: Sexta-feira, 6 de Setembro de 2013 10:28:09 Subject: Re: [Users] so, what do you want next in oVirt?
On 6-9-2013 10:12, Itamar Heim wrote:
On 09/05/2013 10:30 AM, noc wrote:
On 08/21/2013 12:11 PM, Itamar Heim wrote:
On 08/21/2013 02:40 AM, Joop van de Wege wrote: > > What I would like to see in the ! next version is pxe boot of the > nodes. > Probably not easy to achieve because of dependency on dhcp.
Hi Joop,
can you please give a bit more information on the use case / how you envision this?
current thinking around bare metal provisioning of hosts is to extend the functionality around the foreman provider for this, but you may have other suggestions?
I think Joop means being able to add hosts (nodes) to a cluster by adding their MAC address to the DHCP list so they PXE-boot into ovirt-node and thus join the cluster. This would make it easy to add new physical nodes without any spinning disks or other local storage requirements.
we started adding foreman integration in 3.3: http://www.ovirt.org/Features/ForemanIntegration
adding ohad and oved for their thoughts on this.
I suppose this may not be easy with complex network connections (bonds on mgmt network, mgmt network on a tagged vlan, etc), but it should be possible if the management network interface is plain and physical.
/Simon
PS, Perhaps Joop can confirm this idea, we've talked about it IRL. _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
This isn't about provisioning with Foreman. It's about the compute nodes NOT having any spinning disks. The only way to start such a node is to PXE-boot it and then let it (re)connect with the engine. It will then be identified by the engine as either a new node or a reconnecting node, and it will get its configuration from the engine. For reference: that's how VirtualIron works. It has a management network, just like oVirt, and on it runs a TFTP and DHCP server. Nodes are plugged into the management network, without disks, and then PXE-booted, after which they appear in the web UI as new unconfigured nodes. You can then set various settings, and on reboot each node receives those settings because it is recognised by its MAC address. The advantage of this setup is that you can place a new server into a rack, cable it, power it on, and go back to your office, where you'll find the new node waiting to be configured. No messing around with CDs to install an OS, no being in the datacenter for hours on end; just in and out.
Yes, disks are cheap, but they break down, need maintenance, mean downtime, and in general cost more admin time than when you don't have them. (It's a shame to have a RAID1 of two 1 TB disks just to install an OS of less than 10 GB.)
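Joop's description boils down to a MAC-keyed DHCP/PXE setup on the management network. A minimal sketch, assuming dnsmasq and an ovirt-node kernel/initrd already extracted under the TFTP root; the MAC address, IP range, and paths below are illustrative placeholders, and the config is written to the current directory rather than /etc so it is safe to try:

```shell
#!/bin/sh
# Sketch: register one diskless node by MAC so it PXE-boots ovirt-node.
# All values (MAC, addresses, tftp-root) are illustrative placeholders.
NODE_MAC="52:54:00:12:34:56"
NODE_NAME="node01"

# dnsmasq fragment: serve TFTP, hand out a PXE bootloader, and pin the
# node's MAC to a fixed address so the engine can recognise it on reboot.
cat > dnsmasq-ovirt.conf <<EOF
enable-tftp
tftp-root=/srv/tftp
dhcp-range=192.168.100.50,192.168.100.99,12h
dhcp-boot=pxelinux.0
# one line per provisioned node: MAC -> fixed IP + hostname
dhcp-host=$NODE_MAC,192.168.100.51,$NODE_NAME
EOF

echo "wrote dnsmasq-ovirt.conf for $NODE_NAME ($NODE_MAC)"
```

Adding a node then amounts to appending one more dhcp-host line and reloading dnsmasq; this sketch covers only the boot side, not the engine-registration side discussed below.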
just wondering, how do they prevent a rogue node/guest from masquerading as such a host, getting access/secrets/VMs to be launched on such an untrusted node (they could easily report a different mac address if the layer 2 isn't hardened against that)?
They would need physical access to your rack, which of course is locked; a power down/up would trigger an alert, and a switch port down/up would trigger an alert, so you'd probably be notified that something not quite right is happening. I haven't gone through the source to see if there is more than just the MAC address check.
other than that, yes. we actually used to have this via the AutoApprovePatterns config option, which would have the engine approve a pending node as it registers (I admit I don't think anyone has used this in the last several years, and it may be totally broken by now).
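For context, engine options like the one mentioned above are set through the engine-config tool on the engine host; a hedged sketch (the pattern value is a placeholder, and as noted the option itself may be bitrotted, so check `engine-config -l` on your version first):

```shell
# Illustrative only: have the engine auto-approve a registering host whose
# address matches a pattern. Verify the option still exists on your version
# with `engine-config -l` before relying on it.
engine-config -s AutoApprovePatterns='node*.example.com'
service ovirt-engine restart   # restart so the engine picks up the change
```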
please note this doesn't solve the need for a disk, just the auto-registration part (if it still works). What I would like is to have the oVirt Node PXE-booting and getting its config from the engine, or auto-registering. I know there is a script which converts the ISO into a huge pxeboot kernel, but I don't know how the config part is solved, or whether it is.
@karli: If you run your cluster with Memory Optimization=None then you won't need swap. I have been doing that for years and haven't had a single problem attributed to it. I just would like to have the choice: PXE-boot the node and know that you don't have swap, or run with disks if you really need overprovisioning.
Regards,
Joop
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi Doron, But first you have to install the engine, before the VM. So, the idea is to make a backup and restore it to a VM?

On 09/09/2013 04:09 PM, suporte@logicworks.pt wrote:
Hi Doron,
But first you have to install the engine, before the VM. So, the idea is to make a backup and restore it to a VM?
There is a special flow to create the engine VM; there is a deep dive on this right now.

----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: suporte@logicworks.pt Cc: users@ovirt.org Sent: Monday, September 9, 2013 4:10:58 PM Subject: Re: [Users] so, what do you want next in oVirt?
On 09/09/2013 04:09 PM, suporte@logicworks.pt wrote:
Hi Doron,
But first you have to install the engine, before the VM. So, the idea is to make a backup and restore it to a VM?
there is a special flow to create the engine VM. there is a deep dive right now on this.
Hi Jose, I hope you had a chance to join the session. You can find the slides explaining the process here: http://www.ovirt.org/OVirt_3.3_release_notes#Deep_dives Doron

Hi Itamar, I would be happy to throw in an idea. I think oVirt is lacking the ability to assign permissions on multiple VMs at once. For example, if I want to grant permissions to a certain user on 30 VMs, I need to do it manually (or with some script). I think this feature is essential for an enterprise-aimed product. Also it would be really cool to have the ability to create groups within oVirt and not in the directory, and grant a group permissions on a VM instead of adding users one by one. For example, creating a group called Dev and giving the group permissions on all relevant VMs instead of adding each user one by one. Hope to see this feature in the next versions. Thanks,
Dotan Paz
----- Original Message ----- From: "Itamar Heim" <iheim@redhat.com> To: users@ovirt.org Sent: Wednesday, August 21, 2013 12:19:16 AM Subject: [Users] so, what do you want next in oVirt?
Bond/extend ov= irtmgmt with a second network for HA/increased<br> bandwidth[4]= <br>X Integrate scheduling of snapshots and VM export for backups in<br>&nb= sp; engine[5]<br>V Spice =E2=80=93 support Google Chrome - via mime b= ased launch<br><div><br></div><br>Other items mentioned in previous survey = which should be covered by now:<br>- Fix timeout when adding local host dur= ing all-in-one configuration<br>- Fix engine set-up when SELinux is disable= d<br>- Provide packages for el6 (CentOS, Red Hat Enterprise Linux)<br>- All= ow multiple VMs to be deployed from the same template at the same<br> = time<br>- ISO domains on local/GlusterS<br>- Show IP addresses in Vi= rtual Machines->Network Interfaces<br>- OpenStack Quantum support (now c= alled Neutron)<br>- noVNC support<br>- Support spice.html5 and websocket pr= oxy<br>- Add other guest OSes to list<br>- Port oVirt guest agent to Ubuntu= [6]<br>- SLA - Allow resource time-sharing<br>- Spice - Mac client (via mim= e based launch)<br>- Spice - port XPI plug-in to Windows (not sure this wil= l happen, but<br> mime based launch allows using firefox now)<b= r>- Spice - client for Ubuntu/Debian (should be covered via mime based<br>&= nbsp; launch)<br><div><br></div><br>[2] hosted engine is in active de= velopment, but not released yet.<br>[3] host update is supported, but not f= or general yum update.<br>[4] a lot of improvements were done in this space= , but i'm not sure if<br> they cover this exact use case= <br>[5] backup api is now being pushed to master, and orchestration of<br>&= nbsp; backups should probably happen via 3rd part backup vendo= rs?<br>[6] I'm not sure packaging exists yet, but ubuntu is covered for the= <br> basic functionality of the guest agent.<br>________= _______________________________________<br>Users mailing list<br>Users@ovir= t.org<br>http://lists.ovirt.org/mailman/listinfo/users<br></div><div><br></= div></div></body></html> ------=_Part_8332204_6931164.1377173682102--

----- Original Message -----
From: "Dotan Paz" <dpaz@redhat.com>
To: "Itamar Heim" <iheim@redhat.com>
Cc: users@ovirt.org
Sent: Thursday, August 22, 2013 3:14:42 PM
Subject: Re: [Users] so, what do you want next in oVirt?
Hi Itamar, I would be happy to throw in an idea.
I think oVirt is lacking the ability to assign permissions on multiple VMs at once. For example, if I want to grant permissions to a certain user on 30 VMs, I need to do it manually (or with some script). I think this feature is essential for an enterprise-oriented product. It would also be really useful to be able to create groups within oVirt, rather than in the directory, and grant a group permissions on a VM instead of adding users one by one. For example, create a group called Dev and give that group permissions on all relevant VMs instead of adding each user individually.
Hope to see this feature in the next versions. Thanks,
Dotan Paz
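The scripted workaround mentioned above can be sketched against the oVirt 3.x REST API, which exposes a permissions sub-collection per VM. This is a minimal sketch: the engine URL, credentials, user ID, role ID, and VM IDs are all placeholders, and in practice you would verify the engine's TLS certificate rather than rely on the default context.

```python
# Sketch: grant one user a role on many VMs through the oVirt REST API.
# Engine URL, credentials, and all IDs below are placeholders.
import base64
import urllib.request

ENGINE = "https://engine.example.com/api"

def permission_payload(user_id: str, role_id: str) -> str:
    """XML body for POST /api/vms/{vm_id}/permissions (oVirt 3.x style)."""
    return (f'<permission><role id="{role_id}"/>'
            f'<user id="{user_id}"/></permission>')

def grant_on_vms(vm_ids, user_id, role_id,
                 user="admin@internal", password="secret"):
    """POST the same permission to each VM in the list."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    body = permission_payload(user_id, role_id).encode()
    for vm_id in vm_ids:
        req = urllib.request.Request(
            f"{ENGINE}/vms/{vm_id}/permissions",
            data=body,
            headers={"Content-Type": "application/xml",
                     "Authorization": f"Basic {token}"},
        )
        urllib.request.urlopen(req)  # raises on HTTP errors
```

With a group feature in the engine, the same loop would run once against the group instead of once per user, which is exactly the pain point being described.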
1. Following the discussion on the users list this week about a clustered engine + load balancing: what about a feature in which each engine manages X data centers (let's call this a "site" - I took the term from past experience), but the web admin lets you view/manage data centers that belong to different "sites"?
2. Keystone authentication?
----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com>
To: users@ovirt.org
Sent: Wednesday, August 21, 2013 12:19:16 AM
Subject: [Users] so, what do you want next in oVirt?
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Thanks, Itamar
[1] from the top 12
V Allow disk resize
V Integrate Nagios/Zabbix monitoring - via a ui plugin
V Highly Available engine - via hosted engine[2]
V Open vSwitch integration - via neutron integration
X Allow cloning VMs without template
? Enable hypervisor upgrade/updates through engine[3]
V Allow engine on an oVirt hosted VM - via hosted engine[2]
V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
X Integrate v2v into engine
? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4]
X Integrate scheduling of snapshots and VM export for backups in engine[5]
V Spice – support Google Chrome - via mime based launch

Other items mentioned in previous survey which should be covered by now:
- Fix timeout when adding local host during all-in-one configuration
- Fix engine set-up when SELinux is disabled
- Provide packages for el6 (CentOS, Red Hat Enterprise Linux)
- Allow multiple VMs to be deployed from the same template at the same time
- ISO domains on local/GlusterFS
- Show IP addresses in Virtual Machines->Network Interfaces
- OpenStack Quantum support (now called Neutron)
- noVNC support
- Support spice.html5 and websocket proxy
- Add other guest OSes to list
- Port oVirt guest agent to Ubuntu[6]
- SLA - Allow resource time-sharing
- Spice - Mac client (via mime based launch)
- Spice - port XPI plug-in to Windows (not sure this will happen, but mime based launch allows using firefox now)
- Spice - client for Ubuntu/Debian (should be covered via mime based launch)

[2] hosted engine is in active development, but not released yet.
[3] host update is supported, but not for general yum update.
[4] a lot of improvements were done in this space, but i'm not sure if they cover this exact use case
[5] backup api is now being pushed to master, and orchestration of backups should probably happen via 3rd party backup vendors?
[6] I'm not sure packaging exists yet, but ubuntu is covered for the basic functionality of the guest agent.
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On 20.8.2013 23:19, Itamar Heim wrote:
Some time ago I was at an IBM KVM virtualization preview and they showed us a gold image feature, which I liked. It could be implemented in oVirt ;-)

more info: http://pic.dhe.ibm.com/infocenter/ibmfsb/v2r1/index.jsp?topic=%2Fcom.ibm.sbsolutions.doc%2FugGoldImageOverview.htm

Next nice feature could be importing an existing iSCSI domain. Possibility of direct use of hardware by a VM. And the absolutely fantastic feature would be to create clusters from Intel and AMD processors together!

Thank you ;-)

On 08/27/2013 06:27 PM, Jakub Bittner wrote:
Some time ago I was at an IBM KVM virtualization preview and they showed us a gold image feature, which I liked. It could be implemented in oVirt ;-)
more info: http://pic.dhe.ibm.com/infocenter/ibmfsb/v2r1/index.jsp?topic=%2Fcom.ibm.sbs...
Can you explain how you view this as different from ovirt templates?
Next nice feature could be importing an existing iSCSI domain.
importing any data storage domain i assume?
Possibility of direct use of HW by VM.
such as?
and the absolutely fantastic feature would be to create clusters from Intel and AMD processors together!
well, you can do that today if you want to, via a config change. The only thing is live migration won't work (you should probably use -cpu host to get best performance, since live migration won't be used anyway). In theory we could live migrate only between hosts of the same CPU vendor, but I'm not sure the use case is interesting enough to justify making cluster and scheduling more complex. Though you can do that part on your own with the new pluggable scheduler, or use -cpu host to get max performance if you don't care about live migration.
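The "do that part on your own with the new pluggable scheduler" idea can be illustrated with a tiny host filter. This is NOT the real ovirt-scheduler-proxy plugin interface, just a hypothetical sketch of the logic such a plugin would implement: in a mixed Intel/AMD cluster, keep only hosts whose CPU vendor matches the one the VM needs, so live migration stays within one vendor.

```python
# Hypothetical sketch of a scheduler-style host filter: restrict a VM's
# candidate hosts to one CPU vendor inside a mixed Intel/AMD cluster.

def filter_hosts_by_cpu_vendor(hosts, required_vendor):
    """hosts: list of dicts like {"name": ..., "cpu_vendor": "GenuineIntel"}."""
    return [h for h in hosts if h["cpu_vendor"] == required_vendor]

hosts = [
    {"name": "host1", "cpu_vendor": "GenuineIntel"},
    {"name": "host2", "cpu_vendor": "AuthenticAMD"},
    {"name": "host3", "cpu_vendor": "GenuineIntel"},
]

# A VM last started on an Intel host may only migrate to Intel hosts.
candidates = filter_hosts_by_cpu_vendor(hosts, "GenuineIntel")
```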
Thank you ;-)

On 27.8.2013 17:34, Itamar Heim wrote:
On 08/27/2013 06:27 PM, Jakub Bittner wrote:
Some time ago I was at an IBM KVM virtualization preview and they showed us a gold image feature, which I liked. It could be implemented in oVirt ;-)
more info: http://pic.dhe.ibm.com/infocenter/ibmfsb/v2r1/index.jsp?topic=%2Fcom.ibm.sbs...
Can you explain how you view this as different from ovirt templates?
Next nice feature could be importing an existing iSCSI domain.
importing any data storage domain i assume?

Yes.
Possibility of direct use of HW by VM.
such as?

Telephone modems, PCI Express cards, graphics cards.
and the absolutely fantastic feature would be to create clusters from Intel and AMD processors together!
well, you can do that today if you want to, via a config change. The only thing is live migration won't work (you should probably use -cpu host to get best performance, since live migration won't be used anyway). In theory we could live migrate only between hosts of the same CPU vendor, but I'm not sure the use case is interesting enough to justify making cluster and scheduling more complex. Though you can do that part on your own with the new pluggable scheduler, or use -cpu host to get max performance if you don't care about live migration.
From my point of view it is better to have slower CPU performance and the possibility to use all of our servers in one cluster. I would like to have live migration available from Intel to AMD. Is the problem only in the CPU instruction sets? If so, I could use only the common sets.

Another feature which I forgot: networking between VMs and traffic mirroring, both configurable from the web UI.
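The "common sets" idea above can be approximated by intersecting the CPU feature flags each host reports. A sketch follows; the flag strings are illustrative and deliberately short, while on a real host they would come from the `flags` line of /proc/cpuinfo on each machine.

```python
# Sketch: intersect CPU feature flags from two hosts to see what a
# lowest-common-denominator virtual CPU could expose.
# The flag lists below are illustrative, not full real flag sets.

def common_flags(flags_a: str, flags_b: str) -> set:
    """Each argument is a space-separated 'flags' line, /proc/cpuinfo style."""
    return set(flags_a.split()) & set(flags_b.split())

intel_flags = "fpu vme sse sse2 sse4_1 sse4_2 aes avx"
amd_flags = "fpu vme sse sse2 sse4a aes avx"

print(sorted(common_flags(intel_flags, amd_flags)))
# -> ['aes', 'avx', 'fpu', 'sse', 'sse2', 'vme']
```

Any guest CPU model restricted to the intersection would be runnable on both hosts, which is the lowest-common-denominator trade-off being discussed.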
Thank you ;-)

This: http://lists.ovirt.org/pipermail/engine-devel/2013-August/005364.html
and this: http://lists.ovirt.org/pipermail/users/2013-August/015713.html (freedom in mixing storage domain types, no SPF/master storage domain)

- DHC

On Tue, Aug 27, 2013 at 10:47 AM, Jakub Bittner <j.bittner@nbu.cz> wrote:

On 08/27/2013 09:48 PM, Dead Horse wrote:
This: http://lists.ovirt.org/pipermail/users/2013-August/015713.html (freedom in mixing storage domain types, no SPF/master storage domain)
going forward, the focus is to remove the concept of pool and master storage domain altogether.

Possibility of direct use of HW by VM.
such as?
Telephone modems, PCI Express cards, graphics cards.
+ USB devices; we have hardware license keys (dongles) for some software. In KVM/QEMU I can expose the license key directly to a VM.
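In plain KVM/QEMU via libvirt, exposing such a dongle is done with a USB hostdev entry in the domain XML, along these lines (a sketch; the vendor/product IDs are placeholders and would come from lsusb for the real device):

```xml
<!-- Attach one USB device (e.g. a license dongle) to a guest.
     Vendor/product IDs are placeholders; take the real ones from lsusb. -->
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x1234'/>
    <product id='0xbeef'/>
  </source>
</hostdev>
```

The feature request is essentially for the engine to generate this kind of device assignment from the web UI.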

On 08/27/2013 06:47 PM, Jakub Bittner wrote:
and the absolutely fantastic feature would be to create clusters from Intel and AMD processors together!
well, you can do that today if you want to, via a config change; the only thing is live migration won't work. (In theory we could live migrate only between hosts of the same CPU vendor, but I'm not sure the use case is interesting enough to make the cluster and scheduling logic more complex - though you could do that part on your own with the new pluggable scheduler.) Otherwise, use -cpu host to get maximum performance, since live migration won't be used anyway.
From my point of view it is better to have slower CPU performance and the possibility to use all of our servers in a cluster. I would like to have live migration available from Intel to AMD. Is the problem only in the CPU instruction sets? If so, I can use only the common sets.
the problem is, AFAIK, KVM doesn't really support it for live migration (other than a prototype several years ago).
Another feature which I forgot is network between VMs and mirroring traffic. Both configurable from WUI.
port mirroring is supported today?

Hi, I would like to see this existing feature request implemented: https://bugzilla.redhat.com/show_bug.cgi?id=884653 (Bug 884653 - [RFE] support single sign-on to user and admin portals). Thanks, Regards, Siggi On Tue, August 20, 2013 23:19, Itamar Heim wrote:
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Thanks, Itamar
[...]
[2] hosted engine is in active development, but not released yet. [3] host update is supported, but not for general yum update. [4] a lot of improvements were done in this space, but I'm not sure if they cover this exact use case. [5] backup api is now being pushed to master, and orchestration of backups should probably happen via 3rd party backup vendors? [6] I'm not sure packaging exists yet, but Ubuntu is covered for the basic functionality of the guest agent. _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users


Hi, On 09/02/2013 01:12 PM, Karli Sjöberg wrote:
Hi,
I would like to see a more advanced type of Power-Saving Cluster policy, where an empty Host gets put into a sleep mode, so that it draws as little power as possible and can be woken by the engine when VMs start to use more resources.
A typical scenario would be that all VMs are spread evenly over Hosts during day-time, when customers are actively using their VMs. But then during off-hours, VMs get clumped up onto maybe one or two Hosts, and the other Hosts can be put to sleep to save energy, and then woken up in the morning again.
I agree. The current power-saving policy doesn't do much other than consolidating VMs. A tunable by which X of Y idle hosts in the cluster could be powered off, keeping the remaining Y-X hosts on just as backup, would be very useful; I would love to see that in oVirt in the coming days. It would also help to have some kind of power metering flagged in the GUI: how much power each system is consuming, and how much the overall cluster is consuming. That would be a really handy number for assessing the power consumption of the systems. Regards, Deepthi
/Karli
tis 2013-08-20 klockan 17:19 -0400 skrev Itamar Heim:
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Thanks, Itamar
[...]
--
Med Vänliga Hälsningar ------------------------------------------------------------------------------- Karli Sjöberg Swedish University of Agricultural Sciences Box 7079 (Visiting Address Kronåsvägen 8) S-750 07 Uppsala, Sweden Phone: +46-(0)18-67 15 66 karli.sjoberg@slu.se
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

----- Original Message -----
| From: "Karli Sjöberg" <Karli.Sjoberg@slu.se>
| To: "Itamar Heim" <iheim@redhat.com>
| Cc: users@ovirt.org, "Jan-Olov Persson" <jan-olov.persson@slu.se>
| Sent: Monday, September 2, 2013 10:42:52 AM
| Subject: Re: [Users] so, what do you want next in oVirt?
|
| Hi,
|
| I would like to see a more advanced type of Power-Saving Cluster policy,
| where an empty Host gets put into a sleep mode, so that it draws as little
| power as possible and can be woken by the engine when VMs start to use
| more resources.
|
| A typical scenario would be that all VMs are spread evenly over Hosts
| during day-time, when customers are actively using their VMs. But then
| during off-hours, VMs get clumped up onto maybe one or two Hosts, and the
| other Hosts can be put to sleep to save energy, and then woken up in the
| morning again.
|
| /Karli
| [...]

Hi Karli,
this should be achievable for you in 3.3 with some minor work on your side; in oVirt 3.3 we introduce the new scheduler, which allows you to run your own code when doing VM placement and load balancing. You can write your own load-balancing logic replacing the built-in one, which will switch to power saving at a given time. Once a host has no running VMs, your balancing logic can switch it into the S3 state (sleep), and when needed, or at a given time, use wake-on-LAN to resume all suspended hosts.
If you want, we can guide you through it post the 3.3 release.
Doron
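The wake-on-LAN step mentioned above is easy to script: a WoL "magic packet" is just six 0xFF bytes followed by the target MAC repeated 16 times, broadcast over UDP. A minimal sketch (the MAC address is a placeholder):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: 6 bytes of 0xFF followed by the
    6-byte MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP (port 9, the discard port,
    is the conventional choice)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# send_wol("52:54:00:12:34:56")  # placeholder MAC of a sleeping host
```

A custom balancing module could call something like send_wol() when load rises and a suspended host needs to come back; the NIC and BIOS of the host must have WoL enabled for this to work.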

hi, please add support for Solaris, Ubuntu and other OSes. Installation is a little hard, especially in an environment without internet. regards, On Mon, Sep 2, 2013 at 7:43 PM, Doron Fediuck <dfediuck@redhat.com> wrote:
Hi Karli, this should be achievable for you in 3.3 with some minor work on your side; in oVirt 3.3 we introduce the new scheduler, which allows you to run your own code when doing VM placement and load balancing. [...]
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On 09/03/2013 10:53 AM, alireza sadeh seighalan wrote:
hi
please add support for Solaris, Ubuntu and other OSes.
we've added Ubuntu and SUSE as guest OSes in 3.3 - I can't tell you if they work or not. You can now easily add more OSes on your own as well: http://www.ovirt.org/OS_info. The guest agent for Ubuntu should be working.
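The OS_info mechanism linked above is driven by properties files dropped into the engine's osinfo configuration directory; a sketch of adding a custom guest OS entry (the file name, key names, and id value here are assumptions - check the OS_info wiki page for the exact schema):

```properties
# /etc/ovirt-engine/osinfo.conf.d/90-myos.properties  (hypothetical file name)
os.my_custom_os.id.value = 1001
os.my_custom_os.name.value = My Custom OS
os.my_custom_os.derivedFrom.value = other
```

After adding such a file and restarting the engine, the new OS should appear in the VM dialog's operating-system list.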
installation is a little hard, especially in an environment without internet.
can you please share more on the issues you faced? thanks, Itamar
regards,
On Mon, Sep 2, 2013 at 7:43 PM, Doron Fediuck <dfediuck@redhat.com> wrote:
Hi Karli, this should be achievable for you in 3.3 with some minor work on your side; in oVirt 3.3 we introduce the new scheduler, which allows you to run your own code when doing VM placement and load balancing. [...]

hi again,

the ubuntu agent should be added to the list, like vmware. about installation: i have to install the ovirt packages from a local repository, and there is a problem with this setup - when adding a host to the ovirt manager, you need to install packages that require internet access.

regards,

On Tue, Sep 3, 2013 at 1:26 PM, Itamar Heim <iheim@redhat.com> wrote:
On 09/03/2013 10:53 AM, alireza sadeh seighalan wrote:
> hi
>
> please add support for solaris, ubuntu and other OS's.
we've added ubuntu and suse as guest OS's in 3.3 - i can't tell you if they work or not. you can now easily add more OSs on your own as well: http://www.ovirt.org/OS_info
guest agent for ubuntu should be working.
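On the "add more OSs on your own" point: in 3.3 this is done by dropping a properties file into the engine's osinfo configuration directory. A sketch from memory - the directory, the key names, and a free numeric id should all be double-checked against http://www.ovirt.org/OS_info (the OS name and id below are invented for the example), and as far as I recall the engine service needs a restart to pick the file up:

```
# /etc/ovirt-engine/osinfo.conf.d/99-mint.properties  (hypothetical example)
os.linux_mint_15.id.value = 1300
os.linux_mint_15.name.value = Linux Mint 15
os.linux_mint_15.derivedFrom.value = ubuntu_12_04
```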
> installation is a little hard, especially in an environment without internet access.
can you please share more on the issues you faced?
thanks, Itamar
> regards,
On Mon, Sep 2, 2013 at 7:43 PM, Doron Fediuck <dfediuck@redhat.com> wrote:
----- Original Message -----
| From: "Karli Sjöberg" <Karli.Sjoberg@slu.se>
| To: "Itamar Heim" <iheim@redhat.com>
| Cc: users@ovirt.org, "Jan-Olov Persson" <jan-olov.persson@slu.se>
| Sent: Monday, September 2, 2013 10:42:52 AM
| Subject: Re: [Users] so, what do you want next in oVirt?
|
| Hi,
|
| I would like to see a more advanced type of Power-Saving cluster policy, where an empty host gets put into a sleep mode, so that it draws as little power as possible and can be woken by the engine when VMs start to use more resources.
|
| A typical scenario would be that all VMs are spread evenly over the hosts during day-time, when customers are actively using their VMs. But then during off-hours, the VMs get consolidated onto maybe one or two hosts, and the other hosts can be put to sleep to save energy and woken up again in the morning.
|
| /Karli
Hi Karli, this should be achievable for you in 3.3 with some minor work on your side. In oVirt 3.3 we introduced the new scheduler, which allows you to run your own code when doing VM placement and load balancing.
You can write your own load-balancing logic replacing the built-in one, which switches to power saving at a given time. Once a host has no running VMs, your balancing logic can switch it into the S3 (sleep) state, and, when needed or at a given time, use wake-on-LAN to resume all suspended hosts.
If you want we can guide you through it post 3.3 release.
Doron
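Doron's consolidate-then-sleep idea can be sketched in a few lines of Python. This is only an illustration of the policy logic, not the actual oVirt 3.3 scheduler plugin interface (check the oVirt scheduling documentation for the real API); the Host fields, the off_hours flag, and the host names are all made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    running_vms: int
    asleep: bool = False  # host currently suspended to S3

def balance(hosts, off_hours):
    """One balancing pass of the hypothetical power-saving policy.

    During off-hours, hosts with no running VMs are candidates for S3;
    during working hours, every sleeping host gets wake-on-LAN.
    Returns (to_sleep, to_wake) as lists of host names.
    """
    if off_hours:
        to_sleep = [h.name for h in hosts
                    if not h.asleep and h.running_vms == 0]
        return to_sleep, []
    to_wake = [h.name for h in hosts if h.asleep]
    return [], to_wake

hosts = [Host("kvm1", 4), Host("kvm2", 0), Host("kvm3", 0, asleep=True)]
print(balance(hosts, off_hours=True))   # → (['kvm2'], [])
print(balance(hosts, off_hours=False))  # → ([], ['kvm3'])
```

In a real plugin the two returned lists would drive migrations/S3 via the engine rather than just being printed.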

On 09/03/2013 12:02 PM, alireza sadeh seighalan wrote:
> hi again
>
> the ubuntu agent should be added to the list, like vmware.
added in which list?
> about installation: i have to install the ovirt packages from a local repository, and there is a problem with this setup - when adding a host to the ovirt manager, you need to install packages that require internet access.
why do you need internet? you just need the host to be configured with a repo containing the packages, which can be a local one as well.
> regards,
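To illustrate Itamar's point about a local repo: the host only needs a yum .repo entry whose baseurl points at a local mirror - a directory populated with the oVirt RPMs and indexed once with `createrepo`. A minimal sketch; the mirror path and repo id are hypothetical, and in production the file would go into /etc/yum.repos.d/ rather than the current directory.

```shell
# Sketch: generate a yum .repo file pointing at a local package mirror.
# The mirror directory would hold the RPMs plus repodata/ created by
# running `createrepo /var/local/ovirt-mirror` once on the mirror box.
REPO_DIR=/var/local/ovirt-mirror   # hypothetical mirror path

# Written to the current directory so the example runs unprivileged;
# copy to /etc/yum.repos.d/ovirt-local.repo on the host for real use.
cat > ovirt-local.repo <<EOF
[ovirt-local]
name=oVirt packages (local mirror)
baseurl=file://$REPO_DIR
enabled=1
gpgcheck=0
EOF

echo "wrote ovirt-local.repo"
```

With this in place, host deployment pulls everything from the mirror and no internet access is needed.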

hi,

ubuntu should be listed in the agent list: the operating system drop-down menu when you want to create VMs. please don't forget other OS's.

regards

On Tue, Sep 3, 2013 at 4:07 PM, Itamar Heim <iheim@redhat.com> wrote:

On 09/03/2013 03:54 PM, alireza sadeh seighalan wrote:
> hi
>
> ubuntu should be listed in the agent list: the operating system drop-down menu when you want to create VMs. please don't forget other OS's. regards
sorry, I'm still not sure I understand. we added ubuntu and sles to the list of OSs in 3.3. can you please elaborate on what is missing? thanks, Itamar

hi,

in ovirt 3.2 ubuntu isn't in the list. is 3.3 released? can i download it?

regards

On Tue, Sep 3, 2013 at 5:25 PM, Itamar Heim <iheim@redhat.com> wrote:
On 09/03/2013 03:54 PM, alireza sadeh seighalan wrote:
hi
ubuntu should list in ther agent list. when you want to create vm's in the operating list drop down menu. please dont forget other OS's. regards
sorry, I'm still not sure i understand. we added ubuntu and sles to list of OSs in 3.3. can you please elaborate on what is missing?
thanks, Itamar
On Tue, Sep 3, 2013 at 4:07 PM, Itamar Heim <iheim@redhat.com <mailto:iheim@redhat.com>> wrote:
On 09/03/2013 12:02 PM, alireza sadeh seighalan wrote:
hi again
the ubuntu agent should be added to the list, like in vmware.
added in which list?
> about installation
i have to install the ovirt packages from a local repository, and there is a problem in this situation: when adding a host to the ovirt manager, you need to install packages that require internet access.
why do you need internet? you just need the host to be configured with a repo containing the packages, which can be local as well?
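As a concrete illustration of pointing a host at a local repo (the path, repo id, and hostnames below are made up for the example), a plain yum `.repo` file on the host is enough — no internet access needed as long as the mirror holds the required packages:

```ini
# /etc/yum.repos.d/ovirt-local.repo -- hypothetical local mirror definition
[ovirt-local]
name=Local oVirt package mirror
# a file path works; an internal web server such as
# http://mirror.example.lan/ovirt would work the same way
baseurl=file:///srv/mirrors/ovirt
enabled=1
gpgcheck=0
```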
regards,
On Tue, Sep 3, 2013 at 1:26 PM, Itamar Heim <iheim@redhat.com> wrote:
On 09/03/2013 10:53 AM, alireza sadeh seighalan wrote:
hi
please add support for solaris, ubuntu and other OSes.
we've added in 3.3 ubuntu and suse as guest os's - i can't tell you if they work or not. you can now easily add on your own more OSs as well: http://www.ovirt.org/OS_info
guest agent for ubuntu should be working.
> installation is a little hard, especially in an environment without internet.
can you please share more on the issues you faced?
thanks, Itamar
regards,
On Mon, Sep 2, 2013 at 7:43 PM, Doron Fediuck <dfediuck@redhat.com> wrote:
----- Original Message -----
| From: "Karli Sjöberg" <Karli.Sjoberg@slu.se>
| To: "Itamar Heim" <iheim@redhat.com>
| Cc: users@ovirt.org, "Jan-Olov Persson" <jan-olov.persson@slu.se>
| Sent: Monday, September 2, 2013 10:42:52 AM
| Subject: Re: [Users] so, what do you want next in oVirt?
|
| Hi,
|
| I would like to see a more advanced type of Power-Saving Cluster policy,
| where an empty Host gets put into a sleep mode, so that it draws as little
| power as possible and can be woken by the engine when VMs start to use more
| resources.
|
| A typical scenario would be that all VMs are spread evenly over Hosts during
| day-time, when customers are actively using their VMs. But then during
| off-hours, VMs get clumped onto maybe one or two Hosts, and the other
| Hosts can be put to sleep to save energy, and then woken up in the morning
| again.
|
| /Karli
Hi Karli, this should be achievable for you in 3.3 with some minor work on your side; in oVirt 3.3 we introduce the new scheduler, which allows you to run your own code when doing VM placement and load balancing.
You can write your own load-balancing logic replacing the built-in one, which switches to power saving at a given time. Once a host has no running VMs, your balancing logic can move it into S3 state (sleep), and when needed, or at a given time, use wake-on-lan to resume all suspended hosts.
If you want we can guide you through it post 3.3 release.
Doron
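One building block such a custom balancing module would need is the wake-on-lan step Doron mentions. The standard magic packet is six 0xFF bytes followed by the target MAC repeated sixteen times, sent as a UDP broadcast; a minimal sketch (the scheduler-plugin integration itself is not shown here, since its exact API is not part of this thread):

```python
import socket

def build_magic_packet(mac):
    """Build the standard wake-on-lan magic packet for the given MAC address."""
    # Accept "aa:bb:cc:dd:ee:ff" or "aa-bb-cc-dd-ee-ff" and reduce to 6 raw bytes.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC, got %r" % mac)
    # 6 x 0xFF synchronization bytes, then the MAC repeated 16 times -> 102 bytes.
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet; the sleeping host's NIC wakes the machine."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))
```

The balancing logic would call `send_wol()` with the MAC of a suspended host when load crosses its wake-up threshold.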
| On Tue 2013-08-20 at 17:19 -0400, Itamar Heim wrote:
|
| earlier in the year we did a survey for feature requests / improvements / etc.
|
| since a lot of things were added, and priorities usually change, I'd
| like to ask again for "what do you need the most from oVirt / what are
| your pain points" next?
|
| below[1] I've listed my understanding of what already went in from
| previous survey requests (to various degrees of coverage).
|
| Thanks,
| Itamar
|
| [1] from the top 12
| V Allow disk resize
| V Integrate Nagios/Zabbix monitoring - via a ui plugin
| V Highly Available engine - via hosted engine[2]
| V Open vSwitch integration - via neutron integration
| X Allow cloning VMs without template
| ? Enable hypervisor upgrade/updates through engine[3]
| V Allow engine on an oVirt hosted VM - via hosted engine[2]
| V Enable guest configuration (root password, SSH keys, network) via
|   guest agent in engine - via cloud-init
| X Integrate v2v into engine
| ? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4]
| X Integrate scheduling of snapshots and VM export for backups in engine[5]
| V Spice – support Google Chrome - via mime based launch
|
| Other items mentioned in previous survey which should be covered by now:
| - Fix timeout when adding local host during all-in-one configuration
| - Fix engine set-up when SELinux is disabled
| - Provide packages for el6 (CentOS, Red Hat Enterprise Linux)
| - Allow multiple VMs to be deployed from the same template at the same time
| - ISO domains on local/GlusterFS
| - Show IP addresses in Virtual Machines->Network Interfaces
| - OpenStack Quantum support (now called Neutron)
| - noVNC support
| - Support spice.html5 and websocket proxy
| - Add other guest OSes to list
| - Port oVirt guest agent to Ubuntu[6]
| - SLA - Allow resource time-sharing
| - Spice - Mac client (via mime based launch)
| - Spice - port XPI plug-in to Windows (not sure this will happen, but
|   mime based launch allows using firefox now)
| - Spice - client for Ubuntu/Debian (should be covered via mime based launch)
|
| [2] hosted engine is in active development, but not released yet.
| [3] host update is supported, but not for general yum update.
| [4] a lot of improvements were done in this space, but i'm not sure if
|   they cover this exact use case
| [5] backup api is now being pushed to master, and orchestration of
|   backups should probably happen via 3rd party backup vendors?
| [6] I'm not sure packaging exists yet, but ubuntu is covered for the
|   basic functionality of the guest agent.

On 09/03/2013 03:58 PM, alireza sadeh seighalan wrote:
hi
in ovirt 3.2 there isn't ubuntu in the list. is 3.3 released? can i download it?
go/no-go on 3.3 release is happening right now...
regards

On 3.9.2013 15:50, Itamar Heim wrote:
On 09/03/2013 03:58 PM, alireza sadeh seighalan wrote:
hi
in ovirt 3.2 there isn't ubuntu in the list. is 3.3 released? can i download it?
go/no-go on 3.3 release is happening right now...
regards
Hello,

I don't have Ubuntu in the operating system picker in the new-VM creation dialog. I am using oVirt 3.3 RC. Someone wrote that there is a working oVirt guest agent for Ubuntu, but I cannot find any information about it. Is there a .deb package somewhere, or a howto about deploying it? I would also suggest another feature (maybe it is already covered by external providers): network NAT for VMs (with built-in DHCP). Thank you.

On 09/03/2013 05:48 PM, Jakub Bittner wrote:
Dne 3.9.2013 15:50, Itamar Heim napsal(a):
On 09/03/2013 03:58 PM, alireza sadeh seighalan wrote:
hi
in oVirt 3.2 there isn't Ubuntu in the list. Is 3.3 released? Can I download it?
go/no-go on 3.3 release is happening right now...
regards
On Tue, Sep 3, 2013 at 5:25 PM, Itamar Heim <iheim@redhat.com <mailto:iheim@redhat.com>> wrote:
On 09/03/2013 03:54 PM, alireza sadeh seighalan wrote:
hi
Ubuntu should be listed in the agent list, in the operating system drop-down menu when you want to create VMs. Please don't forget other OSes. regards
sorry, I'm still not sure i understand. we added ubuntu and sles to the list of OSes in 3.3. can you please elaborate on what is missing?
thanks, Itamar
On Tue, Sep 3, 2013 at 4:07 PM, Itamar Heim <iheim@redhat.com <mailto:iheim@redhat.com> <mailto:iheim@redhat.com <mailto:iheim@redhat.com>>> wrote:
On 09/03/2013 12:02 PM, alireza sadeh seighalan wrote:
hi again
the Ubuntu agent should be added to the list, like VMware's.
added in which list?
> about installation
I have to install the oVirt packages from a local repository, and there is a problem with this setup: when adding a host to the oVirt manager, you need to install packages that require internet access.
why do you need internet? you just need the host to be configured with a repo containing the packages, which can be local as well?
regards,
On Tue, Sep 3, 2013 at 1:26 PM, Itamar Heim <iheim@redhat.com> wrote:
On 09/03/2013 10:53 AM, alireza sadeh seighalan wrote:
hi
please add support for Solaris, Ubuntu and other OSes.
in 3.3 we've added Ubuntu and SUSE as guest OSes - i can't tell you if they work or not. you can now easily add more OSes on your own as well: http://www.ovirt.org/OS_info
guest agent for ubuntu should be working.
> installation is a little hard, especially in an environment without internet.
can you please share more on the issues you faced?
thanks, Itamar
regards,
On Mon, Sep 2, 2013 at 7:43 PM, Doron Fediuck <dfediuck@redhat.com> wrote:
----- Original Message -----
| From: "Karli Sjöberg" <Karli.Sjoberg@slu.se>
| To: "Itamar Heim" <iheim@redhat.com>
| Cc: users@ovirt.org, "Jan-Olov Persson" <jan-olov.persson@slu.se>
| Sent: Monday, September 2, 2013 10:42:52 AM
| Subject: Re: [Users] so, what do you want next in oVirt?
|
| Hi,
|
| I would like to see a more advanced type of Power-Saving Cluster policy, where an empty Host gets put into a sleep-mode, so that it draws as little power as possible and can be awoken by the engine when VMs start to use more resources.
|
| A typical scenario would be that all VMs are spread evenly over Hosts during day-time, when customers are actively using their VMs. But then during off-hours, VMs get clumped up into maybe one or two Hosts, and the other Hosts can be put to sleep to save energy, and then woken up in the morning again.
|
| /Karli
Hi Karli, this should be achievable for you in 3.3 with some minor work on your side: in oVirt 3.3 we introduce the new scheduler, which allows you to run your own code when doing VM placement and load balancing.
You can write your own load-balancing logic, replacing the built-in one, which switches to power saving at a given time. Once a host has no running VMs, your balancing logic can switch it into the S3 state (sleep), and when needed, or at a given time, use wake-on-LAN to resume all suspended hosts.
If you want we can guide you through it post 3.3 release.
Doron
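Doron's description can be sketched roughly in code. The exact plugin interface of the oVirt 3.3 external scheduler is not shown in this thread, so the `Host` class and the `balance()` signature below are illustrative assumptions, not the real API; only the time-window consolidation logic is the point.

```python
from datetime import time

# Hypothetical host record -- the real oVirt scheduler API differs.
class Host:
    def __init__(self, name, vm_count, asleep=False):
        self.name = name
        self.vm_count = vm_count
        self.asleep = asleep

OFF_HOURS_START = time(22, 0)   # start consolidating after 22:00
OFF_HOURS_END = time(6, 0)      # wake everything by 06:00

def in_off_hours(now):
    # The off-hours window wraps around midnight.
    return now >= OFF_HOURS_START or now < OFF_HOURS_END

def balance(hosts, now):
    """Return (hosts_to_sleep, hosts_to_wake) for the current time.

    During off-hours, any awake host with no running VMs is a
    candidate for S3 suspend; during working hours, every sleeping
    host is resumed (e.g. via wake-on-LAN).
    """
    if in_off_hours(now):
        to_sleep = [h for h in hosts if not h.asleep and h.vm_count == 0]
        return to_sleep, []
    to_wake = [h for h in hosts if h.asleep]
    return [], to_wake
```

A real balance module would also have to migrate VMs off lightly loaded hosts before they become empty; that part is left out here.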
| On Tue, 2013-08-20 at 17:19 -0400, Itamar Heim wrote:
|
| earlier in the year we did a survey for feature requests / improvements / etc.
|
| since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
|
| below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
|
| Thanks,
| Itamar
|
| [1] from the top 12
| V Allow disk resize
| V Integrate Nagios/Zabbix monitoring - via a ui plugin
| V Highly Available engine - via hosted engine[2]
| V Open vSwitch integration - via neutron integration
| X Allow cloning VMs without template
| ? Enable hypervisor upgrade/updates through engine[3]
| V Allow engine on an oVirt hosted VM - via hosted engine[2]
| V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
| X Integrate v2v into engine
| ? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4]
| X Integrate scheduling of snapshots and VM export for backups in engine[5]
| V Spice – support Google Chrome - via mime based launch
|
| Other items mentioned in previous survey which should be covered by now:
| - Fix timeout when adding local host during all-in-one configuration
| - Fix engine set-up when SELinux is disabled
| - Provide packages for el6 (CentOS, Red Hat Enterprise Linux)
| - Allow multiple VMs to be deployed from the same template at the same time
| - ISO domains on local/GlusterS
| - Show IP addresses in Virtual Machines->Network Interfaces
| - OpenStack Quantum support (now called Neutron)
| - noVNC support
| - Support spice.html5 and websocket proxy
| - Add other guest OSes to list
| - Port oVirt guest agent to Ubuntu[6]
| - SLA - Allow resource time-sharing
| - Spice - Mac client (via mime based launch)
| - Spice - port XPI plug-in to Windows (not sure this will happen, but mime based launch allows using firefox now)
| - Spice - client for Ubuntu/Debian (should be covered via mime based launch)
|
| [2] hosted engine is in active development, but not released yet.
| [3] host update is supported, but not for general yum update.
| [4] a lot of improvements were done in this space, but i'm not sure if they cover this exact use case
| [5] backup api is now being pushed to master, and orchestration of backups should probably happen via 3rd-party backup vendors?
| [6] I'm not sure packaging exists yet, but ubuntu is covered for the basic functionality of the guest agent.
Hello,

I don't have Ubuntu in the operating system picker in the new-VM creation dialog. I am using oVirt 3.3 RC.
Roy - didn't this one make 3.3? I suggest backporting it to 3.3.1 if it didn't and if possible: http://gerrit.ovirt.org/#/c/16680/
Someone wrote that there is working ovirt-agent for Ubuntu, but I can not find any info about it. Is there a deb package somewhere or howto about deploying it?
not sure if anyone packed the .deb, but the guest agent is supposed to be there. Vinzenz?
I would suggest another feature (maybe it is covered by external providers): network NAT for VMs (with built-in DHCP).
Thank you.

On Sep 3, 2013, at 14:58 , alireza sadeh seighalan <seighalani@gmail.com> wrote:
hi
in oVirt 3.2 there isn't Ubuntu in the list. Is 3.3 released? Can I download it? regards
On Tue, Sep 3, 2013 at 5:25 PM, Itamar Heim <iheim@redhat.com> wrote: On 09/03/2013 03:54 PM, alireza sadeh seighalan wrote: hi
Ubuntu should be listed in the agent list, in the operating system drop-down menu when you want to create VMs. Please don't forget other OSes. regards
sorry, I'm still not sure i understand. we added ubuntu and sles to the list of OSes in 3.3. can you please elaborate on what is missing?
Ubuntu and SLES in the list didn't make the 3.3 branch-off. It should be easy enough to backport to 3.3.1, as well as the guest agent .deb package. The support is there, but not in the 3.3 RC build. Thanks, michal
thanks, Itamar

Hi everyone, On 08/20/2013 11:19 PM, Itamar Heim wrote:
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Just to follow on from Itamar's request - I would say that this is also your opportunity: if there are things which you would like to see included in oVirt and are working on, then this is a great opportunity to ensure they're on the radar for the next release or two.

Cheers, Dave.

--
Dave Neary - Community Action and Impact
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13

On 08/20/2013 11:19 PM, Itamar Heim wrote:
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
I'd like to see some way to ensure that redundant groups of VMs are running on physically different hosts. E.g. if I have 2 or 3 DNS servers running in VMs, I'd like to have them running on different hosts if hosts are available; I'd like these VMs to "repel" each other w.r.t. the physical running space. I imagine sometimes one could want the opposite as well: that two VMs which depend on each other over the network run as close together as possible (i.e. attract each other). I'm sure this is all scriptable, but it seems to fit the management part of oVirt very well.

Cheers
Simon
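Simon's "repel"/"attract" idea maps naturally onto a scheduling filter. oVirt's actual affinity support is not described in this thread, so the sketch below is a hypothetical, standalone illustration: given a map of which affinity groups already run on which host, it keeps only the hosts that satisfy the constraint.

```python
def filter_hosts(hosts, vm_group, placements, mode="repel"):
    """Filter candidate hosts for a VM belonging to an affinity group.

    placements maps host -> set of affinity-group names already
    running there. 'repel' (anti-affinity) drops hosts that already
    run a VM from the same group; 'attract' keeps only such hosts,
    falling back to all hosts if the group isn't running anywhere yet.
    """
    same = [h for h in hosts if vm_group in placements.get(h, set())]
    if mode == "repel":
        return [h for h in hosts if h not in same]
    return same if same else list(hosts)
```

For example, with two of three hosts already running a "dns" VM, a repel filter leaves only the third host as a placement candidate; the fallback in "attract" mode avoids making the first VM of a group unschedulable.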

Hi, Thanks for starting this thread again. On Tue, 2013-08-20 at 17:19 -0400, Itamar Heim wrote:
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Thanks, Itamar
[1] from the top 12 V Allow disk resize
Great!
V Integrate Nagios/Zabbix monitoring - via a ui plugin
Zabbix isn't working yet with my UI plugin (is there maybe another one which integrates Zabbix?) - but it will be available in a future release. Btw, there will be a new version of the monitoring UI plugin, with new features, available at the end of this week...
V Highly Available engine - via hosted engine[2] V Open vSwitch integration - via neutron integration X Allow cloning VMs without template
This is definitely one feature I would like to see.
? Enable hypervisor upgrade/updates through engine[3]
The next feature I would like to request.
V Allow engine on an oVirt hosted VM - via hosted engine[2] V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
Cool. Can this be done on oVirt-node as well or only on CentOS/RHEL/Fedora hosts?
X Integrate v2v into engine
Vote for this again :)
? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4] X Integrate scheduling of snapshots and VM export for backups in engine[5] V Spice – support Google Chrome - via mime based launch
At the moment spice support is really a pain (especially on Windows as the SpiceX.cab is missing). Great to hear that spice support has improved and Chrome can be used as well!
Other items mentioned in previous survey which should be covered by now: - Fix timeout when adding local host during all-in-one configuration - Fix engine set-up when SELinux is disabled - Provide packages for el6 (CentOS, Red Hat Enterprise Linux) - Allow multiple VMs to be deployed from the same template at the same time - ISO domains on local/GlusterS
ISO and export domains on FC/iSCSI would be good to have as well.
- Show IP addresses in Virtual Machines->Network Interfaces - OpenStack Quantum support (now called Neutron) - noVNC support - Support spice.html5 and websocket proxy - Add other guest OSes to list
Good to hear that other Linux distributions will be added as well.
- Port oVirt guest agent to Ubuntu[6] - SLA - Allow resource time-sharing - Spice - Mac client (via mime based launch) - Spice - port XPI plug-in to Windows (not sure this will happen, but mime based launch allows using firefox now) - Spice - client for Ubuntu/Debian (should be covered via mime based launch)
Additional features I would like to see are:
- oVirt guest agent for Ubuntu*/openSUSE/SLES/Debian
  * Ubuntu is already available thanks to Zhou Zheng Sheng
- Resource pools: at the moment only quotas are available (which is great, btw), but in some cases it's necessary to implement resource pools, e.g. to limit CPU, memory and network for a group of test VMs, but give full resources to production VMs. Maybe this could be done with cgroups? So e.g. I don't want to give a compile machine a full CPU core, but only 1 core at 500 MHz, or limit the bandwidth to e.g. 10 Mbit/s.
- Upload ISOs to the ISO domain from within the GUI
- Use an existing share for the ISO domain: when creating an ISO domain, oVirt creates its own directory structure with IDs. It would be nice if an existing share could be used (e.g. an ISO share on an NFS server which is used by other services, too) without creating the structure with IDs. I know that the IDs are needed internally, but I think it should be possible to reuse an existing share.

Last but not least I want to thank all contributors of this project for their great work so far!

Regards,
René
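René's cgroups idea can be made concrete. The "500 MHz of a core" request corresponds to the kernel's CFS bandwidth control (quota microseconds of CPU per period); the cgroup path below assumes the cgroup-v1 `cpu` controller layout, and a real oVirt integration would apply such limits via libvirt rather than writing the files directly. This is a sketch of the arithmetic only.

```python
# CFS bandwidth control grants quota/period microseconds of CPU time.
CFS_PERIOD_US = 100_000  # the common default cfs_period_us

def cfs_quota_us(limit_mhz, core_mhz, period_us=CFS_PERIOD_US):
    """CPU quota giving a group `limit_mhz` worth of a `core_mhz` core.

    E.g. 500 MHz of a 2000 MHz core is 25% of a core, i.e. 25000 us
    of CPU time per 100000 us period.
    """
    if limit_mhz >= core_mhz:
        return -1  # cgroup convention: -1 means unlimited
    return int(period_us * limit_mhz / core_mhz)

def cgroup_cpu_config(group, limit_mhz, core_mhz):
    # Hypothetical cgroup-v1 paths for illustration.
    base = f"/sys/fs/cgroup/cpu/{group}"
    return {
        f"{base}/cpu.cfs_period_us": CFS_PERIOD_US,
        f"{base}/cpu.cfs_quota_us": cfs_quota_us(limit_mhz, core_mhz),
    }
```

The network half of the request (e.g. 10 Mbit/s) would go through traffic shaping (`tc`) or the `net_cls` controller instead, which is a separate mechanism.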
[2] hosted engine is in active development, but not released yet.
[3] host update is supported, but not for general yum update.
[4] a lot of improvements were done in this space, but i'm not sure if they cover this exact use case
[5] backup api is now being pushed to master, and orchestration of backups should probably happen via 3rd-party backup vendors?
[6] I'm not sure packaging exists yet, but ubuntu is covered for the basic functionality of the guest agent.

On 09/05/2013 11:31 AM, René Koch (ovido) wrote:
V Enable guest configuration (root password, SSH keys, network) via
guest agent in engine - via cloud-init
Cool. Can this be done on oVirt-node as well or only on CentOS/RHEL/Fedora hosts?
I'll compile the rest later - just on this specific question - it should work on oVirt-node as well. please report a bug if it doesn't. thanks, Itamar
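For readers new to the cloud-init feature discussed here: the engine ultimately feeds the guest a `#cloud-config` user-data document. The engine builds this for you from the dialog fields; the snippet below just sketches what an equivalent document looks like built by hand. The `chpasswd` and `ssh_authorized_keys` keys are standard cloud-init directives; the helper function and its values are placeholders, not oVirt code.

```python
def build_user_data(hostname, root_password, ssh_keys):
    """Render a minimal #cloud-config document as cloud-init expects.

    chpasswd sets passwords from 'user:password' lines;
    ssh_authorized_keys installs keys for the default user.
    """
    lines = [
        "#cloud-config",
        f"hostname: {hostname}",
        "chpasswd:",
        "  list: |",
        f"    root:{root_password}",
        "  expire: false",
        "ssh_authorized_keys:",
    ]
    lines += [f"  - {key}" for key in ssh_keys]
    return "\n".join(lines) + "\n"
```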

----- Original Message -----
From: "René Koch (ovido)" <r.koch@ovido.at> To: "Itamar Heim" <iheim@redhat.com> Cc: "ovirt-users" <users@ovirt.org> Sent: Thursday, September 5, 2013 11:31:57 AM Subject: Re: [Users] so, what do you want next in oVirt?
Hi,
Thanks for starting this thread again.
On Tue, 2013-08-20 at 17:19 -0400, Itamar Heim wrote:
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Thanks, Itamar
[1] from the top 12 V Allow disk resize
Great!
V Integrate Nagios/Zabbix monitoring - via a ui plugin
Zabbix isn't working yet with my ui plugin (is there maybe another one which integrates Zabbix?) - but it will be available in a future release. Btw, a new version of the monitoring ui plugin with new features will be out at the end of this week...
V Highly Available engine - via hosted engine[2] V Open vSwitch integration - via neutron integration X Allow cloning VMs without template
This is definitely one feature I would like to see.
? Enable hypervisor upgrade/updates through engine[3]
The next feature I would like to request.
V Allow engine on an oVirt hosted VM - via hosted engine[2] V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
Cool. Can this be done on oVirt-node as well or only on CentOS/RHEL/Fedora hosts?
X Integrate v2v into engine
Vote for this again :)
? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4] X Integrate scheduling of snapshots and VM export for backups in engine[5] V Spice – support Google Chrome - via mime based launch
At the moment spice support is really a pain (especially on Windows as the SpiceX.cab is missing). Great to hear that spice support has improved and Chrome can be used as well!
Other items mentioned in previous survey which should be covered by now: - Fix timeout when adding local host during all-in-one configuration - Fix engine set-up when SELinux is disabled - Provide packages for el6 (CentOS, Red Hat Enterprise Linux) - Allow multiple VMs to be deployed from the same template at the same time - ISO domains on local/GlusterS
ISO and export domains on FC/iSCSI would be good to have as well.
- Show IP addresses in Virtual Machines->Network Interfaces - OpenStack Quantum support (now called Neutron) - noVNC support - Support spice.html5 and websocket proxy - Add other guest OSes to list
Good to hear that other Linux distributions will be added as well.
- Port oVirt guest agent to Ubuntu[6] - SLA - Allow resource time-sharing - Spice - Mac client (via mime based launch) - Spice - port XPI plug-in to Windows (not sure this will happen, but mime based launch allows using firefox now) - Spice - client for Ubuntu/Debian (should be covered via mime based launch)
Additional features I would like to see are: - oVirt guest agent for Ubuntu/openSUSE/SLES/Debian *
*Ubuntu is already available thanks to Zhou Zheng Sheng
- Resource pools At the moment only Quotas are available (which is great, btw), but in some cases it's necessary to implement resource pools. E.g. limit CPU, memory and network for a group of test VMs, but give full resources to production VMs. Maybe this could be done with cgroups? So e.g. I don't want to give a compile machine a full CPU core, but only 1 core at 500 MHz, or limit the bandwidth to e.g. 10 Mbit/s.
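For reference, the kind of hard cap being asked about here maps onto the kernel's cgroup CPU-bandwidth knobs. A rough sketch of those knobs (a scratch directory stands in for the real /sys/fs/cgroup/cpu mount so this runs unprivileged, and the values are made up; this is not oVirt's mechanism):

```shell
# Sketch of cgroup-v1 CPU bandwidth limits (hard caps, unlike shares).
# A scratch directory stands in for /sys/fs/cgroup/cpu so this runs
# without root; on a real host you would write to the mounted cgroupfs.
CG=./cgdemo/cpu/testvms
mkdir -p "$CG"
echo 100000 > "$CG/cpu.cfs_period_us"   # scheduler period: 100 ms
echo 50000  > "$CG/cpu.cfs_quota_us"    # quota: 50 ms => half a core total
# Network bandwidth would be capped separately, e.g. with tc on the VM's
# tap device (requires root, shown as a comment only):
# tc qdisc add dev vnet0 root tbf rate 10mbit burst 32kbit latency 400ms
cat "$CG/cpu.cfs_quota_us"
```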
Hi René, Due to time constraints, this did not make it into the 3.3 release; however, in master we already have network QoS, allowing you to limit bandwidth for vNICs. Also, we managed to implement CPU shares, which allows you to handle priorities. Both will be included in the next version, and hopefully some more. So if you ever feel like taking a ride with the bleeding edge, you have both network QoS and CPU shares waiting there.
- Upload ISOs within the GUI to ISO domain
- Use existing share for ISO domain When creating an ISO domain, oVirt creates it's own directory structure with IDs. It would be nice if an existing share could be used (e.g. an ISO share on NFS server which is used by other service, too) without creating the structure with IDs. I know that the IDs are needed internally but I think it should be possible to reuse an existing share.
Last but not least I want to thank all contributors of this project for their great work so far!
Regards, René
[2] hosted engine is in active development, but not released yet.
[3] host update is supported, but not for general yum update.
[4] a lot of improvements were done in this space, but I'm not sure if they cover this exact use case.
[5] backup API is now being pushed to master, and orchestration of backups should probably happen via 3rd-party backup vendors?
[6] I'm not sure packaging exists yet, but Ubuntu is covered for the basic functionality of the guest agent.

On Sun, 2013-09-08 at 18:03 -0400, Doron Fediuck wrote:
----- Original Message -----
From: "René Koch (ovido)" <r.koch@ovido.at> To: "Itamar Heim" <iheim@redhat.com> Cc: "ovirt-users" <users@ovirt.org> Sent: Thursday, September 5, 2013 11:31:57 AM Subject: Re: [Users] so, what do you want next in oVirt?
Hi,
Thanks for starting this thread again.
On Tue, 2013-08-20 at 17:19 -0400, Itamar Heim wrote:
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Thanks, Itamar
[1] from the top 12 V Allow disk resize
Great!
V Integrate Nagios/Zabbix monitoring - via a ui plugin
Zabbix isn't working yet with my ui plugin (is there maybe another one which integrates Zabbix?) - but it will be available in a future release. Btw, a new version of the monitoring ui plugin with new features will be out at the end of this week...
V Highly Available engine - via hosted engine[2] V Open vSwitch integration - via neutron integration X Allow cloning VMs without template
This is definitely one feature I would like to see.
? Enable hypervisor upgrade/updates through engine[3]
The next feature I would like to request.
V Allow engine on an oVirt hosted VM - via hosted engine[2] V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
Cool. Can this be done on oVirt-node as well or only on CentOS/RHEL/Fedora hosts?
X Integrate v2v into engine
Vote for this again :)
? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4] X Integrate scheduling of snapshots and VM export for backups in engine[5] V Spice – support Google Chrome - via mime based launch
At the moment spice support is really a pain (especially on Windows as the SpiceX.cab is missing). Great to hear that spice support has improved and Chrome can be used as well!
Other items mentioned in previous survey which should be covered by now: - Fix timeout when adding local host during all-in-one configuration - Fix engine set-up when SELinux is disabled - Provide packages for el6 (CentOS, Red Hat Enterprise Linux) - Allow multiple VMs to be deployed from the same template at the same time - ISO domains on local/GlusterS
ISO and export domains on FC/iSCSI would be good to have as well.
- Show IP addresses in Virtual Machines->Network Interfaces - OpenStack Quantum support (now called Neutron) - noVNC support - Support spice.html5 and websocket proxy - Add other guest OSes to list
Good to hear that other Linux distributions will be added as well.
- Port oVirt guest agent to Ubuntu[6] - SLA - Allow resource time-sharing - Spice - Mac client (via mime based launch) - Spice - port XPI plug-in to Windows (not sure this will happen, but mime based launch allows using firefox now) - Spice - client for Ubuntu/Debian (should be covered via mime based launch)
Additional features I would like to see are: - oVirt guest agent for Ubuntu/openSUSE/SLES/Debian *
*Ubuntu is already available thanks to Zhou Zheng Sheng
- Resource pools At the moment only Quotas are available (which is great, btw), but in some cases it's necessary to implement resource pools. E.g. limit CPU, memory and network for a group of test VMs, but give full resources to production VMs. Maybe this could be done with cgroups? So e.g. I don't want to give a compile machine a full CPU core, but only 1 core at 500 MHz, or limit the bandwidth to e.g. 10 Mbit/s.
Hi René, Due to time constraints, this did not make it into the 3.3 release; however, in master we already have network QoS, allowing you to limit bandwidth for vNICs. Also, we managed to implement CPU shares, which allows you to handle priorities. Both will be included in the next version, and hopefully some more.
So if you ever feel like taking a ride with the bleeding edge, you have both network QoS and CPU shares waiting there.
Thanks a lot for the information! It's great to know that this is/will be available.
- Upload ISOs within the GUI to ISO domain
- Use existing share for ISO domain When creating an ISO domain, oVirt creates its own directory structure with IDs. It would be nice if an existing share could be used (e.g. an ISO share on an NFS server which is used by other services, too) without creating the structure with IDs. I know that the IDs are needed internally, but I think it should be possible to reuse an existing share.
Last but not least I want to thank all contributors of this project for their great work so far!
Regards, René
[2] hosted engine is in active development, but not released yet.
[3] host update is supported, but not for general yum update.
[4] a lot of improvements were done in this space, but I'm not sure if they cover this exact use case.
[5] backup API is now being pushed to master, and orchestration of backups should probably happen via 3rd-party backup vendors?
[6] I'm not sure packaging exists yet, but Ubuntu is covered for the basic functionality of the guest agent.

On 09/05/2013 11:31 AM, René Koch (ovido) wrote:
- Resource pools At the moment only Quotas are available (which is great, btw), but in some cases it's necessary to implement resource pools. E.g. limit CPU, memory and network for a group of test VMs, but give full resources to production VMs. Maybe this could be done with cgroups? So e.g. I don't want to give a compile machine a full CPU core, but only 1 core at 500 MHz, or limit the bandwidth to e.g. 10 Mbit/s.
Doron - don't CPU shares provide this?

----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: "René Koch (ovido)" <r.koch@ovido.at>, "Doron Fediuck" <dfediuck@redhat.com> Cc: "ovirt-users" <users@ovirt.org> Sent: Sunday, October 6, 2013 10:41:27 PM Subject: Re: [Users] so, what do you want next in oVirt?
On 09/05/2013 11:31 AM, René Koch (ovido) wrote:
- Resource pools At the moment only Quotas are available (which is great, btw), but in some cases it's necessary to implement resource pools. E.g. limit CPU, memory and network for a group of test VMs, but give full resources to production VMs. Maybe this could be done with cgroups? So e.g. I don't want to give a compile machine a full CPU core, but only 1 core at 500 MHz, or limit the bandwidth to e.g. 10 Mbit/s.
Doron - don't CPU shares provide this?
CPU shares provide better prioritization, but not hard constraints. So for 3.3 you should be able to use them to prioritize CPU consumption, and hopefully we'll get to cgroup limitations in the coming version(s). As for network, we have Network QoS in master (but not in 3.3), which gives the limitations you were asking for. Current gaps are completing CPU limitations, memory, and storage. The latter has an API in VDSM which we need to verify and utilize.
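For reference, the libvirt primitives these two features build on look roughly as follows. The element names are libvirt's (`<cputune>/<shares>` and the `<bandwidth>` element on an interface); the values are illustrative, and oVirt/VDSM generate this XML themselves rather than expecting users to write it:

```shell
# Illustrative libvirt domain XML fragments for the two mechanisms discussed:
# CPU shares (relative priority, not a hard cap) and vNIC bandwidth limits.
# Values are made up; oVirt emits equivalent XML on the user's behalf.
cat > cputune-example.xml <<'EOF'
<cputune>
  <!-- a VM with shares 2048 gets ~4x the CPU of one with 512 under contention -->
  <shares>512</shares>
</cputune>
EOF
cat > vnic-qos-example.xml <<'EOF'
<interface type='bridge'>
  <source bridge='ovirtmgmt'/>
  <!-- average/peak in KiB/s, burst in KiB: roughly 10 Mbit/s sustained -->
  <bandwidth>
    <inbound average='1250' peak='2500' burst='256'/>
    <outbound average='1250'/>
  </bandwidth>
</interface>
EOF
cat cputune-example.xml vnic-qos-example.xml
```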

Hi all, ----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: users@ovirt.org Sent: Tuesday, August 20, 2013 23:19:16 Subject: [Users] so, what do you want next in oVirt?
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Thanks, Itamar
[1] from the top 12:
V Allow disk resize
V Integrate Nagios/Zabbix monitoring - via a ui plugin
V Highly Available engine - via hosted engine[2]
V Open vSwitch integration - via neutron integration
X Allow cloning VMs without template
? Enable hypervisor upgrade/updates through engine[3]
V Allow engine on an oVirt hosted VM - via hosted engine[2]
V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
X Integrate v2v into engine
? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4]
X Integrate scheduling of snapshots and VM export for backups in engine[5]
V Spice – support Google Chrome - via mime based launch
Other items mentioned in previous survey which should be covered by now:
- Fix timeout when adding local host during all-in-one configuration
- Fix engine set-up when SELinux is disabled
- Provide packages for el6 (CentOS, Red Hat Enterprise Linux)
- Allow multiple VMs to be deployed from the same template at the same time
- ISO domains on local/GlusterS
- Show IP addresses in Virtual Machines->Network Interfaces
- OpenStack Quantum support (now called Neutron)
- noVNC support
- Support spice.html5 and websocket proxy
- Add other guest OSes to list
- Port oVirt guest agent to Ubuntu[6]
- SLA - Allow resource time-sharing
- Spice - Mac client (via mime based launch)
- Spice - port XPI plug-in to Windows (not sure this will happen, but mime based launch allows using firefox now)
- Spice - client for Ubuntu/Debian (should be covered via mime based launch)
[2] hosted engine is in active development, but not released yet.
[3] host update is supported, but not for general yum update.
[4] a lot of improvements were done in this space, but I'm not sure if they cover this exact use case.
[5] backup API is now being pushed to master, and orchestration of backups should probably happen via 3rd-party backup vendors?
[6] I'm not sure packaging exists yet, but Ubuntu is covered for the basic functionality of the guest agent.
Thanks for this thread!
- iSCSI EqualLogic SAN support, or use standard iSCSI tools/configuration
- SSO for web UI and CLI (IPA integration)
- PXE boot for nodes
- VM dependencies on startup
Have a nice day. Regards. --- Baptiste

Hi all, what about a live snapshot delete function? It's great to create live snapshots, but for business-critical VMs it's also necessary to delete online snapshots. Greetz -----Original Message----- From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On behalf of Baptiste AGASSE Sent: Tuesday, September 10, 2013 17:58 To: Itamar Heim Cc: users@ovirt.org Subject: Re: [Users] so, what do you want next in oVirt? Hi all, ----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: users@ovirt.org Sent: Tuesday, August 20, 2013 23:19:16 Subject: [Users] so, what do you want next in oVirt?
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Thanks, Itamar
[1] from the top 12:
V Allow disk resize
V Integrate Nagios/Zabbix monitoring - via a ui plugin
V Highly Available engine - via hosted engine[2]
V Open vSwitch integration - via neutron integration
X Allow cloning VMs without template
? Enable hypervisor upgrade/updates through engine[3]
V Allow engine on an oVirt hosted VM - via hosted engine[2]
V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
X Integrate v2v into engine
? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4]
X Integrate scheduling of snapshots and VM export for backups in engine[5]
V Spice – support Google Chrome - via mime based launch
Other items mentioned in previous survey which should be covered by now:
- Fix timeout when adding local host during all-in-one configuration
- Fix engine set-up when SELinux is disabled
- Provide packages for el6 (CentOS, Red Hat Enterprise Linux)
- Allow multiple VMs to be deployed from the same template at the same time
- ISO domains on local/GlusterS
- Show IP addresses in Virtual Machines->Network Interfaces
- OpenStack Quantum support (now called Neutron)
- noVNC support
- Support spice.html5 and websocket proxy
- Add other guest OSes to list
- Port oVirt guest agent to Ubuntu[6]
- SLA - Allow resource time-sharing
- Spice - Mac client (via mime based launch)
- Spice - port XPI plug-in to Windows (not sure this will happen, but mime based launch allows using firefox now)
- Spice - client for Ubuntu/Debian (should be covered via mime based launch)
[2] hosted engine is in active development, but not released yet.
[3] host update is supported, but not for general yum update.
[4] a lot of improvements were done in this space, but I'm not sure if they cover this exact use case.
[5] backup API is now being pushed to master, and orchestration of backups should probably happen via 3rd-party backup vendors?
[6] I'm not sure packaging exists yet, but Ubuntu is covered for the basic functionality of the guest agent.
Thanks for this thread!
- iSCSI EqualLogic SAN support, or use standard iSCSI tools/configuration
- SSO for web UI and CLI (IPA integration)
- PXE boot for nodes
- VM dependencies on startup
Have a nice day. Regards. --- Baptiste
it-novum GmbH, i. A. Kai Wagner, Consultant, Tel: +49 (661) 103-762, Kai.Wagner@it-novum.com

I would also vote for EqualLogic support, or at least the possibility to set the iSCSI configuration to "manual", which should include the possibility to disable the "must use multipathd" behavior. Not every iSCSI SAN uses it... On Wed, Sep 11, 2013 at 7:02 AM, Wagner, Kai <Kai.Wagner@it-novum.com> wrote:
Hi all,
What about a live snapshot delete function? It's great to create live snapshots, but for business-critical VMs it's also necessary to delete online snapshots.
Greetz
-----Original Message----- From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On behalf of Baptiste AGASSE Sent: Tuesday, September 10, 2013 17:58 To: Itamar Heim Cc: users@ovirt.org Subject: Re: [Users] so, what do you want next in oVirt?
Hi all,
----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: users@ovirt.org Sent: Tuesday, August 20, 2013 23:19:16 Subject: [Users] so, what do you want next in oVirt?
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Thanks, Itamar
[1] from the top 12:
V Allow disk resize
V Integrate Nagios/Zabbix monitoring - via a ui plugin
V Highly Available engine - via hosted engine[2]
V Open vSwitch integration - via neutron integration
X Allow cloning VMs without template
? Enable hypervisor upgrade/updates through engine[3]
V Allow engine on an oVirt hosted VM - via hosted engine[2]
V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
X Integrate v2v into engine
? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4]
X Integrate scheduling of snapshots and VM export for backups in engine[5]
V Spice – support Google Chrome - via mime based launch
Other items mentioned in previous survey which should be covered by now:
- Fix timeout when adding local host during all-in-one configuration
- Fix engine set-up when SELinux is disabled
- Provide packages for el6 (CentOS, Red Hat Enterprise Linux)
- Allow multiple VMs to be deployed from the same template at the same time
- ISO domains on local/GlusterS
- Show IP addresses in Virtual Machines->Network Interfaces
- OpenStack Quantum support (now called Neutron)
- noVNC support
- Support spice.html5 and websocket proxy
- Add other guest OSes to list
- Port oVirt guest agent to Ubuntu[6]
- SLA - Allow resource time-sharing
- Spice - Mac client (via mime based launch)
- Spice - port XPI plug-in to Windows (not sure this will happen, but mime based launch allows using firefox now)
- Spice - client for Ubuntu/Debian (should be covered via mime based launch)
[2] hosted engine is in active development, but not released yet.
[3] host update is supported, but not for general yum update.
[4] a lot of improvements were done in this space, but I'm not sure if they cover this exact use case.
[5] backup API is now being pushed to master, and orchestration of backups should probably happen via 3rd-party backup vendors?
[6] I'm not sure packaging exists yet, but Ubuntu is covered for the basic functionality of the guest agent.
Thanks for this thread !
- iSCSI EqualLogic SAN support, or use standard iSCSI tools/configuration
- SSO for web UI and CLI (IPA integration)
- PXE boot for nodes
- VM dependencies on startup
Have a nice day.
Regards.
--- Baptiste
it-novum GmbH
i. A. Kai Wagner Consultant
Tel: +49 (661) 103-762 Fax: +49 (661) 103-17762
Kai.Wagner@it-novum.com
-- Sent from the Delta quadrant using Borg technology!

What about using ATA over Ethernet (AoE) as an optional storage transport within oVirt? Is this feasible? It seems that the protocol is fast, efficient, and cleaner than FCoE. Thanks for considering it as a possible feature request. On Sep 10, 2013 11:58 AM, "Baptiste AGASSE" <baptiste.agasse@lyra-network.com> wrote:
Hi all,
----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: users@ovirt.org Sent: Tuesday, August 20, 2013 23:19:16 Subject: [Users] so, what do you want next in oVirt?
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Thanks, Itamar
[1] from the top 12:
V Allow disk resize
V Integrate Nagios/Zabbix monitoring - via a ui plugin
V Highly Available engine - via hosted engine[2]
V Open vSwitch integration - via neutron integration
X Allow cloning VMs without template
? Enable hypervisor upgrade/updates through engine[3]
V Allow engine on an oVirt hosted VM - via hosted engine[2]
V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
X Integrate v2v into engine
? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4]
X Integrate scheduling of snapshots and VM export for backups in engine[5]
V Spice – support Google Chrome - via mime based launch
Other items mentioned in previous survey which should be covered by now:
- Fix timeout when adding local host during all-in-one configuration
- Fix engine set-up when SELinux is disabled
- Provide packages for el6 (CentOS, Red Hat Enterprise Linux)
- Allow multiple VMs to be deployed from the same template at the same time
- ISO domains on local/GlusterS
- Show IP addresses in Virtual Machines->Network Interfaces
- OpenStack Quantum support (now called Neutron)
- noVNC support
- Support spice.html5 and websocket proxy
- Add other guest OSes to list
- Port oVirt guest agent to Ubuntu[6]
- SLA - Allow resource time-sharing
- Spice - Mac client (via mime based launch)
- Spice - port XPI plug-in to Windows (not sure this will happen, but mime based launch allows using firefox now)
- Spice - client for Ubuntu/Debian (should be covered via mime based launch)
[2] hosted engine is in active development, but not released yet.
[3] host update is supported, but not for general yum update.
[4] a lot of improvements were done in this space, but I'm not sure if they cover this exact use case.
[5] backup API is now being pushed to master, and orchestration of backups should probably happen via 3rd-party backup vendors?
[6] I'm not sure packaging exists yet, but Ubuntu is covered for the basic functionality of the guest agent.
Thanks for this thread !
- iSCSI EqualLogic SAN support, or use standard iSCSI tools/configuration
- SSO for web UI and CLI (IPA integration)
- PXE boot for nodes
- VM dependencies on startup
Have a nice day.
Regards.
--- Baptiste

Il 20/08/2013 23:19, Itamar Heim ha scritto:
earlier in the year we did a survey for feature requests / improvements / etc.
since a lot of things were added, and priorities usually change, I'd like to ask again for "what do you need the most from oVirt / what are your pain points" next?
below[1] I've listed my understanding of what already went in from previous survey requests (to various degrees of coverage).
Thanks, Itamar
[1] from the top 12:
V Allow disk resize
V Integrate Nagios/Zabbix monitoring - via a ui plugin
V Highly Available engine - via hosted engine[2]
V Open vSwitch integration - via neutron integration
X Allow cloning VMs without template
? Enable hypervisor upgrade/updates through engine[3]
V Allow engine on an oVirt hosted VM - via hosted engine[2]
V Enable guest configuration (root password, SSH keys, network) via guest agent in engine - via cloud-init
X Integrate v2v into engine
? Bond/extend ovirtmgmt with a second network for HA/increased bandwidth[4]
X Integrate scheduling of snapshots and VM export for backups in engine[5]
V Spice – support Google Chrome - via mime based launch
Other items mentioned in previous survey which should be covered by now:
- Fix timeout when adding local host during all-in-one configuration
- Fix engine set-up when SELinux is disabled
- Provide packages for el6 (CentOS, Red Hat Enterprise Linux)
- Allow multiple VMs to be deployed from the same template at the same time
- ISO domains on local/GlusterS
- Show IP addresses in Virtual Machines->Network Interfaces
- OpenStack Quantum support (now called Neutron)
- noVNC support
- Support spice.html5 and websocket proxy
- Add other guest OSes to list
- Port oVirt guest agent to Ubuntu[6]
- SLA - Allow resource time-sharing
- Spice - Mac client (via mime based launch)
- Spice - port XPI plug-in to Windows (not sure this will happen, but mime based launch allows using firefox now)
- Spice - client for Ubuntu/Debian (should be covered via mime based launch)
[2] hosted engine is in active development, but not released yet.
[3] host update is supported, but not for general yum update.
[4] a lot of improvements were done in this space, but I'm not sure if they cover this exact use case.
[5] backup API is now being pushed to master, and orchestration of backups should probably happen via 3rd-party backup vendors?
[6] I'm not sure packaging exists yet, but Ubuntu is covered for the basic functionality of the guest agent.
I think we should also consider ensuring compatibility with the OVF 2.0.1 standard: http://schemas.dmtf.org/ovf/envelope/2/dsp8023_2.0.xsd Not sure about the actual compatibility level. -- Sandro Bonazzola
participants (25)
- Alex Leonhardt
- alireza sadeh seighalan
- Andrew Lau
- Baptiste AGASSE
- Chris Smith
- Dave Neary
- Dead Horse
- Deepthi Dharwar
- Doron Fediuck
- Dotan Paz
- Itamar Heim
- Jakub Bittner
- Joop van de Wege
- Karli Sjöberg
- Michal Skrivanek
- NOC
- noc
- René Koch (ovido)
- Sandro Bonazzola
- Sigbjorn Lie
- squadra
- Steve Dainard
- suporte@logicworks.pt
- Wagner, Kai
- Yair Zaslavsky