On Fri, Nov 20, 2015, at 13:54, Giuseppe Ragusa wrote:
Hi all,
I go on with my wishlist, derived from both solitary mumblings and community talks at
the first Italian oVirt Meetup.
I offer to help with the coding (work/family schedule permitting), but keep in mind that
I'm a sysadmin with mainly C and bash-scripting skills (though I hope to improve my
less-than-newbie Python too...).
I've sent separate wishlist messages for oVirt Node and VDSM.
oVirt Engine:
*) add Samba/CTDB/Ganesha capabilities (maybe in the GlusterFS management UI); there are
related wishlist items on configuring/managing Samba/CTDB/Ganesha on oVirt Node and on
VDSM
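Just to make this wish a bit more concrete: what I'd like the Engine to generate and keep
in sync on the nodes is essentially the usual Samba-on-Gluster and Ganesha-on-Gluster
plumbing (plus the CTDB nodes/public_addresses files and a recovery lock on a shared
Gluster volume); a minimal sketch, where volume/share names are purely my own examples:

    # smb.conf: share backed by a Gluster volume through vfs_glusterfs
    [vmshare]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = datavol
        glusterfs:logfile = /var/log/samba/glusterfs-datavol.log
        kernel share modes = no
        read only = no

    # ganesha.conf: export of the same volume through FSAL_GLUSTER
    EXPORT {
        Export_Id = 1;
        Path = "/datavol";
        Pseudo = "/datavol";
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost";
            Volume = "datavol";
        }
    }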
*) add the ability to manage containers (maybe initially as an exclusive cluster type but
allowing it to coexist with GlusterFS); there are related wishlist items on supporting
containers on the oVirt Node and on VDSM
*) add Open vSwitch direct support (not Neutron-mediated); there are related wishlist
items on configuring/managing Open vSwitch on oVirt Node and on VDSM
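By "direct support" I mean the Engine/VDSM driving Open vSwitch themselves, i.e. roughly
the plumbing I currently do by hand on the hosts; just to illustrate the kind of commands
involved (bridge/interface names are only examples of mine):

    # create an OVS bridge and attach the physical uplink
    ovs-vsctl add-br ovsbr0
    ovs-vsctl add-port ovsbr0 em1

    # attach a VM vNIC to the bridge as an access port on VLAN 100
    ovs-vsctl add-port ovsbr0 vnet0 tag=100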
*) add DRBD9 as a supported Storage Domain type (in hyperconverged (HC) / Hosted Engine (HE)
setups too), managed from the Engine UI
similarly to GlusterFS; there are related wishlist items on configuring/managing DRBD9 on
oVirt Node and on VDSM
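Here I imagine the Engine generating and distributing the per-domain resource definitions
(or driving drbdmanage) much like it handles Gluster volumes today; a rough sketch of the
kind of DRBD9 resource it would own, with hostnames/devices that are purely my own examples:

    resource ovirt_data {
        device    /dev/drbd0;
        disk      /dev/vg_drbd/ovirt_data;
        meta-disk internal;
        on node1 {
            node-id 0;
            address 10.10.10.1:7789;
        }
        on node2 {
            node-id 1;
            address 10.10.10.2:7789;
        }
        on node3 {
            node-id 2;
            address 10.10.10.3:7789;
        }
        connection-mesh {
            hosts node1 node2 node3;
        }
    }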
*) add support for managing/limiting GlusterFS heal/rebalance bandwidth usage in HC setups
[1]; this is actually a GlusterFS wishlist item first and foremost, but I hope our use
case could be considered compelling enough to "force their hand" a bit ;)
I've just posted a corresponding RFE for GlusterFS at:
http://www.gluster.org/pipermail/gluster-devel/2015-November/047238.html
Please upvote it if you think it's needed ;-)
Regards,
Giuseppe
[1] bandwidth limiting seems to be supported only for geo-replication on the GlusterFS
side; it is my understanding that on non-HC setups the heal/rebalance traffic can be kept
separate from hypervisor/client traffic, provided that a separate, Gluster-only network is
physically available and the Gluster cluster nodes have been peer-probed on those network
addresses
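For reference, the kind of non-HC setup I mean in [1] simply peer-probes (and defines
bricks on) the storage-network names, so that all brick traffic, heal/rebalance included,
stays off the hypervisor/client network; node names below are just examples of mine:

    # probe the peers on their dedicated storage-network addresses
    gluster peer probe node1-storage.example.com
    gluster peer probe node2-storage.example.com
    gluster peer probe node3-storage.example.com

    # define the bricks on the same storage-network names
    gluster volume create datavol replica 3 \
        node1-storage.example.com:/bricks/datavol/brick \
        node2-storage.example.com:/bricks/datavol/brick \
        node3-storage.example.com:/bricks/datavol/brick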