On Sat, Nov 21, 2015, at 13:59, Dan Kenigsberg wrote:
On Fri, Nov 20, 2015 at 01:54:35PM +0100, Giuseppe Ragusa wrote:
> Hi all,
> I go on with my wishlist, derived from both solitary mumblings and community talks
> at the first Italian oVirt Meetup.
>
> I offer to help in coding (work/family schedules permitting) but keep in mind that
> I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my
> less-than-newbie Python too...)
>
> I've sent separate wishlist messages for oVirt Node and Engine.
>
> VDSM:
>
> *) allow VDSM to configure/manage Samba, CTDB and Ganesha (specifically, I'm
> thinking of the GlusterFS integration); there are related wishlist items on
> configuring/managing Samba/CTDB/Ganesha on the Engine and on oVirt Node
I'd appreciate a more detailed feature definition. Vdsm (and oVirt) try
to configure only the things that are needed for their own usage. What do you
want to control? When? You're welcome to draft a feature page prior to
coding the fix ;-)
I was thinking of adding CIFS/NFSv4 functionality to a hyperconverged cluster
(GlusterFS/oVirt) which would have separate volumes for virtual machine storage (one
volume for the Engine and one for the other VMs, with no CIFS/NFSv4 capabilities offered)
and for data shares (directly accessible by clients on the LAN and obviously from local
VMs too). Think of it as a 3-node HA NetApp+VMware killer ;-)
The UI idea (but that would be the Engine part, I understand) was along the lines of
enabling CIFS and/or NFSv4 sharing for a GlusterFS data volume with a single checkbox, then
optionally adding any further specific options (allowed hosts, users/groups for read/write
access, network recycle_bin etc.); global Samba (domain/workgroup membership etc.) and
CTDB (IPs/interfaces) configuration parameters would be needed too.
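To give an idea of what such a checkbox would generate, here is a rough smb.conf sketch
for a CTDB-clustered Samba exporting a GlusterFS volume through the vfs_glusterfs module;
the volume name "data", the workgroup and the group name are made up for illustration:

```ini
[global]
    clustering = yes                  ; CTDB-managed Samba cluster
    workgroup = EXAMPLE               ; hypothetical domain/workgroup
    security = user

[data]
    ; hypothetical share backed by the GlusterFS volume "data"
    vfs objects = glusterfs
    glusterfs:volume = data
    glusterfs:volfile_server = localhost
    path = /
    read only = no
    valid users = @datausers          ; hypothetical group for access control
```

The per-share options map naturally onto the "further specific options" above, while the
[global] section would come from the cluster-wide Samba/CTDB parameters.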
I have no experience with a clustered/HA NFS-Ganesha configuration on GlusterFS, but (from
a superficial skim through the docs) it seems that it was not possible at all before 2.2
and that it now needs a full Pacemaker/Corosync setup too (contrary to the IBM-GPFS-backed
case), so that could be a problem.
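For reference, the non-HA part of the Ganesha side is just an EXPORT block in ganesha.conf
using the GLUSTER FSAL; volume name and paths below are made up for illustration:

```
EXPORT {
    Export_Id = 1;            # arbitrary unique id
    Path = "/data";           # hypothetical export path
    Pseudo = "/data";         # NFSv4 pseudo-filesystem path
    Access_Type = RW;
    Protocols = 4;            # NFSv4 only
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";
        Volume = "data";      # hypothetical GlusterFS volume
    }
}
```

The hard part is not this fragment but the failover of the virtual IPs, which is where the
Pacemaker/Corosync requirement comes in.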
This VDSM wishlist item was driven by the idea that all actions performed by the Engine
through the hosts/nodes (and so the future GlusterFS/Samba/CTDB ones too) were somehow
"mediated" by VDSM and its API, but if this is not the case, then I withdraw my
suggestion here and will try to pursue it only on the Engine/Node side ;)
Many thanks for your attention.
Regards,
Giuseppe
> *) add Open vSwitch direct support (not Neutron-mediated); there
> are related wishlist items on configuring/managing Open vSwitch on oVirt Node and on the
> Engine
That's on our immediate roadmap. Soon, vdsm-hook-ovs will be ready for
testing.
>
> *) add DRBD9 as a supported Storage Domain type; there are related wishlist items on
> configuring/managing DRBD9 on the Engine and on oVirt Node
>
> *) allow VDSM to configure/manage containers (maybe by extending it with the LXC
> libvirt driver, similarly to the experimental work done to allow Xen VM
> management); there are related wishlist items on configuring/managing containers on the
> Engine and on oVirt Node
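For context, the LXC libvirt driver mentioned above takes a domain definition much like a
KVM one, so VDSM's existing libvirt plumbing would be a natural fit; a minimal sketch
(container name, rootfs path and bridge are hypothetical):

```xml
<domain type='lxc'>
  <name>container01</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>  <!-- process started inside the container -->
  </os>
  <devices>
    <filesystem type='mount'>
      <source dir='/var/lib/containers/container01/rootfs'/>
      <target dir='/'/>
    </filesystem>
    <interface type='bridge'>
      <source bridge='ovirtmgmt'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
```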
>
> *) add a VDSM_remote mode (for lack of a better name, but mainly inspired by
> pacemaker_remote) to be used inside a guest by the above-mentioned container support
> (giving the Engine the required visibility on the managed containers, but excluding the
> "virtual node" from power management and other unsuitable actions)
>
> Regards,
> Giuseppe
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users