Hi,
Gobinda, great work!
One thing though - the device names (sda, sdb, etc.).
On many servers it's hard to know which disk is which. Let's say I have 10
spinning disks + 2 SSDs. Which one is sda? What about NVMe? Worse -
sometimes replacing a disk reassigns sda to a different device. We used to
have the same problem with NICs, and that has since been resolved on
CentOS/RHEL 7.x. Could the HCI part - the disk selection part specifically -
give more details? Maybe a disk ID or WWN, or anything else that uniquely
identifies a disk?
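(For what it's worth, the kernel already exposes persistent identifiers the
UI could surface; something like the following shows the WWN and serial per
disk - column availability depends on the util-linux version:)

    # Persistent symlinks that survive reboots and disk replacement
    ls -l /dev/disk/by-id/
    # Kernel name next to rotational flag, model, serial, and WWN
    lsblk -o NAME,SIZE,ROTA,MODEL,SERIAL,WWN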
Also, on SSD caching: most of the time it is recommended to use 2 drives
if possible for good performance. Can a user select an arbitrary number of
drives?
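(LVM itself can already stripe a cache pool across two SSDs, so this seems
doable; a minimal sketch, assuming a volume group gluster_vg with a slow
data LV lv_data and two SSDs sdk/sdl - all names hypothetical:)

    # Add both SSDs to the volume group backing the brick
    pvcreate /dev/sdk /dev/sdl
    vgextend gluster_vg /dev/sdk /dev/sdl
    # Create a 2-way striped LV on the SSDs, then convert it to a cache pool
    lvcreate -L 400G -i 2 -n lv_cache gluster_vg /dev/sdk /dev/sdl
    lvconvert --type cache-pool gluster_vg/lv_cache
    # Attach the cache pool to the slow data LV
    lvconvert --type cache --cachepool gluster_vg/lv_cache gluster_vg/lv_data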
Thanks
On Fri, Sep 28, 2018 at 6:43 PM Gobinda Das <godas(a)redhat.com> wrote:
Hi All,
Status update on "Hyperconverged Gluster oVirt support"
Features Completed:
================
cockpit-ovirt
-------------
1- Asymmetric brick configuration. Bricks can be configured on a per-host
basis, e.g. the user can use sdb from host1, sdc from host2, and sdd from
host3.
2- Dedupe and compression integration via VDO support (see
https://github.com/dm-vdo/kvdo). Gluster bricks are created on VDO devices
(see the sketch after this list).
3- LVM cache configuration support (configure a cache on a fast block
device such as an SSD to improve the performance of larger, slower logical
volumes)
4- Auto addition of the 2nd and 3rd hosts in a 3-node setup during
deployment
5- Auto creation of storage domains based on the gluster volumes created
during setup
6- Single node deployment support via the Cockpit UI. For details on
single node deployment see
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_no...
7- Gluster Management Dashboard (the dashboard shows the nodes in the
cluster, the volumes, and the bricks; users can expand the cluster and
create new volumes on existing cluster nodes)
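For item 2 above, a minimal sketch of what a brick on a VDO device looks
like from the shell (device names and sizes are illustrative, not what
cockpit-ovirt actually generates):

    # Create the dedupe/compression layer on the raw disk
    vdo create --name=vdo_sdb --device=/dev/sdb --vdoLogicalSize=10T
    # Build the gluster brick on top of the VDO device as usual
    pvcreate /dev/mapper/vdo_sdb
    vgcreate gluster_vg /dev/mapper/vdo_sdb
    lvcreate -l 100%FREE -n lv_brick gluster_vg
    mkfs.xfs -K /dev/gluster_vg/lv_brick
    # Check dedupe/compression savings once data lands
    vdostats --human-readable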
oVirt
-------
1- Reset brick support from the UI, to allow users to replace a faulty
brick (see the sketch after this list)
2- Create brick from the engine now supports configuring an SSD as the
lvmcache device when bricks are created on spinning disks
3- VDO monitoring
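The flow behind item 1 is gluster's reset-brick sequence; roughly, with
example volume and brick paths:

    # Take the faulty brick offline
    gluster volume reset-brick data host1:/gluster_bricks/data/data start
    # ...replace the disk, recreate and remount the brick filesystem...
    # Re-register the same brick path and let self-heal repopulate it
    gluster volume reset-brick data host1:/gluster_bricks/data/data \
        host1:/gluster_bricks/data/data commit force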
GlusterFS
---------------
Performance with the FUSE client enhanced by 15x, via:
1. cluster.eager-lock changes, for better detection of multiple clients
2. Changing the qemu aio option to "native" instead of "threads" (see the
fragment below)
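For change 2, the relevant piece is just the aio flag on the qemu drive
options (a fragment only; the file path and the other options are
placeholders):

    # aio=native uses kernel AIO and requires O_DIRECT, hence cache=none
    -drive file=/path/to/vm-disk.img,format=raw,cache=none,aio=native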
End-to-end deployment:
--------------------------------
1- End-to-end deployment of a Gluster + oVirt hyperconverged environment
using ansible roles (
https://github.com/gluster/gluster-ansible/tree/master/playbooks ). The
only prerequisite is a CentOS node/oVirt Node
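Running those roles is a normal ansible invocation; a sketch, assuming an
inventory listing the three nodes - the playbook filename below is
hypothetical, pick whichever playbook in that directory matches your
topology:

    git clone https://github.com/gluster/gluster-ansible.git
    cd gluster-ansible/playbooks
    # Deploy against the freshly installed CentOS/oVirt nodes
    ansible-playbook -i inventory.yml hc-deployment.yml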
Future Plan:
==========
cockpit-ovirt:
1- ansible-roles integration for deployment
2- Support for different volume types
vdsm:
1- Python3 compatibility of vdsm-gluster
2- Native 4K support
--
Thanks,
Gobinda