/dev/disk/by-id could be problematic: it only shows disks that already carry an identifier. For example, I've just created a node with 3 disks, and in Anaconda I chose only the first disk. After the node installation and reboot, /dev/disk/by-id shows only the DM device and the DVD, not the two unformatted disks (which can be seen using the lsscsi command). Anaconda, however, does see the disks, their details, etc.
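To reproduce this kind of observation, it can help to map each /dev/disk/by-id entry back to the kernel device it resolves to and see which disks are missing. A minimal Python sketch (illustrative only, not part of cockpit-ovirt; the helper names are mine):

```python
#!/usr/bin/env python3
"""Sketch: map /dev/disk/by-id entries back to kernel disk names,
to check which disks udev has given persistent IDs."""
import os

BY_ID = "/dev/disk/by-id"


def ids_by_kernel_name(links):
    """Group by-id link names by the kernel device they resolve to.

    `links` is an iterable of (link_name, resolved_target) pairs,
    e.g. ("wwn-0x5000c500a1b2c3d4", "/dev/sda").
    """
    mapping = {}
    for name, target in links:
        mapping.setdefault(os.path.basename(target), []).append(name)
    return mapping


def scan_by_id(directory=BY_ID):
    """Read the real symlink farm and resolve each entry."""
    links = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        links.append((name, os.path.realpath(path)))
    return ids_by_kernel_name(links)


if __name__ == "__main__" and os.path.isdir(BY_ID):
    for dev, ids in sorted(scan_by_id().items()):
        print(dev, "->", ", ".join(ids))
```

A disk that shows up in lsscsi but not in this listing has no by-id link at all, which matches what I'm describing above.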
On Sat, Sep 29, 2018 at 12:57 PM Yaniv Kaul <ykaul@redhat.com> wrote:

> On Fri, Sep 28, 2018, 7:16 PM Hetz Ben Hamo <hetz@hetz.biz> wrote:
>
>> Hi,
>>
>> Gobinda, great work!
>>
>> One thing though - the device names (sda, sdb, etc.). On many servers it's hard to know which disk is which. Let's say I have 10 spinning disks + 2 SSDs. Which one is sda? What about NVMe? Worse - sometimes replacing disks renames sda to something else. We used to have the same problem with NICs, and that has been resolved on CentOS/RHEL 7.x.
>>
>> Could the HCI part - the disk-selection part specifically - give more details? Maybe the disk ID or WWN, or anything that can identify a disk?
>
> /dev/disk/by-id is the right identifier.
>
> During installation, it'd be nice if it could show as much data as possible: sdX, /dev/disk/by-id, size, and perhaps the manufacturer.
> Y.
>
>> Also - SSD caching: most of the time it is recommended to use 2 drives, if possible, for good performance. Can a user select X number of drives?
>>
>> Thanks
>>
>> On Fri, Sep 28, 2018 at 6:43 PM Gobinda Das <godas@redhat.com> wrote:
>>
>>> Hi All,
>>> Status update on "Hyperconverged Gluster oVirt support"
>>>
>>> Features Completed:
>>> ================
>>> cockpit-ovirt
>>> -------------
>>> 1- Asymmetric brick configuration. Bricks can be configured per host, i.e. the user can use sdb from host1, sdc from host2, and sdd from host3.
>>> 2- Dedupe and compression integration via VDO support (see https://github.com/dm-vdo/kvdo). Gluster bricks are created on VDO devices.
>>> 3- LVM cache configuration support (configure a cache on a fast block device, such as an SSD, to improve the performance of larger, slower logical volumes).
>>> 4- Auto addition of the 2nd and 3rd hosts in a 3-node setup during deployment.
>>> 5- Auto creation of storage domains based on the gluster volumes created during setup.
>>> 6- Single-node deployment support via the Cockpit UI. For details on single-node deployment, see https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged/
>>> 7- Gluster Management Dashboard (the dashboard shows the nodes in the cluster, the volumes, and the bricks. Users can expand the cluster and create new volumes on existing cluster nodes.)
>>>
>>> oVirt
>>> -------
>>> 1- Reset-brick support from the UI, to allow users to replace a faulty brick.
>>> 2- Create-brick from the engine now supports configuring an SSD device as an lvmcache device when bricks are created on spinning disks.
>>> 3- VDO monitoring.
>>>
>>> GlusterFS
>>> ---------------
>>> Enhancements to fuse performance by 15x:
>>> 1. Cluster eager-lock change, for better detection of multiple clients.
>>> 2. Changed the qemu aio option to "native" instead of "threads".
>>>
>>> end-to-end deployment:
>>> --------------------------------
>>> 1- End-to-end deployment of a Gluster + oVirt hyperconverged environment using ansible roles (https://github.com/gluster/gluster-ansible/tree/master/playbooks). The only prerequisite is a CentOS node/oVirt Node.
>>>
>>> Future Plan:
>>> ==========
>>> cockpit-ovirt:
>>> 1- ansible-roles integration for deployment
>>> 2- Support for different volume types
>>>
>>> vdsm:
>>> 1- Python 3 compatibility of vdsm-gluster
>>> 2- Native 4K support
>>>
>>> --
>>> Thanks,
>>> Gobinda
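The disk-detail view requested above (sdX, /dev/disk/by-id, size, manufacturer) could be prototyped by reading `lsblk`'s JSON output. A hedged sketch, assuming the standard util-linux lsblk columns; the `summarize` helper is mine and not the actual cockpit-ovirt code:

```python
#!/usr/bin/env python3
"""Sketch: collect per-disk details (kernel name, size, model, WWN)
from `lsblk -J`, roughly the data a disk-selection UI could show."""
import json
import shutil
import subprocess

# -d: whole disks only; -J: JSON; -o: the columns we care about.
LSBLK_CMD = ["lsblk", "-J", "-d", "-o", "NAME,SIZE,MODEL,SERIAL,WWN"]


def summarize(lsblk_json):
    """Turn parsed `lsblk -J` output (a dict) into one line per disk."""
    rows = []
    for dev in lsblk_json.get("blockdevices", []):
        rows.append("{name}  {size}  model={model}  wwn={wwn}".format(
            name=dev.get("name") or "?",
            size=dev.get("size") or "?",
            model=(dev.get("model") or "unknown").strip(),
            wwn=dev.get("wwn") or "n/a",
        ))
    return rows


if __name__ == "__main__" and shutil.which("lsblk"):
    out = subprocess.run(LSBLK_CMD, capture_output=True, text=True)
    if out.returncode == 0:
        for line in summarize(json.loads(out.stdout)):
            print(line)
```

Something along these lines would let the deployment UI label each device with a stable identifier (WWN) and a human-readable model, instead of a bare sdX name.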
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XCFITLLQTODFK6NIRPBTRKKYCWKO6KBP/