/dev/disk/by-id could be problematic. It only shows disks that have been
formatted.
For example, I've just created a node with 3 disks, and in Anaconda I chose
only the first disk. After the node installation and reboot, I see under
/dev/disk/by-id only the DM device and the DVD, not the two unformatted
disks (which can be seen using the lsscsi command).
Anaconda, however, does see the disks, their details, etc.
That's not what I know. It might be something with udev or some filtering,
but I was certainly not aware it's related to formatting.
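
A quick way to compare the two views (just a sketch using standard sysfs
and udev paths, nothing oVirt-specific):

    import os

    # Whole block devices the kernel knows about.
    kernel_disks = sorted(os.listdir("/sys/block"))

    # Devices that got a persistent link under /dev/disk/by-id.
    by_id = {}
    for name in os.listdir("/dev/disk/by-id"):
        target = os.path.realpath(os.path.join("/dev/disk/by-id", name))
        by_id.setdefault(os.path.basename(target), []).append(name)

    for disk in kernel_disks:
        ids = by_id.get(disk, [])
        print("%-8s %s" % (disk, ", ".join(ids) if ids else "<no by-id link>"))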
On Sat, Sep 29, 2018 at 12:57 PM Yaniv Kaul <ykaul(a)redhat.com> wrote:
> On Fri, Sep 28, 2018, 7:16 PM Hetz Ben Hamo <hetz(a)hetz.biz> wrote:
>> Gobinda, great work!
>> One thing though - the device names (sda, sdb, etc.).
>> On many servers, it's hard to know which disk is which. Let's say I have
>> 10 spinning disks + 2 SSDs. Which one is sda? What about NVMe? Worse,
>> sometimes replacing disks changes which device ends up as sda. We used to
>> have the same problem with NICs, and that has since been resolved in
>> CentOS/RHEL 7.x.
>> Could the HCI part - the disk selection part specifically - give more
>> details? Maybe the disk ID or WWN, or anything that can identify a disk?
> /dev/disk/by-id is the right identifier.
> During installation, it'd be nice if it could show as much data as
> possible - sdX, /dev/disk/by-id, size and perhaps manufacturer.
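>
> A rough sketch of pulling that kind of data from sysfs, just to show
> what's available (the fields and formatting here are my own example, not
> what the installer does today):
>
>     import os
>
>     def read(path):
>         try:
>             with open(path) as f:
>                 return f.read().strip()
>         except OSError:
>             return ""
>
>     for dev in sorted(os.listdir("/sys/block")):
>         # /sys/block/<dev>/size is in 512-byte sectors.
>         size_gib = int(read("/sys/block/%s/size" % dev) or 0) * 512 / 2**30
>         vendor = read("/sys/block/%s/device/vendor" % dev)
>         model = read("/sys/block/%s/device/model" % dev)
>         print("%-8s %8.1f GiB  %s %s" % (dev, size_gib, vendor, model))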
>> Also, regarding SSD caching: most of the time it is recommended to use 2
>> drives, if possible, for good performance. Can a user select any number
>> of drives?
>> On Fri, Sep 28, 2018 at 6:43 PM Gobinda Das <godas(a)redhat.com> wrote:
>>> Hi All,
>>> Status update on "Hyperconverged Gluster oVirt support"
>>> Features Completed:
>>> 1- Asymmetric brick configuration. Bricks can be configured on a
>>> per-host basis, i.e. the user can use sdb from host1, sdc from host2,
>>> and sdd from host3.
>>> 2- Dedupe and compression integration via VDO support. Gluster bricks
>>> are created on top of VDO volumes (see the sketch after this list).
>>> 3- LVM cache configuration support (configure a cache on a fast block
>>> device, such as an SSD, to improve the performance of larger, slower
>>> logical volumes; see the second sketch after this list).
>>> 4- Auto addition of the 2nd and 3rd hosts in a 3-node setup during
>>> deployment.
>>> 5- Auto creation of storage domains based on gluster volumes created
>>> during setup
>>> 6- Single node deployment support via Cockpit UI. For details on
>>> single node deployment -
>>> 7- Gluster Management Dashboard (the dashboard shows the nodes in the
>>> cluster, the volumes and the bricks; users can expand the cluster and
>>> create new volumes on the existing cluster nodes).
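>>>
>>> To illustrate the VDO layering from item 2 above (device name, VDO
>>> volume name and sizes below are made-up examples; this is the general
>>> idea, not the exact commands the deployment wizard runs):
>>>
>>>     import subprocess
>>>
>>>     def run(cmd):
>>>         print("+", " ".join(cmd))
>>>         subprocess.run(cmd, check=True)
>>>
>>>     disk = "/dev/sdb"       # raw disk intended for the brick (example)
>>>     vdo_name = "vdo_sdb"    # hypothetical VDO volume name
>>>
>>>     # Dedup/compression layer on the raw disk, then the brick
>>>     # filesystem on top of the VDO device.
>>>     run(["vdo", "create", "--name=" + vdo_name, "--device=" + disk,
>>>          "--vdoLogicalSize=3T"])
>>>     run(["mkfs.xfs", "-K", "/dev/mapper/" + vdo_name])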
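>>>
>>> And for the LVM cache support in item 3, the generic lvmcache procedure
>>> it builds on (VG, LV and device names are made-up examples, not what
>>> the tool generates):
>>>
>>>     import subprocess
>>>
>>>     def run(cmd):
>>>         print("+", " ".join(cmd))
>>>         subprocess.run(cmd, check=True)
>>>
>>>     vg = "gluster_vg_sdb"        # VG holding the slow data LV (example)
>>>     data_lv = "gluster_lv_data"  # the brick's logical volume (example)
>>>     ssd = "/dev/sdf"             # fast SSD device (example)
>>>
>>>     # Add the SSD to the VG, carve a cache pool out of it, and attach
>>>     # the pool to the slow logical volume.
>>>     run(["vgextend", vg, ssd])
>>>     run(["lvcreate", "-L", "100G", "-n", "cachepool", vg, ssd])
>>>     run(["lvconvert", "-y", "--type", "cache",
>>>          "--cachepool", "%s/cachepool" % vg, "%s/%s" % (vg, data_lv)])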
>>> 1- Reset brick support from the UI, to allow users to replace a faulty
>>> brick.
>>> 2- Create brick from the engine now supports configuring an SSD device
>>> as an lvmcache device when bricks are created on spinning disks.
>>> 3- VDO monitoring
>>> Enhancements to performance with FUSE by 15x:
>>> 1. cluster.eager-lock change for better detection of multiple clients.
>>> 2. Changing the qemu aio option to "native" instead of "threads".
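>>>
>>> (The aio setting ends up as the io= attribute on the disk <driver>
>>> element in the libvirt domain XML. A small stdlib-only sketch to check
>>> what a guest is using, given a file containing the output of
>>> "virsh dumpxml <vm>"; the script name is just an example:)
>>>
>>>     import sys
>>>     import xml.etree.ElementTree as ET
>>>
>>>     # Usage: python3 check_aio.py domain.xml
>>>     tree = ET.parse(sys.argv[1])
>>>     for disk in tree.findall("./devices/disk"):
>>>         driver = disk.find("driver")
>>>         target = disk.find("target")
>>>         if driver is None or target is None:
>>>             continue
>>>         print(target.get("dev"),
>>>               "aio=" + (driver.get("io") or "default"),
>>>               "cache=" + (driver.get("cache") or "default"))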
>>> End-to-end deployment:
>>> 1- End-to-end deployment of a Gluster + oVirt hyperconverged
>>> environment using Ansible roles (
>>> The only prerequisite is a CentOS node/oVirt Node.
>>> Future Plan:
>>> 1- ansible-roles integration for deployment
>>> 2- Support for different volume types
>>> 1- Python3 compatibility of vdsm-gluster
>>> 2- Native 4K support