Hello,
I think that in both the oVirt and Red Hat official docs, the steps for adding a new Gluster storage domain to an existing installation are quite confusing and misaligned.
Below I explain my reasons for both.
In the meantime, suppose I have a single-host HCI setup with self-hosted engine (though hints for multi-host setups would be useful too) and the 3 storage domains configured during install (engine and data on one disk, vmstore on another).
The system initially had 3 disks: the first used for the ovirt-node-ng system, the second for the engine and data Gluster storage domains, the third for vmstore.
Now I add a fourth disk, say 4 TB in size.
I would like to create a new Gluster storage domain on it.
What are the suggested steps?
BTW: after booting, the new disk has been automatically included in multipath:
eui.01000000010000005cd2e4b5e7db4d51 dm-6 NVME,INTEL SSDPEDKX040T7
size=3.6T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
`- 1:0:1:0 nvme1n1 259:1 active undef running
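I suppose the first step is to exclude this local disk from multipath, since it will be used as a Gluster brick. A minimal sketch of what I have in mind, using the wwid shown above (the drop-in file name is my invention; vdsm manages /etc/multipath.conf itself, so a conf.d file seems safer):

# /etc/multipath/conf.d/gluster-bricks.conf
blacklist {
    wwid "eui.01000000010000005cd2e4b5e7db4d51"
}

# then flush the existing map and reload multipathd
multipath -f eui.01000000010000005cd2e4b5e7db4d51
systemctl reload multipathd

Is that the recommended way?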
Coming back to the docs:
- oVirt documentation
I base my considerations on this page:
1) The picture on that page is still based on the 3.6 GUI, so at first glance the page looks like it may be out of date.
2) I would put the section "Creating a Storage Volume" before "Attaching a Gluster Storage Volume as a Storage Domain", not the other way around as it is now.
3) In "Creating a Storage Volume" there is the note that
"
You must create brick directories or mountpoints before you can add them to volumes.
"
This sentence is in my opinion not so clear (see also the details at the end)... what is a user expected to have done on the hypervisors? Create directories or file systems (and of what type: is xfs or ext4 preferred)? An example of the commands would be useful, along the lines of my guess below.
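For example, piecing things together from the Gluster Storage admin guide, I imagine something like the following on the hypervisor (the VG/LV/directory names are mine, and the sizes and alignment values surely depend on the underlying storage, so take this only as a sketch):

# physical volume with data alignment (value depends on the device/RAID layout)
pvcreate --dataalignment 256K /dev/nvme1n1
vgcreate gluster_vg_nvme1n1 /dev/nvme1n1

# thin pool plus thin LV for the brick
lvcreate --thinpool gluster_vg_nvme1n1/gluster_thinpool --size 3700G --chunksize 256K --poolmetadatasize 16G --zero n
lvcreate --thin --name gluster_lv_data2 --virtualsize 3700G gluster_vg_nvme1n1/gluster_thinpool

# XFS with 512-byte inodes, as the Gluster docs recommend, then the mountpoint
mkfs.xfs -f -i size=512 /dev/gluster_vg_nvme1n1/gluster_lv_data2
mkdir -p /gluster_bricks/data2
mount /dev/gluster_vg_nvme1n1/gluster_lv_data2 /gluster_bricks/data2
mkdir /gluster_bricks/data2/data2   # the actual brick directory

# SELinux labeling for the brick
semanage fcontext -a -t glusterd_brick_t "/gluster_bricks/data2(/.*)?"
restorecon -Rv /gluster_bricks/data2

If something along these lines were in the page, it would remove all ambiguity.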
4) In the workflow of creating a volume, item 7:
"Click the Add Bricks button to select bricks to add to the volume. Bricks must be created externally on the Gluster Storage nodes."
Here too I would expect an indication of the commands to be run... This implies knowledge of Gluster that the GUI functionality is meant to hide... but evidently not completely...
Going to the advanced details of my existing bricks, I can see:
"
xfs
rw,seclabel,noatime,nodiratime,attr2,inode64,logbsize=128k,sunit=256,swidth=512,noquota
"
There is also the fstab portion of creating bricks to document...
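i.e. I suppose a line like this (my guess, modeled on the mount options shown above):

/dev/gluster_vg_nvme1n1/gluster_lv_data2 /gluster_bricks/data2 xfs inode64,noatime,nodiratime 0 0

but the docs don't spell it out.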
5) In the workflow of creating a volume, item 9:
It is not clear whether the default "*" value for access is the recommended one or not.
I presume not.
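If so, I guess one would restrict it after creation with something like this (assuming that field maps to the auth.allow volume option; the network is made up):

gluster volume set data2 auth.allow 192.168.1.*   # only these hosts may mount the volume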
- RHV documentation
4.3 Administrator Guide downloaded today, Chapter 8, section 8.6, "PREPARING AND ADDING RED HAT GLUSTER STORAGE".
There is a reference to Red Hat Gluster Storage version 3.4.
I think it should be 3.5 instead, because the 3.4 docs in turn reference RHV 4.1, not 4.3, while "Configuring Red Hat Virtualization with Red Hat Gluster Storage" version 3.5 correctly references RHV 4.3.
Anyway, this mix of product documentation is not optimal in my opinion.
I would include the Gluster part relevant to RHV directly inside the RHV docs, without jumping between the two complete guides, with the risk of them drifting out of alignment over time.
Also, in the Gluster Storage guide referred to, there is the part related to volumes but not the one related to bricks...
Possibly the correct reference for the brick part, which is the one missing from the webadmin GUI, could be this:
and/or this:
?
Thanks for reading, and I hope to get a command workflow for configuring this new disk as a new Gluster storage domain.
The guides seem to oversimplify the process of creating bricks, which according to the documentation are actually XFS filesystems mounted on thin-pool-based logical volumes sitting on top of volume groups, with particular alignment settings specified at creation time... My overall guess is sketched below.
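To be concrete, here is my current guess of the remaining CLI steps once the brick exists, for a single-host volume I'd call data2 (the hostname and names are made up; please correct me if the option set is wrong):

# single-brick volume on the new disk
gluster volume create data2 transport tcp ovirt01.example.com:/gluster_bricks/data2/data2

# settings for virtualization workloads and for vdsm ownership
gluster volume set data2 group virt
gluster volume set data2 storage.owner-uid 36   # vdsm user
gluster volume set data2 storage.owner-gid 36   # kvm group
gluster volume start data2

and then attach it from the webadmin GUI as a new GlusterFS storage domain.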
Gianluca