On April 6, 2020 5:29:10 PM GMT+03:00, Gianluca Cecchi <gianluca.cecchi(a)gmail.com>
wrote:
Hello,
I think that in both the oVirt and Red Hat official docs, the steps for adding a new Gluster storage domain to an existing installation are quite confusing and misaligned. Below I explain my reasons for both.
In the meantime, suppose I have a single-host HCI setup with self-hosted engine (though hints for multi-host setups would be useful too) and the three storage domains configured during install (engine and data on one disk, vmstore on another). The system initially had three disks: the first used for the ovirt-node-ng system, the second for the engine and data initial Gluster storage domains, and the third for vmstore.
Now I go and add a fourth disk, say 4 TB in size, and I would like to create a new Gluster storage domain on it. What are the suggested steps?
BTW: after booting, the new disk was automatically included in multipath:
eui.01000000010000005cd2e4b5e7db4d51 dm-6 NVME,INTEL SSDPEDKX040T7
size=3.6T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
`- 1:0:1:0 nvme1n1 259:1 active undef running
Coming back to the docs:
- oVirt documentation
I base my considerations on this page:
https://www.ovirt.org/documentation/admin-guide/chap-Working_with_Gluster...
1) The screenshot on that page is still based on the 3.6 GUI, so at first glance the page feels like it might not be up to date.
2) I would put the section "Creating a Storage Volume" before "Attaching a Gluster Storage Volume as a Storage Domain", not the opposite as it is now.
3) In "Creating a Storage Volume" there is the note that
"
You must create brick directories or mountpoints before you can add
them to
volumes.
"
This sentence in my opinion is not so clear (see also details at the
end)... what is a user expected to have done on hypervisors? Creation
of
directories or file systems (and what type: prefered xfs or ext4?).
Perhaps
an example of commands would be useful.
4) In the workflow of creating a volume, item 7 says:
Click the Add Bricks button to select bricks to add to the volume. Bricks must be created externally on the Gluster Storage nodes.
I would expect an indication of the commands to be run instead... This implies knowledge of Gluster that the GUI functionality is supposed to hide... but apparently not completely...
For my existing bricks, going to their advanced details, I can see:
"
xfs
rw,seclabel,noatime,nodiratime,attr2,inode64,logbsize=128k,sunit=256,swidth=512,noquota
"
There is also the fstab part involved in creating bricks...
5) In the workflow of creating a volume, item 9:
It is not clear whether the default "*" value for allowed access is the recommended one or not. I presume it is not.
- RHV documentation
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/...
4.3 Administrator Guide, downloaded today
Chapter 8, section 8.6
PREPARING AND ADDING RED HAT GLUSTER STORAGE
There is a reference to Red Hat Gluster Storage version 3.4. I think it should be 3.5 instead, because the 3.4 docs in turn reference RHV 4.1, not 4.3,
while "Configuring Red Hat Virtualization with Red Hat Gluster Storage" version 3.5 correctly references RHV 4.3.
Anyway, this mix of product documentation is not optimal in my opinion. I would include the relevant Gluster part directly inside the RHV docs, without jumping between the two complete guides, with the risk of them drifting out of alignment over time.
Also, the referenced Gluster Storage guide contains the part related to volumes but not the one related to bricks...
Possibly the correct reference for the brick part, which is the one missing from the webadmin GUI, could be this:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5...
and/or this:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5...
?
Thanks for reading, and I hope to get a command workflow for configuring this new disk as a new Gluster storage domain.
The guides seem to oversimplify the process of creating bricks, which according to the documentation are actually XFS filesystems mounted on thin-pool-based logical volumes residing on top of volume groups, with particular alignment settings specified during their creation...
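From what I can piece together from the Gluster docs, the brick preparation would be something roughly like this (device name, sizes and alignment values are only my guesses and would need confirmation):

pvcreate --dataalignment 256K /dev/nvme1n1
vgcreate gluster_vg_nvme1n1 /dev/nvme1n1
lvcreate --thin --size 3500G --chunksize 256K --poolmetadatasize 16G --zero n gluster_vg_nvme1n1/gluster_thinpool
lvcreate --thin --virtualsize 3500G --name gluster_lv_newdomain gluster_vg_nvme1n1/gluster_thinpool
mkfs.xfs -f -i size=512 -n size=8192 /dev/gluster_vg_nvme1n1/gluster_lv_newdomain

Is that the intended procedure?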
Gianluca
Hi Gianluca,
Actually the situation is just like Ceph & OpenStack...
You have OpenStack (in our case oVirt), which can manage basic storage tasks, but many administrators do not rely on the UI for complex tasks. In order to properly run an HCI setup, some Gluster knowledge is "mandatory" (personal opinion - you will never find that word anywhere :) ).
In your case, you need to:
1. Blacklist the disk in multipath.conf. As that file is managed by VDSM, you need to put the special comment '# VDSM PRIVATE' (without the quotes!) in it, in order to prevent VDSM from overwriting your changes. I don't know if this is the best approach, yet it works for me. See the sketch below.
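For example, in /etc/multipath.conf (the WWID comes from your multipath output above; keep the marker comment near the top of the file):

# VDSM PRIVATE
blacklist {
    wwid eui.01000000010000005cd2e4b5e7db4d51
}

Then rebuild the maps with 'multipath -r'.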
2. Create a VDO volume (skip if not needed).
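For example (assuming the now-unmapped NVMe device; the name is arbitrary, and VDO only makes sense if you want deduplication/compression):

vdo create --name=vdo_newdomain --device=/dev/nvme1n1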
3. Create a PV from the VDO volume/disk/array.
4. Either add it to an existing VG or create a new one.
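Something like this (the VG name is a placeholder for whatever you pick):

pvcreate /dev/mapper/vdo_newdomain   # or /dev/nvme1n1 if you skipped VDO
vgcreate gluster_vg_newdomain /dev/mapper/vdo_newdomain
# or, to grow an existing VG instead:
# vgextend gluster_vg_existing /dev/mapper/vdo_newdomain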
5. Create a thin LVM pool and a thin LV (if you want gluster-level snapshots). I use this approach to snapshot my HostedEngine VM; I can give you the details in a separate thread.
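A sketch (sizes are examples for a 4 TB disk, tune them to your needs):

lvcreate --thin --size 3500G gluster_vg_newdomain/gluster_thinpool
lvcreate --thin --virtualsize 3500G --name gluster_lv_newdomain gluster_vg_newdomain/gluster_thinpool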
6. Create an XFS filesystem and define it either in fstab or in a systemd mount unit (the second option is better, as you can define dependencies). I would recommend these mount options:
noatime,nodiratime,context="system_u:object_r:glusterd_brick_t:s0"
Keep the quotes, and mount the brick on all nodes.
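Roughly like this (paths and names are examples; note that a mount unit's file name must match its Where= path):

mkfs.xfs -f -i size=512 /dev/gluster_vg_newdomain/gluster_lv_newdomain
mkdir -p /gluster_bricks/newdomain

# /etc/systemd/system/gluster_bricks-newdomain.mount
[Unit]
Description=Brick for the new Gluster storage domain
# drop the vdo.service lines if you skipped VDO
Requires=vdo.service
After=vdo.service

[Mount]
What=/dev/gluster_vg_newdomain/gluster_lv_newdomain
Where=/gluster_bricks/newdomain
Type=xfs
Options=noatime,nodiratime,context="system_u:object_r:glusterd_brick_t:s0"

[Install]
WantedBy=multi-user.target

Then 'systemctl daemon-reload && systemctl enable --now gluster_bricks-newdomain.mount'.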
I assumed that you are adding bricks on the same HCI nodes, but that could be a bad assumption. If not, you will need to extend the storage pool (peer the new nodes) first and then create your volume.
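Volume creation from the CLI would look something like this (hostname and volume name are placeholders; for a 3-node setup use 'replica 3' with one brick per host):

mkdir /gluster_bricks/newdomain/brick
gluster volume create newdomain host1:/gluster_bricks/newdomain/brick
gluster volume set newdomain group virt
gluster volume set newdomain storage.owner-uid 36
gluster volume set newdomain storage.owner-gid 36
gluster volume start newdomain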
7. Last, create a storage domain via the API or the UI.
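Via the REST API it would be roughly like this (an untested sketch; engine URL, credentials and names are placeholders):

curl -k -u 'admin@internal:password' -X POST \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -d '<storage_domain>
        <name>newdomain</name>
        <type>data</type>
        <storage>
          <type>glusterfs</type>
          <address>host1</address>
          <path>/newdomain</path>
        </storage>
        <host><name>host1</name></host>
      </storage_domain>' \
  https://engine.example.com/ovirt-engine/api/storagedomains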
In the end you can use storage migration (if you are not using qemu's libgfapi integration) to start utilizing the new storage without any downtime.
P.S.: Documentation contributions are welcome, and if I have some time I will be able to add some of my experience :)
Best Regards,
Strahil Nikolov