hi,

it works well. I have created the LVM partitions like on onode01 and I have added the bricks from onode02 and onode03.
After that I added the hosts into oVirt and it works well too.

But to add the nodes I had to remove the oVirt yum repos from /etc/yum.repos.d, so that it would not try to download a newer version.
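As an example, simply disabling the repos instead of deleting them should work as well; the file names here are only an assumption about the stock layout:

# keep the repo files around, just out of yum's sight
mkdir -p /root/disabled-repos
mv /etc/yum.repos.d/ovirt-*.repo /root/disabled-repos/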

But I have now only one open point: the volumes with three bricks are there, but only the hosted engine domain is there. The iso and vmstore domains were not there. Strange. I think they were available on the single node, but I will try to reinstall all three servers again to be sure that it works.

Thanks.

br
marcel

On 29 October 2020 18:01:48 CET, Strahil Nikolov via Users <users@ovirt.org> wrote:
I guess the gluster documentation at https://docs.gluster.org/.

If you have a single-brick volume of type Distributed, then you can add 2 more bricks and convert the volume to 'replica 3' or 'replica 3 arbiter 1' via:

Get syntax:
gluster volume add-brick help

Actual conversion:
gluster volume add-brick <VOLUME-even-Hosted-Engines> replica 3 [arbiter 1] ovirt2:/path/to-brick ovirt3:/path/to-brick

If you keep the 'arbiter 1' part (without the square brackets), so the type is 'replica 3 arbiter 1', the second new brick will be treated as an arbiter.
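As a quick sanity check after the conversion (the volume name is just a placeholder):

gluster volume info <VOLNAME>        # Type should now show Replicate (with the arbiter count if used)
gluster volume heal <VOLNAME> info   # wait until the heal of the new bricks has finished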

WARNING: You need to create your bricks with an inode size of at least 512, e.g. 'mkfs.xfs -i size=512 /dev/vg/lv'. If you wish to use the snapshot functionality (I use it to snapshot my Engine's volume before patching), all bricks in the volume must be on thin LVs (which I think is not the default in the ansible plays).
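A minimal sketch of such a thin-LV brick, assuming a VG called gluster_vg and example sizes/paths:

lvcreate -L 500G -T gluster_vg/brick_pool                # thin pool
lvcreate -V 450G -T gluster_vg/brick_pool -n brick_lv    # thin LV for the brick
mkfs.xfs -i size=512 /dev/gluster_vg/brick_lv            # 512-byte inodes as noted above
mkdir -p /gluster_bricks/vmstore
mount /dev/gluster_vg/brick_lv /gluster_bricks/vmstore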
oVirt was quite sensitive to the block device's physical sector size (lsblk -t shows it as PHY-SEC). If yours is bigger than 512, you might (I'm not sure about newer versions of oVirt) need to use VDO with the 'emulate512' flag.
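For example (the device name is only an illustration):

lsblk -t /dev/sdb                                                  # check the PHY-SEC column
vdo create --name=vdo_sdb --device=/dev/sdb --emulate512=enabled   # only if PHY-SEC > 512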

Note: For an arbiter brick with an XFS file system, it's worth considering using an SSD and setting a higher percentage of inodes (option 'maxpct=' of mkfs.xfs).
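For example (the 80% value is only an illustration, not a recommendation):

mkfs.xfs -i size=512,maxpct=80 /dev/vg/arbiter_lv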

Best Regards,
Strahil Nikolov




On Wednesday, 28 October 2020, 00:33:22 GMT+2, marcel d'heureuse <marcel@deheureu.se> wrote:





Hi Strahil,

Where can I find some documents for the conversion to replica? Does this also work for the engine brick?

br
marcel

On 27 October 2020 16:40:59 CET, Strahil Nikolov via Users <users@ovirt.org> wrote:
Hello Gobinda,
I know that Gluster can easily convert a distributed volume to a replica volume, so why is it not possible to first convert to replica and then add the nodes as HCI?
Best Regards,
Strahil Nikolov

On Tuesday, 27 October 2020, 08:20:56 GMT+2, Gobinda Das <godas@redhat.com> wrote:

Hello Marcel,
For a note, you can't expand your single gluster node cluster to 3 nodes. You can only add compute nodes. If you want to add compute nodes then you do not need any glusterfs packages to be installed. Only oVirt packages are enough to add a host as a compute node.

On Tue, Oct 27, 2020 at 10:28 AM Parth Dhanjal <dparth@redhat.com> wrote:
> Hey Marcel!

You have to install the required glusterfs packages and then deploy the gluster setup on the 2 new hosts. After creating the required LVs, VGs, thinpools, mount points and bricks, you'll have to expand the Gluster cluster from the current host using the add-brick functionality of Gluster. After this you can add the 2 new hosts to your existing ovirt-engine.
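A rough sketch of the last two steps, run from the existing host (host, volume and brick names are only placeholders):

gluster peer probe host2
gluster peer probe host3
gluster volume add-brick engine replica 3 host2:/gluster_bricks/engine/brick host3:/gluster_bricks/engine/brick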

On Mon, Oct 26, 2020 at 7:40 PM Marcel d'Heureuse <marcel@deheureu.se> wrote:
Hi,

I have got a problem with my oVirt installation. Normally we deliver oVirt as a
single-node installation, and we told our service guys that if the internal
client wants more redundancy, we need two more servers and add them to
the single-node installation. I thought that no one would order two new
servers.

Now I have the problem of getting the system running.

The first item is that this environment has no internet access, so I can't
install or update software with yum.
The oVirt installation is running from the oVirt Node 4.3.9 boot USB stick. All
three servers have the same software installed.
On the single node I have installed the hosted engine package (1.1 GB) to
deploy the self-hosted engine without internet. That works.

Gluster, oVirt and the self-hosted engine are running on server 01.

What should I do first?

Deploy GlusterFS first and then add the two new hosts to the single-node
installation?
Or should I deploy a new oVirt system on the two new hosts and later add
the cleaned host to the new oVirt system?

I have not found anything in this mailing list that gives me an idea of
what I should do now.


Br
Marcel



Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPK4QQAYEILIEYQOUH4APN6MTZ5J3UR3/