Awesome, thanks! And yes, I agree, this is a great project!
I will now continue to scale the cluster from 3 to 6 nodes, including
the storage... I will let y'all know how it goes and post the steps, as
I have only seen examples with 3 hosts but no steps for going from 3 to 6.
regards,
AQ
On Tue, May 21, 2019 at 1:06 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
> EUREKA: After doing the above I was able to get past the filter issues,
> however I am still concerned that during a reboot the disks might come
> up differently. For example /dev/sdb might come up as /dev/sdx...
Even if they change, you don't have to worry, as each PV contains LVM
metadata (including the VG configuration) which is read by LVM on boot
(actually, everything that is not rejected by the LVM filter is scanned
like that).
Once all PVs are available, the VG is activated and then the LVs are
activated as well.
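A quick sanity check (assuming a standard LVM setup; device names are
just examples):

  # List PVs with their VG and UUID; LVM matches on the UUID stored in
  # the on-disk metadata, not on the /dev/sdX name:
  pvs -o pv_name,vg_name,pv_uuid
  # If you want stable names for the filter, the symlinks under
  # /dev/disk/by-id/ survive reboots, unlike /dev/sdX:
  ls -l /dev/disk/by-id/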
> I am trying to make sure this setup is always the same as we want to
> move this to production, however it seems I still don't have the full
> hang of it and the RHV 4.1 course is way too old :)
>
> Thanks again for helping out with this.
It's plain KVM with a management layer on top.
Just a hint:
Get your HostedEngine's configuration XML from the vdsm log (for
emergencies), plus another copy with reversed boot order where the DVD
boots first. Also grab the XML for the ovirtmgmt network (example
commands below).
It has helped me many times when I needed to recover my HostedEngine;
I'm too lazy to rebuild it.
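For example (a sketch; "HostedEngine" and "vdsm-ovirtmgmt" are the names
as they appear on my oVirt hosts, adjust if yours differ):

  # Read-only virsh does not need vdsm's SASL credentials:
  virsh -r dumpxml HostedEngine > /root/HostedEngine.xml
  # The ovirtmgmt bridge is defined in libvirt as vdsm-ovirtmgmt:
  virsh -r net-dumpxml vdsm-ovirtmgmt > /root/ovirtmgmt.xml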
Hint 2:
The vdsm logs contain each VM's configuration XML from when the VM was
powered on.
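You can fish a VM's XML out with something like this (default vdsm log
location):

  # Each VM start logs the full libvirt domain XML:
  grep '<domain' /var/log/vdsm/vdsm.log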
Hint 3:
Take regular backups of the HostedEngine and patch it from time to time.
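Something like this on the HostedEngine VM (a sketch using the standard
engine-backup tool; file names are just examples):

  engine-backup --mode=backup --scope=all \
    --file=/root/engine-backup-$(date +%F).tar.bz2 \
    --log=/root/engine-backup-$(date +%F).log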
I would go to prod as follows:
Let's say you are on 4.2.8.
The next step would be to go to 4.3.latest and then to 4.4.latest.
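Roughly, each hop looks like this on the engine VM (the 4.2 -> 4.3 hop
shown; a sketch only, double-check the upgrade guide for your exact
versions):

  yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
  engine-upgrade-check        # reports whether an upgrade is available
  yum update ovirt\*setup\*   # pull in the new setup packages first
  engine-setup                # runs the actual upgrade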
A test cluster (even in VMs) is also beneficial.
Despite the hiccups I have run into, I think the project is great.
Best Regards,
Strahil Nikolov
--
Adrian Quintero