On Fri, Jun 30, 2017 at 1:11 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello,
I'm going to try to update an HC (hyperconverged) environment to 4.1; it is currently on 4.0, with 3 CentOS 7.3 nodes, one of them configured as arbiter.

Any particular caveats in HC?
Are the steps below, normally used for Self-Hosted Engine environments, the only ones to consider?

- update the repos on the 3 hosts and on the engine VM
- enable global maintenance
- update the engine
- also update the OS packages of the engine VM
- shut down the engine VM
- disable global maintenance
- verify that the engine VM boots and functionality is OK (a rough command sketch of these steps follows)
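
Roughly, the corresponding commands would be something like the following (just a sketch, assuming the standard hosted-engine and yum tooling):

    # on the 3 hosts and on the engine VM: enable the 4.1 repos
    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm

    # on one host: enable global maintenance
    hosted-engine --set-maintenance --mode=global

    # on the engine VM: update the setup packages, rerun engine-setup, then the OS
    yum update "ovirt-*-setup*"
    engine-setup
    yum update

    # from a host: shut down the engine VM, exit global maintenance, verify
    hosted-engine --vm-shutdown
    hosted-engine --set-maintenance --mode=none
    hosted-engine --vm-status    # wait for the engine VM to come back up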

I went from 4.0.5 to 4.1.2.
All the steps above went well.

Then
- update the hosts: is the preferred way to do it from the GUI itself, which takes care of migrating VMs, maintenance and so on, or to proceed manually (as sketched below)?
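
By "manually" I mean something like this on each host, one at a time (just a sketch, assuming yum-based CentOS hosts): put the host into maintenance from the GUI, then on the host itself:

    yum update
    reboot    # if a new kernel or vdsm requires it

and activate the host again from the GUI afterwards.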

Is there a preferred order in which to update the hosts after updating the engine? Arbiter first, arbiter last, or does it not matter at all?

Are there any possible problems from having mismatched glusterfs package versions until I complete all 3 hosts? Any known bugs going from 4.0 to 4.1 with the related glusterfs components?
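
(For reference, the glusterfs version currently installed on each host can be checked with, e.g.:

    rpm -q glusterfs-server
    gluster --version
)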

Thanks in advance,
Gianluca

I basically followed the instructions here:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/upgrading_the_self-hosted_engine

and had no problems.
As for the sequence, I first updated the two hosts that hold the actual Gluster data, one by one, and finally the arbiter.
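Between one host and the next I waited for self-heal to complete, checking with something like this ("engine" and "data" stand for my volume names, adjust as needed):

    for v in engine data; do
        gluster volume heal $v info
    done

and only moved on when no entries were left to heal.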
My only doubt was about what to select in this dialog when you put a host into maintenance from the GUI:
https://drive.google.com/file/d/0BwoPbcrMv8mvSDRKQVo4QzVvbTQ/view?usp=sharing

I selected only the option to stop the Gluster service.
What about the other one? Perhaps a contextual tooltip when you mouse over the two options would be useful...

I expected the engine VM to migrate to the newly updated hosts, but it didn't happen. I don't know if I'm confusing this with another scenario...
The engine VM was running on the arbiter node, which was the last one to be updated, so when its turn came I manually migrated the engine VM to one of the already-upgraded hosts.

In the end I was also able to upgrade the cluster and DC compatibility levels from 4.0 to 4.1.

The only remaining problem I would like to address is that my Gluster network is shared with the ovirtmgmt one.
Can I move it now, with these updated packages?
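
For reference, the bricks currently use the ovirtmgmt hostnames, as can be seen with (volume name "data" is just an example):

    gluster volume info data | grep -i brick
    gluster peer status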

BTW: my environment is based on a single NUC6i5 with 32GB of RAM, running ESXi 6.0U2. The 3 oVirt HCI hosts are vSphere VMs, so the engine VM is an L2 guest.
But performance is quite good after installing haveged on it. Dunno if it would be useful to install haveged on the hypervisors too...
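(Installing it is straightforward on CentOS 7, assuming the EPEL repository is enabled:

    yum install haveged
    systemctl enable haveged
    systemctl start haveged
)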
I already set up a second NIC for the oVirt hosts: from the ESXi point of view it is a host-only network adapter, so it lives entirely in the memory of the ESXi hypervisor.
In oVirt I configured 4 VLANs on this new NIC (vlan1,2,3,4).
So it would be fine to have glusterfs configured on one of these VLANs instead of on the ovirtmgmt one.

Thanks in advance for any suggestion,

Gianluca