<div dir="ltr">Hello,<div>I have an environment with 3 hosts and Gluster HCI on 4.1.3.</div><div>I&#39;m following this link to take it to 4.1.7:</div><div><a href="https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine">https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine</a><br></div><div><br></div><div>The hosts and the engine were on 7.3 prior to beginning the update.</div><div>Everything went OK for the engine, which is now on 7.4 (not rebooted yet).</div><div>Steps 4, 5, and 6 for the first updated host were replaced by rebooting it.</div><div><br></div><div>I&#39;m at step:</div><div><br></div><div>7. Exit the global maintenance mode: in a few minutes the engine VM should migrate to the freshly upgraded host because it will get a higher score</div><div><br></div><div>One note: exiting from global maintenance doesn&#39;t imply that a host previously put into maintenance also exits from it, correct?</div><div>So in my workflow, before step 7, I selected the host and activated it.</div><div><br></div><div>Currently the situation is this:</div><div>- engine running on ovirt02</div><div>- update happened on ovirt03</div><div><br></div><div><div>After exiting from global maintenance, I don&#39;t see the engine VM migrating to it.</div></div><div>And in fact (see below) the score of ovirt02 (3400) is the same as that of ovirt03, so it seems correct that the engine remains there...?</div><div><br></div><div>What kind of messages should I see in the engine/host logs?<br></div><div><br></div><div><div>[root@ovirt01 ~]# rpm -q vdsm<br></div><div>vdsm-4.19.20-1.el7.centos.x86_64</div><div><br></div><div>[root@ovirt02 ~]# rpm -q vdsm</div><div>vdsm-4.19.20-1.el7.centos.x86_64</div><div>[root@ovirt02 ~]# </div></div><div><br></div><div><div>[root@ovirt03 ~]# rpm -q vdsm</div><div>vdsm-4.19.37-1.el7.centos.x86_64</div></div><div><br></div><div>From host
ovirt01:</div><div><br></div><div><div>[root@ovirt01 ~]# hosted-engine --vm-status</div><div><br></div><div><br></div><div>--== Host 1 status ==--</div><div><br></div><div>conf_on_shared_storage             : True</div><div>Status up-to-date                  : True</div><div>Hostname                           : ovirt01.localdomain.local</div><div>Host ID                            : 1</div><div>Engine status                      : {&quot;reason&quot;: &quot;vm not running on this host&quot;, &quot;health&quot;: &quot;bad&quot;, &quot;vm&quot;: &quot;down&quot;, &quot;detail&quot;: &quot;unknown&quot;}</div><div>Score                              : 3352</div><div>stopped                            : False</div><div>Local maintenance                  : False</div><div>crc32                              : 256f2128</div><div>local_conf_timestamp               : 12251210</div><div>Host timestamp                     : 12251178</div><div>Extra metadata (valid at timestamp):</div><div><span style="white-space:pre">        </span>metadata_parse_version=1</div><div><span style="white-space:pre">        </span>metadata_feature_version=1</div><div><span style="white-space:pre">        </span>timestamp=12251178 (Tue Nov 28 10:11:20 2017)</div><div><span style="white-space:pre">        </span>host-id=1</div><div><span style="white-space:pre">        </span>score=3352</div><div><span style="white-space:pre">        </span>vm_conf_refresh_time=12251210 (Tue Nov 28 10:11:52 2017)</div><div><span style="white-space:pre">        </span>conf_on_shared_storage=True</div><div><span style="white-space:pre">        </span>maintenance=False</div><div><span style="white-space:pre">        </span>state=EngineDown</div><div><span style="white-space:pre">        </span>stopped=False</div><div><br></div><div><br></div><div>--== Host 2 status ==--</div><div><br></div><div>conf_on_shared_storage             : True</div><div>Status up-to-date                  : True</div><div>Hostname              
             : 192.168.150.103</div><div>Host ID                            : 2</div><div>Engine status                      : {&quot;health&quot;: &quot;good&quot;, &quot;vm&quot;: &quot;up&quot;, &quot;detail&quot;: &quot;up&quot;}</div><div>Score                              : 3400</div><div>stopped                            : False</div><div>Local maintenance                  : False</div><div>crc32                              : 9b8c8a6c</div><div>local_conf_timestamp               : 12219386</div><div>Host timestamp                     : 12219357</div><div>Extra metadata (valid at timestamp):</div><div><span style="white-space:pre">        </span>metadata_parse_version=1</div><div><span style="white-space:pre">        </span>metadata_feature_version=1</div><div><span style="white-space:pre">        </span>timestamp=12219357 (Tue Nov 28 10:11:23 2017)</div><div><span style="white-space:pre">        </span>host-id=2</div><div><span style="white-space:pre">        </span>score=3400</div><div><span style="white-space:pre">        </span>vm_conf_refresh_time=12219386 (Tue Nov 28 10:11:52 2017)</div><div><span style="white-space:pre">        </span>conf_on_shared_storage=True</div><div><span style="white-space:pre">        </span>maintenance=False</div><div><span style="white-space:pre">        </span>state=EngineUp</div><div><span style="white-space:pre">        </span>stopped=False</div><div><br></div><div><br></div><div>--== Host 3 status ==--</div><div><br></div><div>conf_on_shared_storage             : True</div><div>Status up-to-date                  : True</div><div>Hostname                           : ovirt03.localdomain.local</div><div>Host ID                            : 3</div><div>Engine status                      : {&quot;reason&quot;: &quot;vm not running on this host&quot;, &quot;health&quot;: &quot;bad&quot;, &quot;vm&quot;: &quot;down&quot;, &quot;detail&quot;: &quot;unknown&quot;}</div><div>Score                              : 
3400</div><div>stopped                            : False</div><div>Local maintenance                  : False</div><div>crc32                              : 9f6399ef</div><div>local_conf_timestamp               : 2136</div><div>Host timestamp                     : 2136</div><div>Extra metadata (valid at timestamp):</div><div><span style="white-space:pre">        </span>metadata_parse_version=1</div><div><span style="white-space:pre">        </span>metadata_feature_version=1</div><div><span style="white-space:pre">        </span>timestamp=2136 (Tue Nov 28 10:11:56 2017)</div><div><span style="white-space:pre">        </span>host-id=3</div><div><span style="white-space:pre">        </span>score=3400</div><div><span style="white-space:pre">        </span>vm_conf_refresh_time=2136 (Tue Nov 28 10:11:56 2017)</div><div><span style="white-space:pre">        </span>conf_on_shared_storage=True</div><div><span style="white-space:pre">        </span>maintenance=False</div><div><span style="white-space:pre">        </span>state=EngineDown</div><div><span style="white-space:pre">        </span>stopped=False</div><div>[root@ovirt01 ~]# </div></div><div><br></div><div>Can I manually migrate the engine VM to ovirt03?</div><div><br></div><div><br></div><div>On ovirt03:</div><div><br></div><div><div>[root@ovirt03 ~]# gluster volume info engine</div><div> </div><div>Volume Name: engine</div><div>Type: Replicate</div><div>Volume ID: 6e2bd1d7-9c8e-4c54-9d85-f36e1b871771</div><div>Status: Started</div><div>Snapshot Count: 0</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt01.localdomain.local:/gluster/brick1/engine</div><div>Brick2: ovirt02.localdomain.local:/gluster/brick1/engine</div><div>Brick3: ovirt03.localdomain.local:/gluster/brick1/engine (arbiter)</div><div>Options Reconfigured:</div><div>performance.strict-o-direct: on</div><div>nfs.disable: on</div><div>user.cifs: off</div><div>network.ping-timeout:
30</div><div>cluster.shd-max-threads: 6</div><div>cluster.shd-wait-qlength: 10000</div><div>cluster.locking-scheme: granular</div><div>cluster.data-self-heal-algorithm: full</div><div>performance.low-prio-threads: 32</div><div>features.shard-block-size: 512MB</div><div>features.shard: on</div><div>storage.owner-gid: 36</div><div>storage.owner-uid: 36</div><div>cluster.server-quorum-type: server</div><div>cluster.quorum-type: auto</div><div>network.remote-dio: off</div><div>cluster.eager-lock: enable</div><div>performance.stat-prefetch: off</div><div>performance.io-cache: off</div><div>performance.read-ahead: off</div><div>performance.quick-read: off</div><div>performance.readdir-ahead: on</div><div>transport.address-family: inet</div><div>[root@ovirt03 ~]# </div></div><div><br></div><div><div>[root@ovirt03 ~]# gluster volume heal engine info</div><div>Brick ovirt01.localdomain.local:/gluster/brick1/engine</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick ovirt02.localdomain.local:/gluster/brick1/engine</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick ovirt03.localdomain.local:/gluster/brick1/engine</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>[root@ovirt03 ~]# </div></div><div><br></div><div>Thanks,</div><div><br></div><div>Gianluca</div><div><br></div></div>