<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <p>Hi Simone,</p>
    <p>I'll respond inline.<br>
    </p>
    <br>
    <div class="moz-cite-prefix">Il 20/03/2017 11:59, Simone Tiraboschi
      ha scritto:<br>
    </div>
    <blockquote
cite="mid:CAN8-ONr2sC7nKPnDKh1j+rGRQ+kmKfODS6AJF9Wk3kYa8ujOcA@mail.gmail.com"
      type="cite">
      <div dir="ltr"><br>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">On Mon, Mar 20, 2017 at 11:15 AM,
            Simone Tiraboschi <span dir="ltr">&lt;<a
                moz-do-not-send="true" href="mailto:stirabos@redhat.com"
                target="_blank">stirabos@redhat.com</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
              0.8ex;border-left:1px solid
              rgb(204,204,204);padding-left:1ex">
              <div dir="ltr">
                <div class="gmail_extra"><br>
                  <div class="gmail_quote">
                    <div>
                      <div class="gmail-m_-4888476689598940363gmail-h5">On
                        Mon, Mar 20, 2017 at 10:12 AM, Paolo Margara <span
                          dir="ltr">&lt;<a moz-do-not-send="true"
                            href="mailto:paolo.margara@polito.it"
                            target="_blank">paolo.margara@polito.it</a>&gt;</span>
                        wrote:<br>
                        <blockquote class="gmail_quote"
                          style="margin:0px 0px 0px
                          0.8ex;border-left:1px solid
                          rgb(204,204,204);padding-left:1ex">Hi
                          Yedidyah,<br>
                          <span
                            class="gmail-m_-4888476689598940363gmail-m_7393503524263847303gmail-"><br>
                            On 19/03/2017 11:55, Yedidyah Bar David
                            wrote:<br>
                            &gt; On Sat, Mar 18, 2017 at 12:25 PM, Paolo
                            Margara &lt;<a moz-do-not-send="true"
                              href="mailto:paolo.margara@polito.it"
                              target="_blank">paolo.margara@polito.it</a>&gt;
                            wrote:<br>
                            &gt;&gt; Hi list,<br>
                            &gt;&gt;<br>
                            &gt;&gt; I'm working on a system running
                            oVirt 3.6, and the Engine is repeatedly<br>
                            &gt;&gt; reporting the warning "The Hosted Engine
                            Storage Domain doesn't exist. It should<br>
                            &gt;&gt; be imported into the setup."
                            in the Events tab of the Admin<br>
                            &gt;&gt; Portal.<br>
                            &gt;&gt;<br>
                            &gt;&gt; I've read on the list that the Hosted
                            Engine Storage Domain should be<br>
                            &gt;&gt; imported automatically into the
                            setup during the upgrade to 3.6<br>
                            &gt;&gt; (the original setup was on 3.5), but
                            this did not happen, although the<br>
                            &gt;&gt; HostedEngine is correctly visible
                            in the VM tab after the upgrade.<br>
                            &gt; Was the upgrade to 3.6 successful and
                            clean?<br>
                          </span>The upgrade from 3.5 to 3.6 was
                          successful, as were all subsequent minor<br>
                          release upgrades. I rechecked the upgrade logs
                          and haven't found any<br>
                          relevant errors.<br>
                          One additional piece of information: I'm currently
                          running on CentOS 7, and the<br>
                          original setup was also on this release.<br>
                          <div>
                            <div
                              class="gmail-m_-4888476689598940363gmail-m_7393503524263847303gmail-h5">&gt;<br>
                              &gt;&gt; The Hosted Engine Storage Domain
                              is on a dedicated gluster volume but,<br>
                              &gt;&gt; considering that, if I remember
                              correctly, oVirt 3.5 did not support<br>
                              &gt;&gt; gluster as a backend for the
                              HostedEngine at that time, I had<br>
                              &gt;&gt; installed the engine using
                              gluster's NFS server with<br>
                              &gt;&gt; 'localhost:/hosted-engine' as the
                              mount point.<br>
                              &gt;&gt;<br>
                              &gt;&gt; Currently, on every node, I can
                              see the following lines in the log<br>
                              &gt;&gt; of the ovirt-hosted-engine-ha agent:<br>
                              &gt;&gt;<br>
                              &gt;&gt; MainThread::INFO::2017-03-17<br>
                              &gt;&gt; 14:04:17,773::hosted_engine::462::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>
                              &gt;&gt; Current state EngineUp (score: 3400)<br>
                              &gt;&gt; MainThread::INFO::2017-03-17<br>
                              &gt;&gt; 14:04:17,774::hosted_engine::467::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>
                              &gt;&gt; Best remote host virtnode-0-1 (id: 2, score: 3400)<br>
                              &gt;&gt; MainThread::INFO::2017-03-17<br>
                              &gt;&gt; 14:04:27,956::hosted_engine::613::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>
                              &gt;&gt; Initializing VDSM<br>
                              &gt;&gt; MainThread::INFO::2017-03-17<br>
                              &gt;&gt; 14:04:28,055::hosted_engine::658::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)<br>
                              &gt;&gt; Connecting the storage<br>
                              &gt;&gt; MainThread::INFO::2017-03-17<br>
                              &gt;&gt; 14:04:28,078::storage_server::218::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)<br>
                              &gt;&gt; Connecting storage server<br>
                              &gt;&gt; MainThread::INFO::2017-03-17<br>
                              &gt;&gt; 14:04:28,278::storage_server::222::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)<br>
                              &gt;&gt; Connecting storage server<br>
                              &gt;&gt; MainThread::INFO::2017-03-17<br>
                              &gt;&gt; 14:04:28,398::storage_server::230::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)<br>
                              &gt;&gt; Refreshing the storage domain<br>
                              &gt;&gt; MainThread::INFO::2017-03-17<br>
                              &gt;&gt; 14:04:28,822::hosted_engine::685::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)<br>
                              &gt;&gt; Preparing images<br>
                              &gt;&gt; MainThread::INFO::2017-03-17<br>
                              &gt;&gt; 14:04:28,822::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)<br>
                              &gt;&gt; Preparing images<br>
                              &gt;&gt; MainThread::INFO::2017-03-17<br>
                              &gt;&gt; 14:04:29,308::hosted_engine::688::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)<br>
                              &gt;&gt; Reloading vm.conf from the shared storage domain<br>
                              &gt;&gt; MainThread::INFO::2017-03-17<br>
                              &gt;&gt; 14:04:29,309::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)<br>
                              &gt;&gt; Trying to get a fresher copy of vm configuration from the OVF_STORE<br>
                              &gt;&gt; MainThread::WARNING::2017-03-17<br>
                              &gt;&gt; 14:04:29,567::ovf_store::104::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)<br>
                              &gt;&gt; Unable to find OVF_STORE<br>
                              &gt;&gt; MainThread::ERROR::2017-03-17<br>
                              &gt;&gt; 14:04:29,691::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)<br>
                              &gt;&gt; Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf<br>
                              &gt; This is normal in your current state.<br>
                              &gt;<br>
                              &gt;&gt; ...and the following lines in
                              the logfile engine.log inside the Hosted<br>
                              &gt;&gt; Engine:<br>
                              &gt;&gt;<br>
                              &gt;&gt; 2017-03-16 07:36:28,087 INFO<br>
                              &gt;&gt; [org.ovirt.engine.core.bll.ImportHostedEngineStorageDomainCommand]<br>
                              &gt;&gt; (org.ovirt.thread.pool-8-thread-38)
                              [236d315c] Lock Acquired to object<br>
                              &gt;&gt; 'EngineLock:{exclusiveLocks='[]',
                              sharedLocks='null'}'<br>
                              &gt;&gt; 2017-03-16 07:36:28,115 WARN<br>
                              &gt;&gt; [org.ovirt.engine.core.bll.ImportHostedEngineStorageDomainCommand]<br>
                              &gt;&gt; (org.ovirt.thread.pool-8-thread-38)
                              [236d315c] CanDoAction of action<br>
                              &gt;&gt; 'ImportHostedEngineStorageDomain'
                              failed for user SYSTEM. Reasons:<br>
                              &gt;&gt; VAR__ACTION__ADD,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_NOT_EXIST<br>
                              &gt; That's the thing to debug. Did you
                              check vdsm logs on the hosts, near<br>
                              &gt; the time this happens?<br>
                            </div>
                          </div>
                          A few moments earlier I saw the following lines
                          in the vdsm.log of the<br>
                          host that runs the hosted engine and that
                          is the SPM, but I see the<br>
                          same lines on the other nodes as well:<br>
                          <br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,412::task::595::Storage.TaskManager.Task::(_updateState)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::moving from state init -&gt; state preparing<br>
                          Thread-1746094::INFO::2017-03-16<br>
                          07:36:00,413::logUtils::48::dispatcher::(wrapper) Run and protect:<br>
                          getImagesList(sdUUID='3b5db584-5d21-41dc-8f8d-712ce9423a27', options=None)<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,413::resourceManager::199::Storage.ResourceManager.Request::(__init__)<br>
                          ResName=`Storage.3b5db584-5d21-41dc-8f8d-712ce9423a27`ReqID=`8ea3c7f3-8ccd-4127-96b1-ec97a3c7b8d4`::Request<br>
                          was made in '/usr/share/vdsm/storage/hsm.py' line '3313' at 'getImagesList'<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,413::resourceManager::545::Storage.ResourceManager::(registerResource)<br>
                          Trying to register resource 'Storage.3b5db584-5d21-41dc-8f8d-712ce9423a27' for lock type 'shared'<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,414::resourceManager::604::Storage.ResourceManager::(registerResource)<br>
                          Resource 'Storage.3b5db584-5d21-41dc-8f8d-712ce9423a27' is free. Now locking as 'shared' (1 active user)<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,414::resourceManager::239::Storage.ResourceManager.Request::(grant)<br>
                          ResName=`Storage.3b5db584-5d21-41dc-8f8d-712ce9423a27`ReqID=`8ea3c7f3-8ccd-4127-96b1-ec97a3c7b8d4`::Granted request<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,414::task::827::Storage.TaskManager.Task::(resourceAcquired)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::_resourcesAcquired: Storage.3b5db584-5d21-41dc-8f8d-712ce9423a27 (shared)<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,414::task::993::Storage.TaskManager.Task::(_decref)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::ref 1 aborting False<br>
                          Thread-1746094::ERROR::2017-03-16<br>
                          07:36:00,415::task::866::Storage.TaskManager.Task::(_setError)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::Unexpected error<br>
                          Traceback (most recent call last):<br>
                            File "/usr/share/vdsm/storage/task.py", line 873, in _run<br>
                              return fn(*args, **kargs)<br>
                            File "/usr/share/vdsm/logUtils.py", line 49, in wrapper<br>
                              res = f(*args, **kwargs)<br>
                            File "/usr/share/vdsm/storage/hsm.py", line 3315, in getImagesList<br>
                              images = dom.getAllImages()<br>
                            File "/usr/share/vdsm/storage/fileSD.py", line 373, in getAllImages<br>
                              self.getPools()[0],<br>
                          IndexError: list index out of range<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,415::task::885::Storage.TaskManager.Task::(_run)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::Task._run: ae5af1a1-207c-432d-acfa-f3e03e014ee6<br>
                          ('3b5db584-5d21-41dc-8f8d-712ce9423a27',) {} failed - stopping task<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,415::task::1246::Storage.TaskManager.Task::(stop)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::stopping in state preparing (force False)<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,416::task::993::Storage.TaskManager.Task::(_decref)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::ref 1 aborting True<br>
                          Thread-1746094::INFO::2017-03-16<br>
                          07:36:00,416::task::1171::Storage.TaskManager.Task::(prepare)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::aborting: Task is aborted: u'list index out of range' - code 100<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,416::task::1176::Storage.TaskManager.Task::(prepare)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::Prepare: aborted: list index out of range<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,416::task::993::Storage.TaskManager.Task::(_decref)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::ref 0 aborting True<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,416::task::928::Storage.TaskManager.Task::(_doAbort)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::Task._doAbort: force False<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,416::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll)<br>
                          Owner.cancelAll requests {}<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,417::task::595::Storage.TaskManager.Task::(_updateState)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::moving from state preparing -&gt; state aborting<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,417::task::550::Storage.TaskManager.Task::(__state_aborting)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::_aborting: recover policy none<br>
                          Thread-1746094::DEBUG::2017-03-16<br>
                          07:36:00,417::task::595::Storage.TaskManager.Task::(_updateState)<br>
                          Task=`ae5af1a1-207c-432d-acfa-f3e03e014ee6`::moving from state aborting -&gt; state failed<br>
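The traceback boils down to this: fileSD.getAllImages() reads the first entry of the domain's pool list via self.getPools()[0], and a hosted-engine domain that was never imported into the datacenter has pool = [], so the lookup raises IndexError. A minimal sketch of that failure pattern (not vdsm's actual code; the class and pool values here are invented for illustration):

```python
# Toy model of a file storage domain, mimicking the failing call chain
# from the vdsm traceback above (fileSD.py: self.getPools()[0]).

class FileStorageDomain:
    def __init__(self, pools):
        self.pools = pools  # UUIDs of pools this domain is attached to

    def getPools(self):
        return self.pools

    def getAllImages(self):
        # Mirrors the failing line: assumes at least one attached pool.
        first_pool = self.getPools()[0]
        return "images under pool %s" % first_pool

attached = FileStorageDomain(["0002-0002-0002"])  # normal data domain
orphan = FileStorageDomain([])  # un-imported hosted-engine domain: pool = []

print(attached.getAllImages())
try:
    orphan.getAllImages()
except IndexError as e:
    print("IndexError:", e)  # "list index out of range", as in the log
```

This matches the `pool = []` attribute visible in the getStorageDomainInfo output later in this thread: the exception is a symptom of the domain not being attached to any pool, not a storage corruption.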
                          <br>
                          After that I ran a simple query
                          on the storage domains using<br>
                          vdsClient and got the following output:<br>
                          <br>
                          # vdsClient -s 0 getStorageDomainsList<br>
                          3b5db584-5d21-41dc-8f8d-712ce9423a27<br>
                          0966f366-b5ae-49e8-b05e-bee1895c2d54<br>
                          35223b83-e0bd-4c8d-91a9-8c6b85336e7d<br>
                          2c3994e3-1f93-4f2a-8a0a-0b5d388a2be7<br>
                          # vdsClient -s 0 getStorageDomainInfo
                          3b5db584-5d21-41dc-8f8d-712ce9423a27<br>
                              uuid = 3b5db584-5d21-41dc-8f8d-712ce9423a27<br>
                              version = 3<br>
                              role = Regular<br>
                              remotePath = localhost:/hosted-engine<br>
                        </blockquote>
                        <div><br>
                        </div>
                      </div>
                    </div>
                    <div>Your issue is probably here: by design, all the
                      hosts of a single datacenter should be able to see
                      all the storage domains, including the
                      hosted-engine one, but if you try to mount it as
                      localhost:/hosted-engine this will not be
                      possible.</div>
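To make the point concrete: a storage path of the form localhost:/hosted-engine resolves to a different server on every host, so no two hosts are actually looking at the same endpoint. A toy illustration (not oVirt code; the host names are the ones from the logs above):

```python
# Illustrates why "localhost:" storage paths break a shared datacenter:
# each host mounting "localhost:/export" talks to itself, not to one
# common storage server.

def effective_endpoint(host, storage_path):
    """Return the (server, export) pair a given host would actually mount."""
    server, export = storage_path.split(":", 1)
    if server == "localhost":
        server = host  # loopback: every host mounts its own server
    return server, export

hosts = ["virtnode-0-0", "virtnode-0-1"]

# Loopback path: every host ends up on a different server.
print({h: effective_endpoint(h, "localhost:/hosted-engine") for h in hosts})

# Explicit server: every host sees the same storage.
print({h: effective_endpoint(h, "virtnode-0-0:/engine") for h in hosts})
```

The engine's auto-import runs on whichever host is convenient and expects every host to reach the same domain, which the loopback mount makes impossible.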
                    <span class="gmail-m_-4888476689598940363gmail-">
                      <div> </div>
                      <blockquote class="gmail_quote" style="margin:0px
                        0px 0px 0.8ex;border-left:1px solid
                        rgb(204,204,204);padding-left:1ex">
                            type = NFS<br>
                            class = Data<br>
                            pool = []<br>
                            name = default<br>
                        # vdsClient -s 0 getImagesList
                        3b5db584-5d21-41dc-8f8d-712ce9423a27<br>
                        list index out of range<br>
                        <br>
                        All the other storage domains have the pool
                        attribute defined; could this<br>
                        be the issue? How can I assign the Hosted
                        Engine Storage Domain to a pool?<br>
                      </blockquote>
                      <div><br>
                      </div>
                    </span>
                    <div>This will be the result of the auto-import
                      process, once it is feasible.</div>
                    <span class="gmail-m_-4888476689598940363gmail-">
                      <div> </div>
                      <blockquote class="gmail_quote" style="margin:0px
                        0px 0px 0.8ex;border-left:1px solid
                        rgb(204,204,204);padding-left:1ex">
                        <div
class="gmail-m_-4888476689598940363gmail-m_7393503524263847303gmail-HOEnZb">
                          <div
                            class="gmail-m_-4888476689598940363gmail-m_7393503524263847303gmail-h5">&gt;<br>
                            &gt;&gt; 2017-03-16 07:36:28,116 INFO<br>
                            &gt;&gt; [org.ovirt.engine.core.bll.ImportHostedEngineStorageDomainCommand]<br>
                            &gt;&gt; (org.ovirt.thread.pool-8-thread-38)
                            [236d315c] Lock freed to object<br>
                            &gt;&gt; 'EngineLock:{exclusiveLocks='[]',
                            sharedLocks='null'}'<br>
                            &gt;&gt;<br>
                            &gt;&gt; How can I safely import the Hosted
                            Engine Storage Domain into my setup?<br>
                            &gt;&gt; In this situation, is it safe to
                            upgrade to oVirt 4.0?<br>
                          </div>
                        </div>
                      </blockquote>
                    </span></div>
                </div>
              </div>
            </blockquote>
            <div><br>
            </div>
            <div>This could be really tricky: upgrading a
              hosted-engine environment deployed on 3.5 in a
              hyperconverged setup, but mounted over NFS on a
              localhost loopback mount, to 4.1 is far outside
              the paths we tested, so I think you may hit a few surprises
              there.</div>
            <div><br>
            </div>
            <div>In 4.1 the expected configuration under
              /etc/ovirt-hosted-engine/hosted-engine.conf includes:</div>
            <div>domainType=glusterfs<br>
            </div>
            <div>storage=&lt;FIRST_HOST_ADDR&gt;:/path<br>
            </div>
            <div>mnt_options=backup-volfile-servers=&lt;SECOND_HOST_ADDR&gt;:&lt;THIRD_HOST_ADDR&gt;
</div><div>
</div><div>But this requires more recent vdsm and ovirt-hosted-engine-ha versions.</div><div>You also have to configure your engine to have both virt and gluster on the same cluster.</div><div>Nothing is going to do this automatically for you on upgrade.</div><div>
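As a quick sanity check, the three keys above can be verified mechanically. A hypothetical helper (not an oVirt tool; host names and the gluster_ready name are invented) that parses a hosted-engine.conf fragment and checks for the 4.1-style glusterfs shape:

```python
# Hypothetical checker for the three hosted-engine.conf keys discussed
# above: domainType, storage, mnt_options.

def parse_conf(text):
    """Parse simple key=value lines into a dict, skipping blanks/comments."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")  # split on the FIRST '=' only
        conf[key.strip()] = value.strip()
    return conf

def gluster_ready(conf):
    """True if the conf matches the 4.1-style glusterfs setup."""
    return (conf.get("domainType") == "glusterfs"
            and ":" in conf.get("storage", "")
            and conf.get("mnt_options", "").startswith("backup-volfile-servers="))

sample = """\
domainType=glusterfs
storage=host1.example.com:/engine
mnt_options=backup-volfile-servers=host2.example.com:host3.example.com
"""
print(gluster_ready(parse_conf(sample)))   # True

legacy = "domainType=nfs3\nstorage=localhost:/hosted-engine\n"
print(gluster_ready(parse_conf(legacy)))   # False
```

Note that mnt_options itself contains an '=', which is why the parser must split only on the first one.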
</div>I see two options here:
1. Easy, but with substantial downtime: shut down your whole DC, start from scratch with gdeploy from 4.1 to configure a new gluster volume and a new engine over there; once you have the new engine, import your existing storage domains and restart your VMs.</div><div class="gmail_quote">2. A lot trickier: try to reach 4.1 status by manually editing /etc/ovirt-hosted-engine/hosted-engine.conf and so on and upgrading everything to 4.1; this could be pretty risky because you are on a path we never tested, since hyperconverged hosted-engine wasn't released at 3.5.</div></div></div></blockquote>I understand; that's definitely bad news for me. But I'm currently running oVirt 3.6.7, which, if I remember correctly, supports hyperconverged setups. Isn't it possible to fix this issue with my current version?
I've installed vdsm 4.17.32-1.el7 with the vdsm-gluster package and ovirt-hosted-engine-ha 1.3.5.7-1.el7.centos on CentOS 7.2.1511, and my engine is already configured to have both virt and gluster on the same cluster.
Couldn't I put the cluster into maintenance, stop the hosted engine, stop ovirt-hosted-engine-ha, edit hosted-engine.conf to change domainType, storage and mnt_options, and then restart ovirt-hosted-engine-ha and the hosted engine?
<blockquote cite="mid:CAN8-ONr2sC7nKPnDKh1j+rGRQ+kmKfODS6AJF9Wk3kYa8ujOcA@mail.gmail.com" type="cite"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"> <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="gmail-m_-4888476689598940363gmail-"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="gmail-m_-4888476689598940363gmail-m_7393503524263847303gmail-HOEnZb"><div class="gmail-m_-4888476689598940363gmail-m_7393503524263847303gmail-h5">
&gt; I'd first try to solve this.

&gt;

&gt; What OS do you have on your hosts? Are they all upgraded to 3.6?

&gt;

&gt; See also:

&gt;

&gt; <a moz-do-not-send="true" href="https://www.ovirt.org/documentation/how-to/hosted-engine-host-OS-upgrade/" rel="noreferrer" target="_blank">https://www.ovirt.org/documentation/how-to/hosted-engine-host-OS-upgrade/</a>

&gt;

&gt; Best,

&gt;

&gt;&gt;

&gt;&gt; Greetings,

&gt;&gt;     Paolo

&gt;&gt;

&gt;&gt; ______________________________<wbr>_________________

&gt;&gt; Users mailing list

&gt;&gt; <a moz-do-not-send="true" href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>

&gt;&gt; <a moz-do-not-send="true" href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a>

&gt;

&gt;

Greetings,

    Paolo

</div></div></blockquote></span></div>
</div></div></blockquote></div></div></div></blockquote>Greetings,
    Paolo
</body></html>