Roy, would I do that in the Cluster tab, with the New button, and then
select the Hosted Engine sidebar in the host configurator? I noticed this
option existed, but the RHEV docs and developer blogs I've been referencing
specify the 'hosted-engine --deploy' method.
On 4.0 this is the supported way of doing that. It fixes and prevents many
troubles, and more than that, it is simply done in one place.
The docs will be updated for that if not already. Also please refer to this
oVirt blog post [1].
[1]
On Sun, Jul 31, 2016 at 3:04 AM Roy Golan <rgolan(a)redhat.com>
wrote:
> On 30 July 2016 at 02:48, Kenneth Bingham <w(a)qrk.us> wrote:
>
>> Aw crap. I did exactly the same thing, and this could explain a lot of
>> the issues I've been pulling my beard out over. Every time I ran
>> 'hosted-engine --deploy' on the RHEV-M|NODE host I entered the FQDN of
>> *that* host, not the first host, as the origin of the GlusterFS volume,
>> because at the time I didn't realize that
>> a. the manager would key deduplication on the URI of the volume
>> b. the volume would be mounted via FUSE, not NFS, and therefore no
>> single point of entry is created by using the FQDN of the first host,
>> because the GlusterFS client will persist connections with all peers
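>> (To illustrate that last point, with hypothetical hostnames and mount
>> point, the kind of FUSE mount I mean is roughly
>>
>>   mount -t glusterfs \
>>     -o backup-volfile-servers=host2.example.com:host3.example.com \
>>     host1.example.com:/hosted_engine /mnt/hosted_engine
>>
>> where the host named in the volume URI only matters for fetching the
>> volfile at mount time; after that the client talks to all bricks.)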
>>
>>
> If you ever want to add a hosted-engine host to your setup, please do
> that from the UI and not from the CLI. That will prevent all this confusion.
>
>
>
>> On Fri, Jul 29, 2016 at 6:08 AM Simone Tiraboschi <stirabos(a)redhat.com>
>> wrote:
>>
>>> On Fri, Jul 29, 2016 at 11:35 AM, Wee Sritippho <wee.s(a)forest.go.th>
>>> wrote:
>>> > On 29/7/2559 15:50, Simone Tiraboschi wrote:
>>> >>
>>> >> On Fri, Jul 29, 2016 at 6:31 AM, Wee Sritippho <wee.s(a)forest.go.th>
>>> >> wrote:
>>> >>>
>>> >>> On 28/7/2559 15:54, Simone Tiraboschi wrote:
>>> >>>
>>> >>> On Thu, Jul 28, 2016 at 10:41 AM, Wee Sritippho <wee.s(a)forest.go.th>
>>> >>> wrote:
>>> >>>>
>>> >>>> On 21/7/2559 16:53, Simone Tiraboschi wrote:
>>> >>>>
>>> >>>> On Thu, Jul 21, 2016 at 11:43 AM, Wee Sritippho <wee.s(a)forest.go.th>
>>> >>>> wrote:
>>> >>>>
>>> >>>>> Can I just follow
>>> >>>>> http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-e...
>>> >>>>> until step 3 and do everything else via GUI?
>>> >>>>
>>> >>>> Yes, absolutely.
>>> >>>>
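>>> >>>> (For the archives, one example of the CLI part of that procedure:
>>> >>>> setting maintenance mode, e.g.
>>> >>>>
>>> >>>>   hosted-engine --set-maintenance --mode=global
>>> >>>>
>>> >>>> and --mode=none again afterwards; the remaining steps can then be
>>> >>>> done via the GUI as discussed above.)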
>>> >>>>
>>> >>>> Hi, I upgraded a host (host02) via the GUI and now its score is 0.
>>> >>>> I restarted the services but the result is still the same. Kinda
>>> >>>> lost now. What should I do next?
>>> >>>>
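>>> >>>> (By "restarted the services" I mean roughly the following,
>>> >>>> assuming the usual service names on the host:
>>> >>>>
>>> >>>>   systemctl restart ovirt-ha-broker ovirt-ha-agent
>>> >>>>   hosted-engine --vm-status
>>> >>>>
>>> >>>> and --vm-status still reports score 0 for host02.)
>>> >>>>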
>>> >>> Can you please attach ovirt-ha-agent logs?
>>> >>>
>>> >>>
>>> >>> Yes, here are the logs:
>>> >>> https://app.box.com/s/b4urjty8dsuj98n3ywygpk3oh5o7pbsh
>>> >>
>>> >> Thanks Wee,
>>> >> your issue is here:
>>> >>
>>> >> MainThread::ERROR::2016-07-17 14:32:45,586::storage_server::143::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(_validate_pre_connected_path)
>>> >> The hosted-engine storage domain is already mounted on
>>> >> '/rhev/data-center/mnt/glusterSD/host02.ovirt.forest.go.th:_hosted__engine/639e689c-8493-479b-a6eb-cc92b6fc4cf4'
>>> >> with a path that is not supported anymore: the right path should be
>>> >> '/rhev/data-center/mnt/glusterSD/host01.ovirt.forest.go.th:_hosted__engine/639e689c-8493-479b-a6eb-cc92b6fc4cf4'.
>>> >>
>>> >> Did you manually try to work around the issue of a single entry point
>>> >> for the glusterFS volume by using host01.ovirt.forest.go.th:_hosted__engine
>>> >> and host02.ovirt.forest.go.th:_hosted__engine there?
>>> >> This can cause a lot of confusion since the code cannot detect
>>> >> that the storage domain is the same, and you can end up with it mounted
>>> >> twice in different locations, and a lot of issues.
>>> >> The correct solution for that issue was this one:
>>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1298693#c20
>>> >>
>>> >> Now, to get it fixed on your env you have to hack a bit.
>>> >> First step: edit /etc/ovirt-hosted-engine/hosted-engine.conf on all
>>> >> your hosted-engine hosts to ensure that the storage field always
>>> >> points to the same entry point (host01, for instance).
>>> >> Then on each host you can add something like:
>>> >>
>>> >> mnt_options=backupvolfile-server=host02.ovirt.forest.go.th:host03.ovirt.forest.go.th,fetch-attempts=2,log-level=WARNING,log-file=/var/log/engine_domain.log
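>>> >>
>>> >> For illustration only (exact hostnames depend on your setup), the
>>> >> relevant lines of /etc/ovirt-hosted-engine/hosted-engine.conf on
>>> >> every host would then look roughly like:
>>> >>
>>> >>   storage=host01.ovirt.forest.go.th:/hosted_engine
>>> >>   mnt_options=backupvolfile-server=host02.ovirt.forest.go.th:host03.ovirt.forest.go.th,fetch-attempts=2,log-level=WARNING,log-file=/var/log/engine_domain.log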
>>> >>
>>> >> Then check the representation of your storage connection in the
>>> >> storage_server_connections table of the engine DB and make sure that
>>> >> connection refers to the entry point you used in hosted-engine.conf
>>> >> on all your hosts; lastly, you have to set the value of mount_options
>>> >> here as well.
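>>> >>
>>> >> Just as a sketch of that last DB step (not an exact recipe, and
>>> >> take an engine DB backup first), the update would be along the
>>> >> lines of:
>>> >>
>>> >>   UPDATE storage_server_connections
>>> >>      SET connection = 'host01.ovirt.forest.go.th:/hosted_engine',
>>> >>          mount_options = 'backupvolfile-server=host02.ovirt.forest.go.th:host03.ovirt.forest.go.th,fetch-attempts=2,log-level=WARNING,log-file=/var/log/engine_domain.log'
>>> >>    WHERE vfs_type = 'glusterfs';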
>>> >
>>> > Weird. The configuration on all hosts is already referring to host01.
>>>
>>> But for sure you have a connection pointing to host02 somewhere; did
>>> you try to manually deploy from the CLI, connecting the gluster volume
>>> on host02?
>>>
>>> > Also, in the storage_server_connections table:
>>> >
>>> > engine=> SELECT * FROM storage_server_connections;
>>> >   id            | bd78d299-c8ff-4251-8aab-432ce6443ae8
>>> >   connection    | host01.ovirt.forest.go.th:/hosted_engine
>>> >   user_name     |
>>> >   password      |
>>> >   iqn           |
>>> >   port          |
>>> >   portal        | 1
>>> >   storage_type  | 7
>>> >   mount_options |
>>> >   vfs_type      | glusterfs
>>> >   nfs_version   |
>>> >   nfs_timeo     |
>>> >   nfs_retrans   |
>>> > (1 row)
>>> >
>>> >
>>> >>
>>> >> Please also tune the value of network.ping-timeout for your glusterFS
>>> >> volume to avoid this:
>>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1319657#c17
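>>> >>
>>> >> That is a single command on any gluster node, for example (the
>>> >> volume name and the value here are only placeholders, see the BZ
>>> >> comment for what actually fits your setup):
>>> >>
>>> >>   gluster volume set hosted_engine network.ping-timeout 10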
>>> >
>>> >
>>> > --
>>> > Wee
>>> >
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>