On 29/7/2559 17:07, Simone Tiraboschi wrote:
>
> On Fri, Jul 29, 2016 at 11:35 AM, Wee Sritippho <wee.s(a)forest.go.th>
> wrote:
>>
>> On 29/7/2559 15:50, Simone Tiraboschi wrote:
>>>
>>> On Fri, Jul 29, 2016 at 6:31 AM, Wee Sritippho <wee.s(a)forest.go.th>
>>> wrote:
>>>>
>>>> On 28/7/2559 15:54, Simone Tiraboschi wrote:
>>>>
>>>> On Thu, Jul 28, 2016 at 10:41 AM, Wee Sritippho <wee.s(a)forest.go.th>
>>>> wrote:
>>>>>
>>>>> On 21/7/2559 16:53, Simone Tiraboschi wrote:
>>>>>
>>>>> On Thu, Jul 21, 2016 at 11:43 AM, Wee Sritippho <wee.s(a)forest.go.th>
>>>>> wrote:
>>>>>
>>>>>> Can I just follow
>>>>>> http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-e...
>>>>>> until step 3 and do everything else via GUI?
>>>>>
>>>>> Yes, absolutely.
>>>>>
>>>>>
>>>>> Hi, I upgraded a host (host02) via the GUI and now its score is 0.
>>>>> I restarted the services but the result is still the same. I'm kinda
>>>>> lost now. What should I do next?
>>>>>
>>>> Can you please attach ovirt-ha-agent logs?
>>>>
>>>>
>>>> Yes, here are the logs:
>>>>
>>>> https://app.box.com/s/b4urjty8dsuj98n3ywygpk3oh5o7pbsh
>>>
>>> Thanks Wee,
>>> your issue is here:
>>> MainThread::ERROR::2016-07-17 14:32:45,586::storage_server::143::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(_validate_pre_connected_path)
>>> The hosted-engine storage domain is already mounted on
>>> '/rhev/data-center/mnt/glusterSD/host02.ovirt.forest.go.th:_hosted__engine/639e689c-8493-479b-a6eb-cc92b6fc4cf4'
>>> with a path that is not supported anymore: the right path should be
>>> '/rhev/data-center/mnt/glusterSD/host01.ovirt.forest.go.th:_hosted__engine/639e689c-8493-479b-a6eb-cc92b6fc4cf4'.
>>>
>>> Did you manually try to work around the issue of a single entry point
>>> for the glusterFS volume by using host01.ovirt.forest.go.th:_hosted__engine
>>> and host02.ovirt.forest.go.th:_hosted__engine there?
>>> This can cause a lot of confusion since the code cannot detect that
>>> the storage domain is the same, and you can end up with it mounted
>>> twice in different locations and a lot of issues.
>>> The correct solution for that issue was this one:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1298693#c20
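
(Side note: a quick way to confirm the double-mount situation described
above is to look at the glusterfs mounts on each hosted-engine host; a
minimal sketch, assuming the paths from the log excerpt:)

  # list the glusterfs FUSE mounts; a hosted_engine mount via host02 next
  # to one via host01 would confirm the duplicate-entry-point problem
  mount -t fuse.glusterfs | grep hosted__engine
  # entry point currently configured for the local hosted-engine agent
  grep '^storage' /etc/ovirt-hosted-engine/hosted-engine.conf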
>>>
>>> Now, to have it fixed on your env you have to hack a bit.
>>> First step: edit /etc/ovirt-hosted-engine/hosted-engine.conf on all
>>> your hosted-engine hosts to ensure that the storage field always
>>> points to the same entry point (host01, for instance).
>>> Then on each host you can add something like:
>>>
>>> mnt_options=backupvolfile-server=host02.ovirt.forest.go.th:host03.ovirt.forest.go.th,fetch-attempts=2,log-level=WARNING,log-file=/var/log/engine_domain.log
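
(For clarity, a minimal sketch of the relevant part of
/etc/ovirt-hosted-engine/hosted-engine.conf after this change, assuming
host01 as the single entry point and reusing the option string above
as-is:)

  # /etc/ovirt-hosted-engine/hosted-engine.conf (relevant keys only)
  # same gluster entry point on every hosted-engine host
  storage=host01.ovirt.forest.go.th:/hosted_engine
  # fall back to the other gluster servers if host01 is unreachable
  mnt_options=backupvolfile-server=host02.ovirt.forest.go.th:host03.ovirt.forest.go.th,fetch-attempts=2,log-level=WARNING,log-file=/var/log/engine_domain.log

Restarting ovirt-ha-broker and ovirt-ha-agent (or rebooting the host, as
suggested further down) should then make the agent pick up the new values.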
>>>
>>> Then check the representation of your storage connection in the
>>> storage_server_connections table of the engine DB and make sure that
>>> the connection refers to the entry point you used in hosted-engine.conf
>>> on all your hosts; lastly, set the value of mount_options there as well.
>>
>> Weird. The configuration on all hosts is already referring to host01.
>
> But for sure you have a connection pointing to host02 somewhere. Did
> you try to manually deploy from the CLI, connecting the gluster volume
> on host02?
If I recall correctly, yes.
Ok, so please reboot your host before trying again to make sure that
every reference gets cleaned.
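
(After the reboot, a rough way to double-check that the stale host02
reference is gone, assuming host01 stays the single entry point:)

  # the agent configuration should point only to host01
  grep '^storage' /etc/ovirt-hosted-engine/hosted-engine.conf
  # there should be a single hosted_engine mount, going through host01
  mount -t fuse.glusterfs | grep hosted__engine
  # the host score should recover from 0 once the agent settles
  hosted-engine --vm-status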
>> Also, in the storage_server_connections table:
>>
>> engine=> SELECT * FROM storage_server_connections;
>>
>>  id            | bd78d299-c8ff-4251-8aab-432ce6443ae8
>>  connection    | host01.ovirt.forest.go.th:/hosted_engine
>>  user_name     |
>>  password      |
>>  iqn           |
>>  port          |
>>  portal        | 1
>>  storage_type  | 7
>>  mount_options |
>>  vfs_type      | glusterfs
>>  nfs_version   |
>>  nfs_timeo     |
>>  nfs_retrans   |
>>
>> (1 row)
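
(Given that row, the DB-side change Simone describes would look roughly
like the statement below; just a sketch of the idea, to be run only
against a backed-up engine DB with ovirt-engine stopped, and the
mount_options string simply mirrors the one suggested above:)

  -- set the mount options on the existing hosted-engine storage connection
  UPDATE storage_server_connections
     SET mount_options = 'backupvolfile-server=host02.ovirt.forest.go.th:host03.ovirt.forest.go.th,fetch-attempts=2,log-level=WARNING,log-file=/var/log/engine_domain.log'
   WHERE id = 'bd78d299-c8ff-4251-8aab-432ce6443ae8';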
>>
>>
>>> Please also tune the value of network.ping-timeout for your glusterFS
>>> volume to avoid this:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1319657#c17
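
(For reference, that option can be tuned from any gluster node; a minimal
sketch, where the volume name hosted_engine comes from the paths above
and the value 10 is only a placeholder, use whatever the bugzilla comment
recommends:)

  # lower the ping timeout on the hosted-engine volume (placeholder value)
  gluster volume set hosted_engine network.ping-timeout 10
  # check the reconfigured options on the volume
  gluster volume info hosted_engine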
>>
>>
>> --
>> Wee
>>
--
Wee