[ovirt-users] Self hosted engine fails after 4.2 upgrade

Jiffin Tony Thottan jthottan at redhat.com
Fri Dec 22 11:46:00 UTC 2017


On Friday 22 December 2017 03:08 PM, Sahina Bose wrote:
>
>
> On Fri, Dec 22, 2017 at 2:45 PM, Sandro Bonazzola <sbonazzo at redhat.com> wrote:
>
>
>
>     2017-12-21 17:01 GMT+01:00 Stefano Danzi <s.danzi at hawai.it>:
>
>
>
>         On 21/12/2017 16:37, Sandro Bonazzola wrote:
>>
>>
>>         2017-12-21 14:26 GMT+01:00 Stefano Danzi <s.danzi at hawai.it>:
>>
>>             Solved by installing the glusterfs-gnfs package.
>>             Anyway, it would be nice to move the hosted engine to gluster....
>>
>>
>>         Adding some gluster folks. Are we missing a dependency somewhere?
>>         During the upgrade, NFS on gluster stopped working here, and
>>         adding the missing dependency solved it.
>>         Stefano, please confirm: you were on gluster 3.8 (oVirt 4.1)
>>         and now you are on gluster 3.12 (oVirt 4.2).
>>
>         Sandro, I confirm the versions.
>         The host is running CentOS 7.4.1708.
>         Before the upgrade it was gluster 3.8 with oVirt 4.1;
>         now it is gluster 3.12 with oVirt 4.2.
>
>
>     Thanks Stefano, I alerted the glusterfs team; they'll have a look.
>
>
>
> [Adding Jiffin to take a look and confirm]
>
> I think this has to do with the separation of the NFS components in gluster
> 3.12 (see https://bugzilla.redhat.com/show_bug.cgi?id=1326219). The
> recommended NFS solution with gluster is nfs-ganesha, and hence gluster NFS
> is no longer installed by default.
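>
> A quick way to confirm this on an affected host (a minimal sketch; the
> package name follows what Stefano reported, and the output will vary):
>
>   # is the separated gluster NFS (gnfs) package installed?
>   rpm -q glusterfs-gnfs
>
>   # is a gluster NFS server process running for the volume?
>   gluster volume status engine nfs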
>
Hi,

For gluster NFS you need to install the glusterfs-gnfs package. As Sahina
said, it is a change from 3.12 onwards, I guess.
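
A minimal sketch of the fix, assuming a CentOS 7 host with yum and the
volume name used in this thread ("engine"); exact package versions may differ:

    # install the gluster NFS (gnfs) component, packaged separately since 3.12
    yum install glusterfs-gnfs

    # make sure gluster NFS is enabled on the volume
    gluster volume set engine nfs.disable off

    # restart glusterd so it starts the NFS server for the volume again
    systemctl restart glusterd

    # verify the export is visible again (showmount comes from nfs-utils)
    showmount -e localhost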

Regards,
Jiffin

>
>
>>
>>             On 21/12/2017 11:37, Stefano Danzi wrote:
>>>
>>>
>>>             On 21/12/2017 11:30, Simone Tiraboschi wrote:
>>>>
>>>>
>>>>             On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi
>>>>             <s.danzi at hawai.it> wrote:
>>>>
>>>>                 Hello!
>>>>                 I have a test system with one physical host and the
>>>>                 hosted engine running on it.
>>>>                 Storage is gluster, but the hosted engine mounts it as NFS.
>>>>
>>>>                 After the upgrade, gluster no longer activates NFS.
>>>>                 The command "gluster volume set engine nfs.disable
>>>>                 off" doesn't help.
>>>>
>>>>                 How can I re-enable NFS? Or better, how can I migrate the
>>>>                 self-hosted engine to native glusterfs?
>>>>
>>>>
>>>>
>>>>             Ciao Stefano,
>>>>             could you please attach the output of
>>>>               gluster volume info engine
>>>>
>>>>             adding Kasturi here
>>>
>>>             [root at ovirt01 ~]# gluster volume info engine
>>>
>>>             Volume Name: engine
>>>             Type: Distribute
>>>             Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
>>>             Status: Started
>>>             Snapshot Count: 0
>>>             Number of Bricks: 1
>>>             Transport-type: tcp
>>>             Bricks:
>>>             Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
>>>             Options Reconfigured:
>>>             server.event-threads: 4
>>>             client.event-threads: 4
>>>             network.ping-timeout: 30
>>>             server.allow-insecure: on
>>>             storage.owner-gid: 36
>>>             storage.owner-uid: 36
>>>             cluster.server-quorum-type: server
>>>             cluster.quorum-type: auto
>>>             network.remote-dio: enable
>>>             cluster.eager-lock: enable
>>>             performance.stat-prefetch: off
>>>             performance.io-cache: off
>>>             performance.read-ahead: off
>>>             performance.quick-read: off
>>>             nfs.disable: off
>>>             performance.low-prio-threads: 32
>>>             cluster.data-self-heal-algorithm: full
>>>             cluster.locking-scheme: granular
>>>             cluster.shd-max-threads: 8
>>>             cluster.shd-wait-qlength: 10000
>>>             features.shard: on
>>>             user.cifs: off
>>>             features.shard-block-size: 512MB
>>>
>>>
>>>
>>
>>
>>
>>
>>         -- 
>>
>>         SANDRO BONAZZOLA
>>
>>         ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG
>>         VIRTUALIZATION R&D
>>
>>         Red Hat EMEA <https://www.redhat.com/>
>>
>>
>>
>
>
>
>
>     -- 
>
>     SANDRO BONAZZOLA
>
>     ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
>     Red Hat EMEA <https://www.redhat.com/>
>
>
>
>
