On Fri, Dec 22, 2017 at 2:45 PM, Sandro Bonazzola <sbonazzo(a)redhat.com>
wrote:
2017-12-21 17:01 GMT+01:00 Stefano Danzi <s.danzi(a)hawai.it>:
>
>
> Il 21/12/2017 16:37, Sandro Bonazzola ha scritto:
>
>
>
> 2017-12-21 14:26 GMT+01:00 Stefano Danzi <s.danzi(a)hawai.it>:
>
>> Solved by installing the glusterfs-gnfs package.
>> Anyway, it would be nice to move the hosted engine to gluster...
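(For reference, a minimal sketch of the fix described above, assuming CentOS 7 with the gluster 3.12 packages from the CentOS Storage SIG; the exact repository and package layout is an assumption and may differ elsewhere:)

    # on each gluster host: install the gluster NFS (gNFS) server,
    # which is packaged separately starting with gluster 3.12
    yum install glusterfs-gnfs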
>>
>>
> Adding some gluster folks. Are we missing a dependency somewhere?
> During the upgrade, NFS on gluster stopped working here, and adding the
> missing dependency solved it.
> Stefano, please confirm: you were on gluster 3.8 (oVirt 4.1) and now you
> are on gluster 3.12 (oVirt 4.2).
>
> Sandro, I confirm the versions.
> The host is running CentOS 7.4.1708.
> Before the upgrade there was gluster 3.8 in oVirt 4.1;
> now I have gluster 3.12 in oVirt 4.2.
>
>
Thanks Stefano, I alerted the glusterfs team; they'll have a look.
[Adding Jiffin to take a look and confirm]
I think this has to do with the separation of the NFS components in gluster 3.12
(see https://bugzilla.redhat.com/show_bug.cgi?id=1326219). The recommended
NFS solution with gluster is nfs-ganesha, and hence gluster NFS is no
longer installed by default.
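A minimal sketch of what that means in practice for the "engine" volume, assuming glusterfs-gnfs has already been installed as above (restarting glusterd is normally non-disruptive to running bricks, but on a hosted-engine host it is safer to do it during a quiet window):

    # re-enable the legacy gluster NFS (gNFS) server on the volume
    gluster volume set engine nfs.disable off

    # glusterd usually starts the gluster/nfs process when the option changes;
    # if it does not, restarting the management daemon should pick it up
    systemctl restart glusterd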
>
>
>>
>> Il 21/12/2017 11:37, Stefano Danzi ha scritto:
>>
>>
>>
>> Il 21/12/2017 11:30, Simone Tiraboschi ha scritto:
>>
>>
>>
>> On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi <s.danzi(a)hawai.it>
>> wrote:
>>
>>> Hello!
>>> I have a test system with one physical host and the hosted engine running
>>> on it.
>>> Storage is gluster, but the hosted engine mounts it as NFS.
>>>
>>> After the upgrade, gluster no longer activates NFS.
>>> The command "gluster volume set engine nfs.disable off" doesn't help.
>>>
>>> How can I re-enable NFS? Or better, how can I migrate the self-hosted engine
>>> to native glusterfs?
>>>
>>
>>
>> Hi Stefano,
>> could you please attach the output of
>> gluster volume info engine
>>
>> Adding Kasturi here.
>>
>>
>> [root@ovirt01 ~]# gluster volume info engine
>>
>> Volume Name: engine
>> Type: Distribute
>> Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
>> Options Reconfigured:
>> server.event-threads: 4
>> client.event-threads: 4
>> network.ping-timeout: 30
>> server.allow-insecure: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> network.remote-dio: enable
>> cluster.eager-lock: enable
>> performance.stat-prefetch: off
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> nfs.disable: off
>> performance.low-prio-threads: 32
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 10000
>> features.shard: on
>> user.cifs: off
>> features.shard-block-size: 512MB
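Note that nfs.disable is already "off" in the output above, so the next step is to check whether the gluster NFS server process is actually running and exporting the volume. A quick sketch, using only stock gluster and nfs-utils commands (run on the host; the volume name "engine" comes from this thread):

    # NFS server status for the volume (available once glusterfs-gnfs is installed)
    gluster volume status engine nfs

    # exports as seen by NFS clients
    showmount -e localhost

    # gluster NFS log, useful if the process fails to start
    less /var/log/glusterfs/nfs.log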
>>
>>
>>
>>
>
>
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>