[ovirt-users] Self hosted engine fails after 4.2 upgrade
Stefano Danzi
s.danzi at hawai.it
Thu Dec 21 10:37:12 UTC 2017
On 21/12/2017 11:30, Simone Tiraboschi wrote:
>
>
> On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi <s.danzi at hawai.it> wrote:
>
> Hello!
> I have a test system with one physical host and the hosted engine
> running on it.
> Storage is Gluster, but the hosted engine mounts it as NFS.
>
> After the upgrade, Gluster no longer activates NFS.
> The command "gluster volume set engine nfs.disable off" doesn't help.
>
> How can I re-enable NFS? Or better, how can I migrate the self hosted
> engine to native GlusterFS?
>
>
>
> Ciao Stefano,
> could you please attach the output of
> gluster volume info engine
>
> adding Kasturi here
[root@ovirt01 ~]# gluster volume info engine
Volume Name: engine
Type: Distribute
Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
nfs.disable: off
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
features.shard-block-size: 512MB
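
Since nfs.disable is already off, the next thing to check is whether glusterd actually starts its built-in NFS server at all. A rough sketch of the checks I can run on the host (assuming a standard CentOS 7 install; exact log file names can differ between Gluster versions):

# does the volume status list an "NFS Server on localhost" entry?
gluster volume status engine
# is anything exported over NFS from this host?
showmount -e localhost
# is an NFS service registered with rpcbind at all?
rpcinfo -p localhost | grep -i nfs
# anything about the NFS server in the glusterd log?
grep -i nfs /var/log/glusterfs/glusterd.log | tail -n 20

One possible cause, though I have not verified it here: with the Gluster 3.12 packages that oVirt 4.2 pulls in on CentOS 7, the built-in gNFS server was reportedly split out into a separate glusterfs-gnfs package, so "nfs.disable: off" has no effect until that package is installed:

# hypothetical fix, only if the package really turns out to be missing
rpm -q glusterfs-gnfs || yum install glusterfs-gnfs
systemctl restart glusterd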
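
On the second part of the question (moving the hosted engine from NFS to native GlusterFS): as a first step it may help to look at how the hosted engine storage is configured right now. A minimal sketch, assuming the usual configuration file location; key names may vary between releases:

# current hosted-engine storage settings (domain type, storage path, mount options)
grep -E '^(domainType|storage|mnt_options)=' /etc/ovirt-hosted-engine/hosted-engine.conf
# overall state of the hosted engine and its HA agent
hosted-engine --vm-status

As far as I understand, changing these values by hand is not a migration: the storage domain type is also recorded on the engine side, so a real move to glusterfs would normally go through a hosted-engine backup and redeploy rather than an edit of this file.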