Self-hosted engine fails after 4.2 upgrade

Hello! I have a test system with one physical host and the hosted engine running on it. Storage is gluster, but the hosted engine mounts it as NFS. After the upgrade, gluster no longer activates NFS. The command "gluster volume set engine nfs.disable off" doesn't help. How can I re-enable NFS? Or better, how can I migrate the self-hosted engine to native GlusterFS?
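
A quick way to check whether the gluster NFS (gNFS) server is even running for the volume, as a rough sketch -- "engine" is the volume name used in this setup, and the commands assume the gluster CLI and nfs-utils are installed on the host:

gluster volume get engine nfs.disable    # should report "off" if gNFS is meant to export this volume
gluster volume status engine nfs         # shows whether a gluster NFS server process is actually online
ps aux | grep '[g]lusterfs.*nfs'         # the gNFS process glusterd starts when NFS is enabled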

On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hello! I have a test system with one physical host and the hosted engine running on it. Storage is gluster, but the hosted engine mounts it as NFS.
After the upgrade, gluster no longer activates NFS. The command "gluster volume set engine nfs.disable off" doesn't help.
How can I re-enable NFS? Or better, how can I migrate the self-hosted engine to native GlusterFS?
Ciao Stefano, could you please attach the output of gluster volume info engine?
Adding Kasturi here.

On 21/12/2017 11:30, Simone Tiraboschi wrote:
On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hello! I have a test system with one physical host and the hosted engine running on it. Storage is gluster, but the hosted engine mounts it as NFS.
After the upgrade, gluster no longer activates NFS. The command "gluster volume set engine nfs.disable off" doesn't help.
How can I re-enable NFS? Or better, how can I migrate the self-hosted engine to native GlusterFS?
Ciao Stefano, could you please attach the output of gluster volume info engine?
Adding Kasturi here.
[root@ovirt01 ~]# gluster volume info engine

Volume Name: engine
Type: Distribute
Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
nfs.disable: off
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
features.shard-block-size: 512MB

Solved by installing the glusterfs-gnfs package. Anyway, it would be nice to move the hosted engine to gluster...
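
For anyone hitting the same problem, the fix boils down to something like the following -- a sketch, assuming the glusterfs-gnfs package is available in the repositories enabled for your gluster 3.12 release:

yum install -y glusterfs-gnfs              # the gNFS server was split into its own package in gluster 3.12
systemctl restart glusterd                 # let glusterd start the NFS server again (a volume stop/start may also be needed)
gluster volume set engine nfs.disable off  # make sure NFS is still enabled on the volume
showmount -e localhost                     # the engine volume should now be listed as an NFS export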

2017-12-21 14:26 GMT+01:00 Stefano Danzi <s.danzi@hawai.it>:
Solved by installing the glusterfs-gnfs package. Anyway, it would be nice to move the hosted engine to gluster...
Adding some gluster folks. Are we missing a dependency somewhere? During the upgrade, NFS on gluster stopped working here and adding the missing dependency solved it. Stefano, please confirm: you were on gluster 3.8 (oVirt 4.1) and now you are on gluster 3.12 (oVirt 4.2)?
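
A possible way to check what the upgraded packages still pull in -- a sketch, assuming yum-utils is installed on the host:

rpm -qa 'glusterfs*' | sort                    # which gluster packages actually ended up installed after the upgrade
yum deplist glusterfs-server | grep -i gnfs    # does glusterfs-server still depend on the gNFS bits?
repoquery --whatrequires glusterfs-gnfs        # what, if anything, would pull glusterfs-gnfs in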

On 21/12/2017 16:37, Sandro Bonazzola wrote:
Adding some gluster folks. Are we missing a dependency somewhere? During the upgrade, NFS on gluster stopped working here and adding the missing dependency solved it. Stefano, please confirm: you were on gluster 3.8 (oVirt 4.1) and now you are on gluster 3.12 (oVirt 4.2)?
Sandro, I confirm the versions. The host is running CentOS 7.4.1708; before the upgrade there was gluster 3.8 in oVirt 4.1, now I have gluster 3.12 in oVirt 4.2.
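
For reference, the versions can be confirmed with something like this -- a sketch; the ovirt-release package name pattern is an assumption:

cat /etc/centos-release        # e.g. CentOS Linux release 7.4.1708 (Core)
rpm -q glusterfs-server        # 3.8.x before the upgrade, 3.12.x after
rpm -qa 'ovirt-release*'       # which oVirt release package is installed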

2017-12-21 17:01 GMT+01:00 Stefano Danzi <s.danzi@hawai.it>:
Sandro, I confirm the versions. The host is running CentOS 7.4.1708; before the upgrade there was gluster 3.8 in oVirt 4.1, now I have gluster 3.12 in oVirt 4.2.
Thanks Stefano, I alerted the glusterfs team, they'll have a look.

On Fri, Dec 22, 2017 at 2:45 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Thanks Stefano, I alerted the glusterfs team, they'll have a look.
[Adding Jiffin to take a look and confirm] I think this has to do with the separation of the NFS components in gluster 3.12 (see https://bugzilla.redhat.com/show_bug.cgi?id=1326219). The recommended NFS solution with gluster is nfs-ganesha, and hence gluster NFS is no longer installed by default.
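
To illustrate the split on a 3.12 host -- a sketch; nfs-ganesha-gluster as the name of the Ganesha FSAL package on CentOS 7 is an assumption:

rpm -q glusterfs-server glusterfs-gnfs                               # glusterfs-gnfs is no longer installed by default on 3.12
yum list available glusterfs-gnfs nfs-ganesha nfs-ganesha-gluster    # gNFS and NFS-Ganesha now ship as separate packages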

On Friday 22 December 2017 03:08 PM, Sahina Bose wrote:
I think this has to do with the separation of the NFS components in gluster 3.12 (see https://bugzilla.redhat.com/show_bug.cgi?id=1326219). The recommended NFS solution with gluster is nfs-ganesha, and hence gluster NFS is no longer installed by default.
Hi, for gluster NFS you need to install the glusterfs-gnfs package. As Sahina said, it is a change from 3.12 onwards, I guess. Regards, Jiffin
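
Once the package is in place and the NFS server is back up, the hosted-engine style mount can be checked by hand -- a sketch; the hostname and volume come from the thread, the mount point is arbitrary, and vers=3 is used because gluster NFS only speaks NFSv3:

mkdir -p /mnt/engine-test
mount -t nfs -o vers=3 ovirt01.hawai.lan:/engine /mnt/engine-test    # the same kind of export the hosted engine mounts
ls /mnt/engine-test                                                  # should show the hosted-engine storage domain contents
umount /mnt/engine-test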

2017-12-22 12:46 GMT+01:00 Jiffin Tony Thottan <jthottan@redhat.com>:
Hi,
For gluster NFS you need to install the glusterfs-gnfs package. As Sahina said, it is a change from 3.12 onwards, I guess.
Sahina, can you please ensure that if glusterfs with NFS support is selected in ovirt-engine, glusterfs-gnfs is installed when deploying the host?

2017-12-22 12:57 GMT+01:00 Sandro Bonazzola <sbonazzo@redhat.com>:
Sahina, can you please ensure that if glusterfs with NFS support is selected in ovirt-engine, glusterfs-gnfs is installed when deploying the host?
Tracking on https://bugzilla.redhat.com/show_bug.cgi?id=1528615
participants (5)
- Jiffin Tony Thottan
- Sahina Bose
- Sandro Bonazzola
- Simone Tiraboschi
- Stefano Danzi