
Solved by installing the glusterfs-gnfs package (a sketch of the fix follows the volume info below).
Anyway, it would be nice to move the hosted engine to native GlusterFS; a possible migration route is sketched at the end of the thread.

On 21/12/2017 11:37, Stefano Danzi wrote:
On 21/12/2017 11:30, Simone Tiraboschi wrote:
On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hello! I have a test system with one physical host and the hosted engine running on it. Storage is Gluster, but the hosted engine mounts it via NFS.
After the upgrade, Gluster no longer activates NFS. The command "gluster volume set engine nfs.disable off" doesn't help.
How can I re-enable NFS? Or better, how can I migrate the self-hosted engine to native GlusterFS?
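Before changing volume options it can help to confirm whether Gluster's built-in NFS server is running at all. A minimal check, assuming the volume is named "engine" as in this thread (showmount comes from the nfs-utils package):

[root@ovirt01 ~]# gluster volume status engine   # look for the "NFS Server on localhost" row and its Online flag
[root@ovirt01 ~]# showmount -e localhost         # lists exports only if an NFS server is actually up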
Ciao Stefano, could you please attach the output of "gluster volume info engine"?
adding Kasturi here
[root@ovirt01 ~]# gluster volume info engine
Volume Name: engine
Type: Distribute
Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
nfs.disable: off
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
features.shard-block-size: 512MB
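Note that nfs.disable is already "off" here, so the volume is configured to export NFS; what broke after the upgrade is the gNFS service itself, which newer GlusterFS releases ship in the separate glusterfs-gnfs package (the fix reported at the top of this thread). A minimal sketch of that fix, assuming a yum-based oVirt host:

[root@ovirt01 ~]# yum install glusterfs-gnfs     # provides the legacy built-in (gNFS) server
[root@ovirt01 ~]# systemctl restart glusterd     # glusterd respawns the per-volume NFS service
[root@ovirt01 ~]# gluster volume status engine   # "NFS Server on localhost" should now be Online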
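On the open question of moving the hosted engine to native GlusterFS: the usual route is to back up the engine and redeploy it onto a glusterfs storage domain. A rough sketch, assuming oVirt 4.2-era tooling (verify the exact flags against your version's documentation before relying on them):

# inside the engine VM: take an engine backup
engine-backup --mode=backup --file=engine.bak --log=engine-backup.log

# on the host: redeploy the hosted engine from the backup and choose
# glusterfs as the storage type when prompted
hosted-engine --deploy --restore-from-file=engine.bak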