<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sat, Aug 13, 2016 at 11:00 AM, David Gossage <span dir="ltr"><<a href="mailto:dgossage@carouselchecks.com" target="_blank">dgossage@carouselchecks.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sat, Aug 13, 2016 at 8:19 AM, Scott <span dir="ltr"><<a href="mailto:romracer@gmail.com" target="_blank">romracer@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it works for me too where 3.7.12/13 did not.<div><br></div><div>I did find that you should NOT turn off network.remote-dio or turn on performance.strict-o-direct as suggested earlier in the thread. They will prevent dd (using direct flag) and other things from working properly. I'd leave those at network.remote-dio=enabled and performance.strict-o-direc<wbr>t=off.</div></div></blockquote><div><br></div><div>Those were actually just suggested during a testing phase trying to trace down the issue. Neither of those 2 I think have ever been suggested as good practice. At least not for VM storage.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div>Hopefully we can see Gluster 3.7.14 moved out of testing repo soon.</div></div></blockquote></div></div></div></blockquote><div><br></div><div>Is it still in testing repo? I updated my production cluster I think 2 weeks ago from default repo on centos7.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div>Scott</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Aug 2, 2016 at 9:05 AM, David Gossage <span dir="ltr"><<a href="mailto:dgossage@carouselchecks.com" target="_blank">dgossage@carouselchecks.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">So far gluster 3.7.14 seems to have resolved issues at least on my test box. dd commands that failed previously now work with sharding on zfs backend,<div><br></div><div>Where before I couldn't even mount a new storage domain it now mounted and I have a test vm being created. </div><div><br></div><div>Still have to let VM run for a few days and make sure no locking freezing occurs but looks hopeful so far.</div></div><div class="gmail_extra"><span><br clear="all"><div><div data-smartmail="gmail_signature"><div dir="ltr"><span><font color="#888888"><span style="color:rgb(0,0,0)"><b><i>David Gossage</i></b></span><font><i><span style="color:rgb(51,51,51)"><b><br>
</b></span></i></font></font></span><div><span><font color="#888888"><font><i><span style="color:rgb(51,51,51)"></span></i><font size="1"><b style="color:rgb(153,0,0)">Carousel Checks Inc.<span style="color:rgb(204,204,204)"> | System Administrator</span></b></font></font><font style="color:rgb(153,153,153)"><font size="1"><br>
</font></font><font><font size="1"><span style="color:rgb(51,51,51)"><b style="color:rgb(153,153,153)">Office</b><span style="color:rgb(153,153,153)"> <a value="+17086132426">708.613.2284<font color="#888888"><font size="1"><br></font></font></a></span></span></font></font></font></span></div></div></div></div>
<br></span><div><div><div class="gmail_quote">On Tue, Jul 26, 2016 at 8:15 AM, David Gossage <span dir="ltr"><<a href="mailto:dgossage@carouselchecks.com" target="_blank">dgossage@carouselchecks.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div><div><div dir="ltr"><div>On Tue, Jul 26, 2016 at 4:37 AM, Krutika Dhananjay <span dir="ltr"><<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>></span> wrote:<br></div></div></div></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div>Hi,<br><br></div>1. Could you please attach the glustershd logs from all three nodes?<br></div></div></div></div></div></div></blockquote><div><br></div><div>Here are ccgl1 and ccgl2. as previously mentioned ccgl3 third node was down from bad nic so no relevant logs would be on that node.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><div><div><br></div>2. Also, so far what we know is that the 'Operation not permitted' errors are on the main vm image itself and not its individual shards (ex deb61291-5176-4b81-8315-3f1cf8<wbr>e3534d). Could you do the following:<br></div>Get the inode number of .glusterfs/de/b6/deb61291-5176<wbr>-4b81-8315-3f1cf8e3534d (ls -li) from the first brick. I'll call this number INODE_NUMBER.<br></div>Execute `find . -inum INODE_NUMBER` from the brick root on first brick to print the hard links against the file in the prev step and share the output.<br></div></div></div></blockquote><div><div>[dgossage@ccgl1 ~]$ sudo ls -li /gluster1/BRICK1/1/.glusterfs/<wbr>de/b6/deb61291-5176-4b81-8315-<wbr>3f1cf8e3534d</div><div>16407 -rw-r--r--. 2 36 36 466 Jun 5 16:52 /gluster1/BRICK1/1/.glusterfs/<wbr>de/b6/deb61291-5176-4b81-8315-<wbr>3f1cf8e3534d</div><div>[dgossage@ccgl1 ~]$ cd /gluster1/BRICK1/1/</div><div>[dgossage@ccgl1 1]$ sudo find . -inum 16407</div><div>./7c73a8dd-a72e-4556-ac88-7f68<wbr>13131e64/dom_md/metadata</div><div>./.glusterfs/de/b6/deb61291-51<wbr>76-4b81-8315-3f1cf8e3534d</div></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><br></div>3. Did you delete any vms at any point before or after the upgrade?<br></div></div></blockquote><div><br></div><div>Immediately before or after on same day pretty sure I deleted nothing. During week prior I deleted a few dev vm's that were never setup and some the week after upgrade as I was preparing for moving disks off and on storage to get them sharded and felt it would be easier to just recreate some disks that had no data yet rather than move them off and on later. 
</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div>-Krutika<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jul 25, 2016 at 11:30 PM, David Gossage <span dir="ltr"><<a href="mailto:dgossage@carouselchecks.com" target="_blank">dgossage@carouselchecks.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><span><div><div><div dir="ltr"><div>On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay <span dir="ltr"><<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>></span> wrote:<br></div></div></div></div></span><div class="gmail_quote"><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div>OK, could you try the following:<br><br></div>i. Set network.remote-dio to off<br></div> # gluster volume set <VOL> network.remote-dio off<br><br></div>ii. Set performance.strict-o-direct to on<br></div><div> # gluster volume set <VOL> performance.strict-o-direct on<br></div><div><br></div>iii. Stop the affected vm(s) and start again<br><br></div>and tell me if you notice any improvement?<br><br></div></div></blockquote><div><br></div></span><div>Previous instll I had issue with is still on gluster 3.7.11</div><div><br></div><div>My test install of ovirt 3.6.7 and gluster 3.7.13 with 3 bricks on a locak disk right now isn't allowing me to add the gluster storage at all.</div><div><br></div><div>Keep getting some type of UI error</div><div><br></div><div>2016-07-25 12:49:09,277 ERROR [org.ovirt.engine.ui.frontend.<wbr>server.gwt.OvirtRemoteLoggingS<wbr>ervice] (default task-33) [] Permutation name: 430985F23DFC1C8BE1C7FDD91EDAA7<wbr>85</div><div>2016-07-25 12:49:09,277 ERROR [org.ovirt.engine.ui.frontend.<wbr>server.gwt.OvirtRemoteLoggingS<wbr>ervice] (default task-33) [] Uncaught exception: : java.lang.ClassCastException</div><div> at Unknown.ps(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@3837" target="_blank">https://ccengine2.c<wbr>arouselchecks.local/ovirt-engi<wbr>ne/webadmin/430985F23DFC1C8BE1<wbr>C7FDD91EDAA785.cache.html@3837</a><wbr>) at Unknown.ts(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@20" target="_blank">https://ccengine2.c<wbr>arouselchecks.local/ovirt-engi<wbr>ne/webadmin/430985F23DFC1C8BE1<wbr>C7FDD91EDAA785.cache.html@20</a>) at Unknown.vs(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@18" target="_blank">https://ccengine2.c<wbr>arouselchecks.local/ovirt-engi<wbr>ne/webadmin/430985F23DFC1C8BE1<wbr>C7FDD91EDAA785.cache.html@18</a>) at Unknown.iJf(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@19" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@19</a>) at Unknown.Xab(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@48" 
target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@48</a>) at Unknown.P8o(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@4447" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@444<wbr>7</a>) at Unknown.jQr(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@21" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@21</a>) at Unknown.A8o(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@51" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@51</a>) at Unknown.u8o(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@101" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@101</a><wbr>) at Unknown.Eap(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@10718" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@107<wbr>18</a>) at Unknown.p8n(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@161" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@161</a><wbr>) at Unknown.Cao(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@31" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@31</a>) at Unknown.Bap(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@10469" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@104<wbr>69</a>) at Unknown.kRn(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@49" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@49</a>) at Unknown.nRn(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@438" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@438</a><wbr>) at Unknown.eVn(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@40" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@40</a>) at Unknown.hVn(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@25827" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@258<wbr>27</a>) at Unknown.MTn(<a 
href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@25" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@25</a>) at Unknown.PTn(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@24052" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@240<wbr>52</a>) at Unknown.KJe(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@21125" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@211<wbr>25</a>) at Unknown.Izk(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@10384" target="_blank">https://ccengine2.<wbr>carouselchecks.local/ovirt-eng<wbr>ine/webadmin/430985F23DFC1C8BE<wbr>1C7FDD91EDAA785.cache.html@103<wbr>84</a>) at Unknown.P3(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@137" target="_blank">https://ccengine2.c<wbr>arouselchecks.local/ovirt-engi<wbr>ne/webadmin/430985F23DFC1C8BE1<wbr>C7FDD91EDAA785.cache.html@137</a>) at Unknown.g4(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@8271" target="_blank">https://ccengine2.c<wbr>arouselchecks.local/ovirt-engi<wbr>ne/webadmin/430985F23DFC1C8BE1<wbr>C7FDD91EDAA785.cache.html@8271</a><wbr>) at Unknown.<anonymous>(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@65" target="_blank">https://cc<wbr>engine2.carouselchecks.local/o<wbr>virt-engine/webadmin/430985F23<wbr>DFC1C8BE1C7FDD91EDAA785.cache.<wbr>html@65</a>) at Unknown._t(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@29" target="_blank">https://ccengine2.c<wbr>arouselchecks.local/ovirt-engi<wbr>ne/webadmin/430985F23DFC1C8BE1<wbr>C7FDD91EDAA785.cache.html@29</a>) at Unknown.du(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@57" target="_blank">https://ccengine2.c<wbr>arouselchecks.local/ovirt-engi<wbr>ne/webadmin/430985F23DFC1C8BE1<wbr>C7FDD91EDAA785.cache.html@57</a>) at Unknown.<anonymous>(<a href="https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@54" target="_blank">https://cc<wbr>engine2.carouselchecks.local/o<wbr>virt-engine/webadmin/430985F23<wbr>DFC1C8BE1C7FDD91EDAA785.cache.<wbr>html@54</a>)</div><div><div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div></div>-Krutika<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jul 25, 2016 at 4:57 PM, Samuli Heinonen <span dir="ltr"><<a href="mailto:samppah@neutraali.net" target="_blank">samppah@neutraali.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">Hi,<br>
<span><br>
> On 25 Jul 2016, at 12:34, David Gossage <<a href="mailto:dgossage@carouselchecks.com" target="_blank">dgossage@carouselchecks.com</a>> wrote:<br>
><br>
> On Mon, Jul 25, 2016 at 1:01 AM, Krutika Dhananjay <<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>> wrote:<br>
> Hi,<br>
><br>
> Thanks for the logs. So I have identified one issue from the logs for which the fix is this: <a href="http://review.gluster.org/#/c/14669/" rel="noreferrer" target="_blank">http://review.gluster.org/#/c/<wbr>14669/</a>. Because of a bug in the code, ENOENT was getting converted to EPERM and being propagated up the stack causing the reads to bail out early with 'Operation not permitted' errors.<br>
> I still need to find out two things:<br>
> i) why there was a readv() sent on a non-existent (ENOENT) file (this is important since some of the other users have not faced or reported this issue on gluster-users with 3.7.13)<br>
> ii) need to see if there's a way to work around this issue.<br>
><br>
> Do you mind sharing the steps that need to be executed to run into this issue? This is so that we can apply our patches, test, and ensure they fix the problem.<br>
<br>
<br>
</span>Unfortunately I can’t test this right away nor give exact steps on how to test it. The following is just a theory, so please correct me if you see any mistakes.<br>
<br>
oVirt uses cache=none for VMs by default, which requires direct I/O. oVirt also uses dd with iflag=direct to check that the storage supports direct I/O. The problems show up with GlusterFS when sharding is enabled and the bricks run on ZFS on Linux. Everything seems to be fine with GlusterFS 3.7.11, and the problems exist at least with versions 3.7.12 and 3.7.13. There have been some posts saying that GlusterFS 3.8.x is also affected.<br>
<br>
Steps to reproduce:<br>
1. Sharded file is created with GlusterFS 3.7.11. Everything works ok.<br>
2. GlusterFS is upgraded to 3.7.12+<br>
3. The sharded file can no longer be read or written with direct I/O enabled. (I.e. the command oVirt uses to check the storage connection: "dd if=/rhev/data-center/00000001-0001-0001-0001-0000000002b6/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000")<br>
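(If it helps for reproducing this by hand outside of oVirt, a minimal check along the same lines would be the following, assuming the volume is FUSE-mounted at a placeholder path /mnt/test: a direct-I/O read of an existing sharded image, which is what reportedly fails with 'Operation not permitted' on the affected versions, and a direct-I/O write of a new file to see whether writes are hit as well:)<br>
 # dd if=/mnt/test/path/to/vm-image of=/dev/null iflag=direct bs=1M count=1<br>
 # dd if=/dev/zero of=/mnt/test/ddtest oflag=direct bs=1M count=16<br>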
<br>
Please let me know if you need more information.<br>
<span><font color="#888888"><br>
-samuli<br>
</font></span><div><div><br>
> Well, after the gluster upgrade all I did was start the ovirt hosts up, which launched and started their ha-agent and broker processes. I don't believe I started getting any errors until it mounted GLUSTER1. I had enabled sharding but had no sharded disk images yet; not sure if the check for shards would have caused that. Unfortunately I can't just update this cluster and try to see what caused it, as it has some VMs users expect to be available in a few hours.<br>
><br>
> I can see if I can get my test setup to recreate it. I think I'll need to deactivate the data center so I can detach the storage that's on xfs and attach the one that's on zfs with sharding enabled. My test setup is 3 bricks on the same local machine, with 3 different volumes, but I think I'm running into a sanlock issue or something, as it won't mount more than one volume that was created locally.<br>
><br>
><br>
> -Krutika<br>
><br>
> On Fri, Jul 22, 2016 at 7:17 PM, David Gossage <<a href="mailto:dgossage@carouselchecks.com" target="_blank">dgossage@carouselchecks.com</a>> wrote:<br>
> Trimmed the logs down to about when I was shutting down the ovirt servers for updates, which was 14:30 UTC 2016-07-09.<br>
><br>
> Pre-update settings were<br>
><br>
> Volume Name: GLUSTER1<br>
> Type: Replicate<br>
> Volume ID: 167b8e57-28c3-447a-95cc-8410cbdf3f7f<br>
> Status: Started<br>
> Number of Bricks: 1 x 3 = 3<br>
> Transport-type: tcp<br>
> Bricks:<br>
> Brick1: ccgl1.gl.local:/gluster1/BRICK1/1<br>
> Brick2: ccgl2.gl.local:/gluster1/BRICK1/1<br>
> Brick3: ccgl3.gl.local:/gluster1/BRICK1/1<br>
> Options Reconfigured:<br>
> performance.readdir-ahead: on<br>
> storage.owner-uid: 36<br>
> storage.owner-gid: 36<br>
> performance.quick-read: off<br>
> performance.read-ahead: off<br>
> performance.io-cache: off<br>
> performance.stat-prefetch: off<br>
> cluster.eager-lock: enable<br>
> network.remote-dio: enable<br>
> cluster.quorum-type: auto<br>
> cluster.server-quorum-type: server<br>
> server.allow-insecure: on<br>
> cluster.self-heal-window-size: 1024<br>
> cluster.background-self-heal-count: 16<br>
> performance.strict-write-ordering: off<br>
> nfs.disable: on<br>
> nfs.addr-namelookup: off<br>
> nfs.enable-ino32: off<br>
><br>
> At the time of the updates ccgl3 was offline due to a bad NIC on the server, but it had been so for about a week with no issues in the volume.<br>
><br>
> Shortly after the update I added these settings to enable sharding (the equivalent volume-set commands are sketched below), but did not yet have any sharded VM images.<br>
> features.shard-block-size: 64MB<br>
> features.shard: on<br>
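><br>
> (Assuming those were applied with the usual volume-set commands, that would have been something like the following, run once from any node in the pool:)<br>
>  # gluster volume set GLUSTER1 features.shard on<br>
>  # gluster volume set GLUSTER1 features.shard-block-size 64MB<br>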
><br>
><br>
><br>
><br>
> David Gossage<br>
> Carousel Checks Inc. | System Administrator<br>
> Office <a href="tel:708.613.2284" value="+17086132284" target="_blank">708.613.2284</a><br>
><br>
> On Fri, Jul 22, 2016 at 5:00 AM, Krutika Dhananjay <<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>> wrote:<br>
> Hi David,<br>
><br>
> Could you also share the brick logs from the affected volume? They're located at /var/log/glusterfs/bricks/<hyphenated-path-to-the-brick-directory>.log.<br>
><br>
> Also, could you share the volume configuration (output of `gluster volume info <VOL>`) for the affected volume(s), as it was at the time you actually saw this issue?<br>
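> (For the bricks in this volume, e.g. /gluster1/BRICK1/1, that hyphenated log name would presumably be /var/log/glusterfs/bricks/gluster1-BRICK1-1.log on each node.)<br>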
><br>
> -Krutika<br>
><br>
><br>
><br>
><br>
> On Thu, Jul 21, 2016 at 11:23 PM, David Gossage <<a href="mailto:dgossage@carouselchecks.com" target="_blank">dgossage@carouselchecks.com</a>> wrote:<br>
> On Thu, Jul 21, 2016 at 11:47 AM, Scott <<a href="mailto:romracer@gmail.com" target="_blank">romracer@gmail.com</a>> wrote:<br>
> Hi David,<br>
><br>
> My backend storage is ZFS.<br>
><br>
> I thought about moving from FUSE to NFS mounts for my Gluster volumes to help test, but since I use hosted engine this would be a real pain. It's difficult to modify the storage domain type/path in hosted-engine.conf, and I don't want to go through the process of re-deploying hosted engine.<br>
><br>
><br>
> I found this<br>
><br>
> <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1347553" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/sh<wbr>ow_bug.cgi?id=1347553</a><br>
><br>
> Not sure if related.<br>
><br>
> But I also have a zfs backend. Another user on the gluster mailing list with a zfs backend had similar issues; she used proxmox and got it working by changing the disk to writeback cache, I think it was.<br>
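><br>
> (For context, "writeback cache" there refers to the qemu disk cache mode. Roughly, cache=none opens the image with O_DIRECT while cache=writeback does not, e.g. -drive file=disk.img,format=raw,cache=writeback instead of cache=none on the qemu command line, so the direct-I/O path that seems to trigger this bug is avoided. In oVirt the cache mode is normally set by the engine, so this is only an illustration, not a recommended change.)<br>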
><br>
> I also use hosted engine, but I actually run my gluster volume for HE on LVM with xfs, separate from zfs, and if I recall it did not have the issues my gluster on zfs did. I'm wondering now if the issue was zfs settings.<br>
><br>
> Hopefully I should have a test machine up soon that I can play around with more.<br>
><br>
> Scott<br>
><br>
> On Thu, Jul 21, 2016 at 11:36 AM David Gossage <<a href="mailto:dgossage@carouselchecks.com" target="_blank">dgossage@carouselchecks.com</a>> wrote:<br>
> What back end storage do you run gluster on? xfs/zfs/ext4 etc?<br>
><br>
> David Gossage<br>
> Carousel Checks Inc. | System Administrator<br>
> Office <a href="tel:708.613.2284" value="+17086132284" target="_blank">708.613.2284</a><br>
><br>
> On Thu, Jul 21, 2016 at 8:18 AM, Scott <<a href="mailto:romracer@gmail.com" target="_blank">romracer@gmail.com</a>> wrote:<br>
> I get similar problems with oVirt 4.0.1 and hosted engine. After upgrading all my hosts to Gluster 3.7.13 (client and server), I get the following:<br>
><br>
> $ sudo hosted-engine --set-maintenance --mode=none<br>
> Traceback (most recent call last):<br>
> File "/usr/lib64/python2.7/runpy.py<wbr>", line 162, in _run_module_as_main<br>
> "__main__", fname, loader, pkg_name)<br>
> File "/usr/lib64/python2.7/runpy.py<wbr>", line 72, in _run_code<br>
> exec code in run_globals<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/ovirt_hosted_engine_setup/<wbr>set_maintenance.py", line 73, in <module><br>
> if not maintenance.set_mode(sys.argv[<wbr>1]):<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/ovirt_hosted_engine_setup/<wbr>set_maintenance.py", line 61, in set_mode<br>
> value=m_global,<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/ovirt_hosted_engine_ha/cli<wbr>ent/client.py", line 259, in set_maintenance_mode<br>
> str(value))<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/ovirt_hosted_engine_ha/cli<wbr>ent/client.py", line 204, in set_global_md_flag<br>
> all_stats = broker.get_stats_from_storage(<wbr>service)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/ovirt_hosted_engine_ha/lib<wbr>/brokerlink.py", line 232, in get_stats_from_storage<br>
> result = self._checked_communicate(requ<wbr>est)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/ovirt_hosted_engine_ha/lib<wbr>/brokerlink.py", line 260, in _checked_communicate<br>
> .format(message or response))<br>
> ovirt_hosted_engine_ha.lib.exc<wbr>eptions.RequestError: Request failed: failed to read metadata: [Errno 1] Operation not permitted<br>
><br>
> If I only upgrade one host, then things will continue to work but my nodes are constantly healing shards. My logs are also flooded with:<br>
><br>
> [2016-07-21 13:15:14.137734] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk] 0-glusterfs-fuse: 274714: READ => -1 gfid=4<br>
> 41f2789-f6b1-4918-a280-1b9905a<wbr>11429 fd=0x7f19bc0041d0 (Operation not permitted)<br>
> The message "W [MSGID: 114031] [client-rpc-fops.c:3050:client<wbr>3_3_readv_cbk] 0-data-client-0: remote operation failed [Operation not permitted]" repeated 6 times between [2016-07-21 13:13:24.134985] and [2016-07-21 13:15:04.132226]<br>
> The message "W [MSGID: 114031] [client-rpc-fops.c:3050:client<wbr>3_3_readv_cbk] 0-data-client-1: remote operation failed [Operation not permitted]" repeated 8 times between [2016-07-21 13:13:34.133116] and [2016-07-21 13:15:14.137178]<br>
> The message "W [MSGID: 114031] [client-rpc-fops.c:3050:client<wbr>3_3_readv_cbk] 0-data-client-2: remote operation failed [Operation not permitted]" repeated 7 times between [2016-07-21 13:13:24.135071] and [2016-07-21 13:15:14.137666]<br>
> [2016-07-21 13:15:24.134647] W [MSGID: 114031] [client-rpc-fops.c:3050:client<wbr>3_3_readv_cbk] 0-data-client-0: remote operation failed [Operation not permitted]<br>
> [2016-07-21 13:15:24.134764] W [MSGID: 114031] [client-rpc-fops.c:3050:client<wbr>3_3_readv_cbk] 0-data-client-2: remote operation failed [Operation not permitted]<br>
> [2016-07-21 13:15:24.134793] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk] 0-glusterfs-fuse: 274741: READ => -1 gfid=441f2789-f6b1-4918-a280-1<wbr>b9905a11429 fd=0x7f19bc0038f4 (Operation not permitted)<br>
> [2016-07-21 13:15:34.135413] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk] 0-glusterfs-fuse: 274756: READ => -1 gfid=441f2789-f6b1-4918-a280-1<wbr>b9905a11429 fd=0x7f19bc0041d0 (Operation not permitted)<br>
> [2016-07-21 13:15:44.141062] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk] 0-glusterfs-fuse: 274818: READ => -1 gfid=441f2789-f6b1-4918-a280-1<wbr>b9905a11429 fd=0x7f19bc0038f4 (Operation not permitted)<br>
> [2016-07-21 13:15:54.133582] W [MSGID: 114031] [client-rpc-fops.c:3050:client<wbr>3_3_readv_cbk] 0-data-client-1: remote operation failed [Operation not permitted]<br>
> [2016-07-21 13:15:54.133629] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk] 0-glusterfs-fuse: 274853: READ => -1 gfid=441f2789-f6b1-4918-a280-1<wbr>b9905a11429 fd=0x7f19bc0036d8 (Operation not permitted)<br>
> [2016-07-21 13:16:04.133666] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk] 0-glusterfs-fuse: 274879: READ => -1 gfid=441f2789-f6b1-4918-a280-1<wbr>b9905a11429 fd=0x7f19bc0041d0 (Operation not permitted)<br>
> [2016-07-21 13:16:14.134954] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk] 0-glusterfs-fuse: 274894: READ => -1 gfid=441f2789-f6b1-4918-a280-1<wbr>b9905a11429 fd=0x7f19bc0036d8 (Operation not permitted)<br>
><br>
> Scott<br>
><br>
><br>
> On Thu, Jul 21, 2016 at 6:57 AM Frank Rothenstein <<a href="mailto:f.rothenstein@bodden-kliniken.de" target="_blank">f.rothenstein@bodden-kliniken<wbr>.de</a>> wrote:<br>
> Hey David,<br>
><br>
> I have the very same problem on my test cluster, despite running ovirt 4.0.<br>
> If you access your volumes via NFS all is fine; the problem is FUSE. I stayed on 3.7.13 but have no solution yet, so for now I use NFS.<br>
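><br>
> (As a rough sketch of the two access paths being compared, assuming a placeholder volume VOL on host gl1. The FUSE mount, which is what oVirt normally uses for GlusterFS storage domains:)<br>
>  # mount -t glusterfs gl1:/VOL /mnt/fuse<br>
> (versus an NFSv3 mount through gluster's built-in gNFS server, which requires nfs.disable to be off on the volume:)<br>
>  # mount -t nfs -o vers=3,nolock gl1:/VOL /mnt/nfs<br>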
><br>
> Frank<br>
><br>
> Am Donnerstag, den 21.07.2016, 04:28 -0500 schrieb David Gossage:<br>
>> Is anyone running one of the recent 3.6.x lines with gluster 3.7.13? I am looking to upgrade gluster from 3.7.11 to 3.7.13 for some bug fixes, but have been told by users on the gluster mailing list that, due to some gluster changes, I'd need to change the disk parameters to use writeback cache. Something to do with aio support being removed.<br>
>><br>
>> I believe this could be done with custom parameters? But I believe the storage tests are done using dd, so would they fail with the current settings then? On my last upgrade to 3.7.13 I had to roll back to 3.7.11 due to stability issues where the gluster storage would go into the down state and always show N/A as space available/used, even though the hosts still saw the storage and VMs were running on it on all 3 hosts.<br>
>><br>
>> Saw a lot of messages like these, which went away once the gluster rollback finished:<br>
>><br>
>> [2016-07-09 15:27:46.935694] I [fuse-bridge.c:4083:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22<br>
>> [2016-07-09 15:27:49.555466] W [MSGID: 114031] [client-rpc-fops.c:3050:client<wbr>3_3_readv_cbk] 0-GLUSTER1-client-1: remote operation failed [Operation not permitted]<br>
>> [2016-07-09 15:27:49.556574] W [MSGID: 114031] [client-rpc-fops.c:3050:client<wbr>3_3_readv_cbk] 0-GLUSTER1-client-0: remote operation failed [Operation not permitted]<br>
>> [2016-07-09 15:27:49.556659] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk] 0-glusterfs-fuse: 80: READ => -1 gfid=deb61291-5176-4b81-8315-3<wbr>f1cf8e3534d fd=0x7f5224002f68 (Operation not permitted)<br>
>> [2016-07-09 15:27:59.612477] W [MSGID: 114031] [client-rpc-fops.c:3050:client<wbr>3_3_readv_cbk] 0-GLUSTER1-client-1: remote operation failed [Operation not permitted]<br>
>> [2016-07-09 15:27:59.613700] W [MSGID: 114031] [client-rpc-fops.c:3050:client<wbr>3_3_readv_cbk] 0-GLUSTER1-client-0: remote operation failed [Operation not permitted]<br>
>> [2016-07-09 15:27:59.613781] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk] 0-glusterfs-fuse: 168: READ => -1 gfid=deb61291-5176-4b81-8315-3<wbr>f1cf8e3534d fd=0x7f5224002f68 (Operation not permitted)<br>
>><br>
>> David Gossage<br>
>> Carousel Checks Inc. | System Administrator<br>
>> Office <a href="tel:708.613.2284" value="+17086132284" target="_blank">708.613.2284</a><br>
><br>
><br>
><br>
><br>
><br>
</div></div><span>
><br>
><br>
><br>
><br>
><br>
</span><div><div>
<br>
</div></div></blockquote></div><br></div>
</blockquote></div></div></div><br></div></div>
</blockquote></div><br></div>
</blockquote></div><br></div></div>
</blockquote></div><br></div></div></div>
<br>______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
<br></blockquote></div><br></div>
</blockquote></div><br></div></div>
</blockquote></div><br></div></div>