
On 03/03/2016 02:53 PM, paf1@email.cz wrote:
> This is replica 2 only, with the following settings:
> Options Reconfigured:
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: fixed

Not sure why you have set this option. Ideally, replica 3 or arbiter volumes are recommended for gluster+ovirt use; (client) quorum does not make sense for a 2-node setup. I have a detailed write-up here which explains things: http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
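As a concrete illustration of that recommendation, an arbiter volume is created by adding a third (arbiter) brick at volume-creation time; the volume name, host names and brick paths below are hypothetical:

# gluster volume create testvol replica 3 arbiter 1 host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/arbiter
# gluster volume start testvol

The arbiter brick stores only file names and metadata, so it breaks the tie between the two data bricks without needing the capacity of a full data brick.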
> cluster.server-quorum-type: none
> storage.owner-uid: 36
> storage.owner-gid: 36
> cluster.quorum-count: 1
> cluster.self-heal-daemon: enable
> If I create the "ids" file manually (e.g. "sanlock direct init -s 3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0") on both bricks, vdsm writes to only one of them (the one with 2 links = correct).
> The "ids" file has the correct permissions, owner and size on both bricks.
> brick 1: -rw-rw---- 1 vdsm kvm 1048576 Mar 2 18:56 /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids - not updated
Okay, so this one has link count = 1, which means the .glusterfs hard link is missing. Can you try deleting this file from the brick and performing a stat on the file from the mount? That should heal it (i.e. recreate it) on this brick from the other brick, with the appropriate .glusterfs hard link.
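A minimal sketch of that sequence, using the brick path from this thread and a hypothetical client mount point /mnt/glustervol:

# rm /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
(on the bad brick only)
# stat /mnt/glustervol/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
(through the client mount, to trigger the heal)
# stat -c %h /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
(back on the brick; should now print 2)

Once healed, the lockspace content can also be sanity-checked with sanlock itself:

# sanlock direct dump /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids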
> brick 2: -rw-rw---- 2 vdsm kvm 1048576 Mar 3 10:16 /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids - is continually updated
> What happens when I restart vdsm? Will the oVirt storage domains go to a "disabled" state, i.e. disconnect the VMs' storage?
No idea on this one... -Ravi
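For anyone testing this, one way to watch what happens to the lockspaces while vdsm restarts is to poll sanlock's status from the host:

# sanlock client status

which lists the lockspaces and resources sanlock currently holds on that host.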
> Regards, Pa.
> On 3.3.2016 02:02, Ravishankar N wrote:
>> On 03/03/2016 12:43 AM, Nir Soffer wrote:
>>>> PS: # find /STORAGES -samefile /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -print
>>>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>>>> = the "shadow file" in the ".glusterfs" dir is missing.
>>>> How can I fix it? - online!
>>> Ravi?
>> Is this the case in all 3 bricks of the replica?
>> BTW, you can just stat the file on the brick and see the link count (it must be 2) instead of running the more expensive find command.
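For example, on a brick (brick path from this thread):

# stat -c %h /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids

prints just the hard-link count: 2 is healthy, 1 means the .glusterfs link is gone. The expected .glusterfs hard link can also be located via the file's GFID:

# getfattr -n trusted.gfid -e hex /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids

The hard link lives at <brick>/.glusterfs/aa/bb/<gfid>, where aa and bb are the first two byte pairs of the GFID.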