OK,
I will extend replica 2 to replica 3 (arbiter) ASAP.
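
Something like this, I assume (a sketch; VOLNAME and
node3:/bricks/arbiter are placeholders, and converting an existing
volume needs a GlusterFS release whose add-brick supports the
"arbiter" keyword):

# gluster volume add-brick VOLNAME replica 3 arbiter 1 node3:/bricks/arbiter
# gluster volume info VOLNAME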

If the stale "ids" file (the copy that is not being updated) is
deleted directly on the brick, self-heal of this file does not work.
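
For the record, the commands I would expect to trigger and inspect
the heal (a sketch; VOLNAME is again a placeholder):

# gluster volume heal VOLNAME full
# gluster volume heal VOLNAME info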

regs. Pa.

On 3.3.2016 12:19, Nir Soffer wrote:
> On Thu, Mar 3, 2016 at 11:23 AM, paf1@email.cz <paf1@email.cz> wrote:
>
>> This is replica 2 only, with the following settings:
>
> Replica 2 is not supported. Even if you "fix" this now, you will have
> the same issue soon.
>
>> Options Reconfigured:
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> cluster.eager-lock: enable
>> network.remote-dio: enable
>> cluster.quorum-type: fixed
>> cluster.server-quorum-type: none
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> cluster.quorum-count: 1
>> cluster.self-heal-daemon: enable
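
(For reference, options like these are set one at a time per volume;
a sketch, with VOLNAME as a placeholder for the volume name:

# gluster volume set VOLNAME cluster.quorum-type fixed
# gluster volume set VOLNAME cluster.quorum-count 1

"gluster volume info VOLNAME" then lists them under "Options
Reconfigured".)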

>> If I create the "ids" file manually on both bricks (e.g. "sanlock
>> direct init -s
>> 3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0"),
>> vdsm writes to only one of them (the one with 2 links = the correct
>> one).
>> The "ids" file has the correct permissions, owner, and size on both
>> bricks:
>> brick 1: -rw-rw---- 1 vdsm kvm 1048576 Mar 2 18:56
>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>> - not updated
>> brick 2: -rw-rw---- 2 vdsm kvm 1048576 Mar 3 10:16
>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>> - continually updated
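
(One way to check which copy sanlock is actually renewing - a sketch:
"sanlock direct dump" reads the delta leases in the file and prints
their timestamps; run it against the copy on each brick:

# sanlock direct dump /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
)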

>> What happens when I restart vdsm? Will the oVirt storage domains go
>> to a "disabled" state??? i.e. will the VMs' storage be disconnected?
>
> Nothing will happen; the VMs will continue to run normally.
>
> On block storage, stopping vdsm will prevent automatic extension of
> VM disks when a disk becomes too full, but on file-based storage
> (like gluster) there is no such issue.
>
>> regs. Pa.
>>
>> On 3.3.2016 02:02, Ravishankar N wrote:
>>> On 03/03/2016 12:43 AM, Nir Soffer wrote:
>>>
>>>>> PS: # find /STORAGES -samefile
>>>>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>>>>> -print
>>>>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>>>>> = the "shadow file" in the ".glusterfs" directory is missing.
>>>>> How can I fix it?? - online!
>>>>
>>>> Ravi?
>>>
>>> Is this the case on all 3 bricks of the replica?
>>> BTW, you can just stat the file on the brick and check the link
>>> count (it must be 2) instead of running the more expensive find
>>> command.
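
(For reference, a sketch of that check, run as root directly on the
brick; stat's %h format prints the hard-link count, and the gfid
shown by getfattr tells you which ".glusterfs" entry should hold the
second link:

# stat -c '%h %n' /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
# getfattr -n trusted.gfid -e hex /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids

For a gfid aabbccdd-..., the second hard link is expected at
<brick>/.glusterfs/aa/bb/aabbccdd-... on that same brick.)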