<div class="moz-cite-prefix">On 03/02/2016 03:45 AM, Nir Soffer
wrote:<br>
</div>
<blockquote
cite="mid:CAMRbyyu9gwPfVpPxpDa4_gKWyXq1PavTm2V2rG2cU0AvE=JJPA@mail.gmail.com"
type="cite">
<div dir="ltr">On Tue, Mar 1, 2016 at 10:51 PM, <a
moz-do-not-send="true" href="mailto:paf1@email.cz"><a class="moz-txt-link-abbreviated" href="mailto:paf1@email.cz">paf1@email.cz</a></a>
<<a moz-do-not-send="true" href="mailto:paf1@email.cz">paf1@email.cz</a>>
wrote:<br>
> >
> > Hi,
> > requested output:
> >
> > # ls -lh /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md
> >
> > /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md:
> > total 2.1M
> > -rw-rw---- 1 vdsm kvm 1.0M Mar  1 21:28 ids        <-- good
> > -rw-rw---- 1 vdsm kvm  16M Nov  7 22:16 inbox
> > -rw-rw---- 1 vdsm kvm 2.0M Nov  7 22:17 leases
> > -rw-r--r-- 1 vdsm kvm  335 Nov  7 22:17 metadata
> > -rw-rw---- 1 vdsm kvm  16M Nov  7 22:16 outbox
> >
> > /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md:
> > total 1.1M
> > -rw-r--r-- 1 vdsm kvm    0 Feb 24 07:41 ids        <-- bad (sanlock cannot write, other can read)
> > -rw-rw---- 1 vdsm kvm  16M Nov  7 00:14 inbox
> > -rw-rw---- 1 vdsm kvm 2.0M Nov  7 03:56 leases
> > -rw-r--r-- 1 vdsm kvm  333 Nov  7 03:56 metadata
> > -rw-rw---- 1 vdsm kvm  16M Nov  7 00:14 outbox
> >
> > /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md:
> > total 1.1M
> > -rw-r--r-- 1 vdsm kvm    0 Feb 24 07:43 ids        <-- bad (sanlock cannot write, other can read)
> > -rw-rw---- 1 vdsm kvm  16M Nov  7 00:15 inbox
> > -rw-rw---- 1 vdsm kvm 2.0M Nov  7 22:14 leases
> > -rw-r--r-- 1 vdsm kvm  333 Nov  7 22:14 metadata
> > -rw-rw---- 1 vdsm kvm  16M Nov  7 00:15 outbox
> >
> > /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md:
> > total 1.1M
> > -rw-r--r-- 1 vdsm kvm    0 Feb 24 07:43 ids        <-- bad (sanlock cannot write, other can read)
> > -rw-rw---- 1 vdsm kvm  16M Feb 23 22:51 inbox
> > -rw-rw---- 1 vdsm kvm 2.0M Feb 23 23:12 leases
> > -rw-r--r-- 1 vdsm kvm  998 Feb 25 00:35 metadata
> > -rw-rw---- 1 vdsm kvm  16M Nov  7 00:16 outbox
> >
> > /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md:
> > total 1.1M
> > -rw-r--r-- 1 vdsm kvm    0 Feb 24 07:44 ids        <-- bad (sanlock cannot write, other can read)
> > -rw-rw---- 1 vdsm kvm  16M Nov  7 00:17 inbox
> > -rw-rw---- 1 vdsm kvm 2.0M Nov  7 00:18 leases
> > -rw-r--r-- 1 vdsm kvm  333 Nov  7 00:18 metadata
> > -rw-rw---- 1 vdsm kvm  16M Nov  7 00:17 outbox
> >
> > /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md:
> > total 1.1M
> > -rw-rw-r-- 1 vdsm kvm    0 Feb 24 07:32 ids        <-- bad (other can read)
> > -rw-rw---- 1 vdsm kvm  16M Nov  7 22:18 inbox
> > -rw-rw---- 1 vdsm kvm 2.0M Nov  7 22:18 leases
> > -rw-r--r-- 1 vdsm kvm  333 Nov  7 22:18 metadata
> > -rw-rw---- 1 vdsm kvm  16M Nov  7 22:18 outbox
> >
> > /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md:
> > total 3.0M
> > -rw-rw-r-- 1 vdsm kvm 1.0M Mar  1 21:28 ids        <-- bad (other can read)
> > -rw-rw---- 1 vdsm kvm  16M Feb 25 00:42 inbox
> > -rw-rw---- 1 vdsm kvm 2.0M Feb 25 00:44 leases
> > -rw-r--r-- 1 vdsm kvm  997 Feb 24 02:46 metadata
> > -rw-rw---- 1 vdsm kvm  16M Feb 25 00:44 outbox
> >
> > /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P3/ef010d08-aed1-41c4-ba9a-e6d9bdecb4b4/dom_md:
> > total 2.1M
> > -rw-r--r-- 1 vdsm kvm    0 Feb 24 07:34 ids        <-- bad (sanlock cannot write, other can read)
> > -rw-rw---- 1 vdsm kvm  16M Feb 23 22:35 inbox
> > -rw-rw---- 1 vdsm kvm 2.0M Feb 23 22:38 leases
> > -rw-r--r-- 1 vdsm kvm 1.1K Feb 24 19:07 metadata
> > -rw-rw---- 1 vdsm kvm  16M Feb 23 22:27 outbox
> >
> > /rhev/data-center/mnt/glusterSD/localhost:_2KVM12__P4/300e9ac8-3c2f-4703-9bb1-1df2130c7c97/dom_md:
> > total 3.0M
> > -rw-rw-r-- 1 vdsm kvm 1.0M Mar  1 21:28 ids        <-- bad (other can read)
> > -rw-rw-r-- 1 vdsm kvm  16M Nov  6 23:50 inbox      <-- bad (other can read)
> > -rw-rw-r-- 1 vdsm kvm 2.0M Nov  6 23:51 leases     <-- bad (other can read)
> > -rw-rw-r-- 1 vdsm kvm  734 Nov  7 02:13 metadata   <-- bad (group can write, other can read)
> > -rw-rw-r-- 1 vdsm kvm  16M Nov  6 16:55 outbox     <-- bad (other can read)
> >
> > /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P5/1ca56b45-701e-4c22-9f59-3aebea4d8477/dom_md:
> > total 1.1M
> > -rw-rw-r-- 1 vdsm kvm    0 Feb 24 07:35 ids        <-- bad (other can read)
> > -rw-rw-r-- 1 vdsm kvm  16M Feb 24 01:06 inbox
> > -rw-rw-r-- 1 vdsm kvm 2.0M Feb 24 02:44 leases
> > -rw-r--r-- 1 vdsm kvm  998 Feb 24 19:07 metadata
> > -rw-rw-r-- 1 vdsm kvm  16M Nov  7 22:20 outbox
>
> It should look like this:
>
> -rw-rw----. 1 vdsm kvm 1.0M Mar  1 23:36 ids
> -rw-rw----. 1 vdsm kvm 2.0M Mar  1 23:35 leases
> -rw-r--r--. 1 vdsm kvm  353 Mar  1 23:35 metadata
> -rw-rw----. 1 vdsm kvm  16M Mar  1 23:34 outbox
> -rw-rw----. 1 vdsm kvm  16M Mar  1 23:34 inbox
>
> This explains the EACCES error.
>
> You can start by fixing the permissions manually; you can do this online.
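>
> For example, something like this, run as root on one host over the
> glusterfs mount, should restore the expected modes (a sketch based on
> the listing above - verify the globs match only your storage domains
> before running):
>
>     cd /rhev/data-center/mnt/glusterSD
>     # ids, leases, inbox and outbox should be 0660 (rw-rw----)
>     chmod 0660 localhost:*/*/dom_md/{ids,leases,inbox,outbox}
>     # metadata should be 0644 (rw-r--r--)
>     chmod 0644 localhost:*/*/dom_md/metadata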
>
> > The ids files were generated by the "touch" command after deleting them
> > due to a "sanlock locking hang" gluster crash & reboot.
> > I expected that they would be filled automatically after the gluster
> > reboot (the shadow copy in the ".glusterfs" directory was deleted &
> > created empty too).
>
> I don't know about the gluster shadow copy; I would not play with
> gluster internals.
> Adding Sahina for advice.

Did you generate the ids file on the mount point?

Ravi, can you help here?

> > OK, it looks like sanlock can't work with an empty file or rewrite it.
> > Am I right?
>
> Yes, the files must be initialized before sanlock can use them.
>
> You can initialize the file like this:
>
>     sanlock direct init -s <sd_uuid>:0:repair/<sd_uuid>/dom_md/ids:0
>
> Taken from http://lists.ovirt.org/pipermail/users/2016-February/038046.html
>
> > The last point - about the "ids" workaround - this is an offline
> > procedure = VMs have to be moved off the volume to keep running, with
> > the volume in maintenance mode.
> > But this is not acceptable in the current situation, so the question
> > again: is it safe to do it online? (YES / NO)
>
> The ids file is accessed only by sanlock. I guess that you don't have a
> running SPM on this DC, since sanlock fails to acquire a host id, so you
> are pretty safe to fix the permissions and initialize the ids files.
>
> I would do this:
>
> 1. Stop engine, so it will not try to start vdsm
> 2. Stop vdsm on all hosts, so they do not try to acquire a host id with sanlock
>    This does not affect running VMs
> 3. Fix the permissions on the ids files, via the glusterfs mount
> 4. Initialize the ids files from one of the hosts, via the glusterfs mount
>    This should fix the ids files on all replicas
> 5. Start vdsm on all hosts
> 6. Start engine
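>
> In commands, that would be roughly (a sketch - assuming the systemd
> service names ovirt-engine and vdsmd on EL7, and engine running on its
> own machine; adjust to your setup):
>
>     # on the engine machine
>     systemctl stop ovirt-engine
>
>     # on every host
>     systemctl stop vdsmd
>
>     # on one host, via the glusterfs mount: fix the permissions (see the
>     # chmod example above), then initialize each bad ids file, e.g.
>     sanlock direct init -s <sd_uuid>:0:/rhev/data-center/mnt/glusterSD/<mount>/<sd_uuid>/dom_md/ids:0
>
>     # on every host
>     systemctl start vdsmd
>
>     # on the engine machine
>     systemctl start ovirt-engine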
>
> Engine will connect to all hosts, the hosts will connect to storage and
> try to acquire a host id.
> Then engine will start the SPM on one of the hosts, and your DC should
> come up.
>
> David, Sahina, can you confirm that this procedure is safe?

Yes, correcting from the mount point should fix it on all replicas.
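A quick way to verify is to check the file on each brick directly (a
sketch; the brick paths come from "gluster volume info <volname>"):

    # on each gluster node, for each brick of the volume
    ls -l <brick_path>/<sd_uuid>/dom_md/ids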

> Nir
>
> >
> > regs.
> > Pavel
> >
> > On 1.3.2016 18:38, Nir Soffer wrote:
> >
> > On Tue, Mar 1, 2016 at 5:07 PM, paf1@email.cz <paf1@email.cz> wrote:
> >>
> >> Hello, can anybody explain this error no. 13 (open file) in sanlock.log?
> >
> > This is EACCES.
> >
> > Can you share the output of:
> >
> >     ls -lh /rhev/data-center/mnt/<server>:<_path>/<sd_uuid>/dom_md
> >
> >>
> >> The size of the "ids" file is zero (0)
> >
> > This is how we create the ids file when initializing it.
> >
> > But then we use sanlock to initialize the ids file, and it should be
> > 1MiB after that.
> >
> > Was this ids file created by vdsm, or one you created yourself?
> >
> >> 2016-02-28 03:25:46+0100 269626 [1951]: open error -13 /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
> >> 2016-02-28 03:25:46+0100 269626 [1951]: s187985 open_disk /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids error -13
> >> 2016-02-28 03:25:56+0100 269636 [11304]: s187992 lockspace 7f52b697-c199-4f58-89aa-102d44327124:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids:0
> >>
> >> If the main problem is the zero file size, can I regenerate this file
> >> online securely, with no VM dependence?
> >
> > Yes, I think I already referred to the instructions for doing that in
> > a previous mail.
> >
> >> dist = RHEL - 7 - 2.1511
> >> kernel = 3.10.0 - 327.10.1.el7.x86_64
> >> KVM = 2.3.0 - 29.1.el7
> >> libvirt = libvirt-1.2.17-13.el7_2.3
> >> vdsm = vdsm-4.16.30-0.el7
> >> GlusterFS = glusterfs-3.7.8-1.el7
> >>
> >> regs.
> >> Pavel
> >>
> >> _______________________________________________
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users