Hi,
requested output:
# ls -lh /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md:
total 2,1M
-rw-rw---- 1 vdsm kvm 1,0M 1. bře 21.28 ids
-rw-rw---- 1 vdsm kvm 16M 7. lis 22.16 inbox
-rw-rw---- 1 vdsm kvm 2,0M 7. lis 22.17 leases
-rw-r--r-- 1 vdsm kvm 335 7. lis 22.17 metadata
-rw-rw---- 1 vdsm kvm 16M 7. lis 22.16 outbox
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md:
total 1,1M
-rw-r--r-- 1 vdsm kvm 0 24. úno 07.41 ids
-rw-rw---- 1 vdsm kvm 16M 7. lis 00.14 inbox
-rw-rw---- 1 vdsm kvm 2,0M 7. lis 03.56 leases
-rw-r--r-- 1 vdsm kvm 333 7. lis 03.56 metadata
-rw-rw---- 1 vdsm kvm 16M 7. lis 00.14 outbox
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md:
total 1,1M
-rw-r--r-- 1 vdsm kvm 0 24. úno 07.43 ids
-rw-rw---- 1 vdsm kvm 16M 7. lis 00.15 inbox
-rw-rw---- 1 vdsm kvm 2,0M 7. lis 22.14 leases
-rw-r--r-- 1 vdsm kvm 333 7. lis 22.14 metadata
-rw-rw---- 1 vdsm kvm 16M 7. lis 00.15 outbox
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md:
total 1,1M
-rw-r--r-- 1 vdsm kvm 0 24. úno 07.43 ids
-rw-rw---- 1 vdsm kvm 16M 23. úno 22.51 inbox
-rw-rw---- 1 vdsm kvm 2,0M 23. úno 23.12 leases
-rw-r--r-- 1 vdsm kvm 998 25. úno 00.35 metadata
-rw-rw---- 1 vdsm kvm 16M 7. lis 00.16 outbox
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md:
total 1,1M
-rw-r--r-- 1 vdsm kvm 0 24. úno 07.44 ids
-rw-rw---- 1 vdsm kvm 16M 7. lis 00.17 inbox
-rw-rw---- 1 vdsm kvm 2,0M 7. lis 00.18 leases
-rw-r--r-- 1 vdsm kvm 333 7. lis 00.18 metadata
-rw-rw---- 1 vdsm kvm 16M 7. lis 00.17 outbox
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md:
total 1,1M
-rw-rw-r-- 1 vdsm kvm 0 24. úno 07.32 ids
-rw-rw---- 1 vdsm kvm 16M 7. lis 22.18 inbox
-rw-rw---- 1 vdsm kvm 2,0M 7. lis 22.18 leases
-rw-r--r-- 1 vdsm kvm 333 7. lis 22.18 metadata
-rw-rw---- 1 vdsm kvm 16M 7. lis 22.18 outbox
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md:
total 3,0M
-rw-rw-r-- 1 vdsm kvm 1,0M 1. bře 21.28 ids
-rw-rw---- 1 vdsm kvm 16M 25. úno 00.42 inbox
-rw-rw---- 1 vdsm kvm 2,0M 25. úno 00.44 leases
-rw-r--r-- 1 vdsm kvm 997 24. úno 02.46 metadata
-rw-rw---- 1 vdsm kvm 16M 25. úno 00.44 outbox
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P3/ef010d08-aed1-41c4-ba9a-e6d9bdecb4b4/dom_md:
total 2,1M
-rw-r--r-- 1 vdsm kvm 0 24. úno 07.34 ids
-rw-rw---- 1 vdsm kvm 16M 23. úno 22.35 inbox
-rw-rw---- 1 vdsm kvm 2,0M 23. úno 22.38 leases
-rw-r--r-- 1 vdsm kvm 1,1K 24. úno 19.07 metadata
-rw-rw---- 1 vdsm kvm 16M 23. úno 22.27 outbox
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12__P4/300e9ac8-3c2f-4703-9bb1-1df2130c7c97/dom_md:
total 3,0M
-rw-rw-r-- 1 vdsm kvm 1,0M 1. bře 21.28 ids
-rw-rw-r-- 1 vdsm kvm 16M 6. lis 23.50 inbox
-rw-rw-r-- 1 vdsm kvm 2,0M 6. lis 23.51 leases
-rw-rw-r-- 1 vdsm kvm 734 7. lis 02.13 metadata
-rw-rw-r-- 1 vdsm kvm 16M 6. lis 16.55 outbox
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P5/1ca56b45-701e-4c22-9f59-3aebea4d8477/dom_md:
total 1,1M
-rw-rw-r-- 1 vdsm kvm 0 24. úno 07.35 ids
-rw-rw-r-- 1 vdsm kvm 16M 24. úno 01.06 inbox
-rw-rw-r-- 1 vdsm kvm 2,0M 24. úno 02.44 leases
-rw-r--r-- 1 vdsm kvm 998 24. úno 19.07 metadata
-rw-rw-r-- 1 vdsm kvm 16M 7. lis 22.20 outbox
The ids files were created with the "touch" command after I deleted them
following the "sanlock locking hang" gluster crash & reboot.
I expected that they would be repopulated automatically after the gluster
reboot ( the shadow copies in the ".gluster " directory were deleted &
recreated empty as well ).
OK, so it looks like sanlock cannot work with an empty ids file or rewrite it.
Am I right ??
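For context, that matches how sanlock behaves: the lockspace structure is written into the ids file by an explicit init step, not filled in lazily when sanlock opens the file, so a bare touched file stays at 0 bytes. Below is a minimal sketch of the usual recovery steps, with the caveats that this is my reading of the sanlock(8) `direct init` syntax (lockspace as NAME:HOST_ID:PATH:OFFSET, host_id 0 for init), the P4 path/UUID are taken from the sanlock.log quoted below, and the sketch uses a scratch directory and only echoes the sanlock command rather than running it — do this for real only with the domain in maintenance unless the developers confirm it is safe online:

```shell
# Sketch only -- uses a scratch dir; on a real host DOM_MD would be the dom_md
# directory of the affected domain (path/UUID from the sanlock.log below).
DOM_MD=$(mktemp -d)
SD_UUID=7f52b697-c199-4f58-89aa-102d44327124

touch "$DOM_MD/ids"        # recreate the empty file (as was done here)
chmod 0660 "$DOM_MD/ids"   # vdsm expects 0660 vdsm:kvm (chown omitted in this sketch)

# The step that actually fills the file: sanlock writes the lockspace structure.
# Echoed rather than executed in this sketch:
echo sanlock direct init -s "$SD_UUID:0:$DOM_MD/ids:0"
```

After a real `direct init` the ids file should be 1.0M again, like the healthy -BCK and -P2 domains in the listing above.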
The last point, about the "ids" workaround: this is the offline version, i.e.
the VMs have to be moved off the volume so they keep running while it is in
maintenance mode.
But that is not acceptable in the current situation, so the question again:
is it safe to do it online ?? ( YES / NO )
regs.
Pavel
On 1.3.2016 18:38, Nir Soffer wrote:
On Tue, Mar 1, 2016 at 5:07 PM, paf1@email.cz <paf1@email.cz> wrote:
Hello, can anybody explain this error no. 13 ( open file ) in sanlock.log?
This is EACCES
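You can confirm the errno mapping yourself on any host (the `-13` in sanlock.log is the negated errno):

```shell
# Decode errno 13 using Python's errno table; works on any Linux host.
python3 -c 'import errno, os; print(errno.errorcode[13], "-", os.strerror(13))'
# EACCES - Permission denied
```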
Can you share the output of:
ls -lh /rhev/data-center/mnt/<server>:<_path>/<sd_uuid>/dom_md
The size of the "ids" file is zero (0)
This is how we create the ids file when initializing it.
But then we use sanlock to initialize the ids file, and it should be
1MiB after that.
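So after initialization a healthy ids file should look like the one in the -BCK domain above: 1.0M and, by vdsm's default, mode 0660 owned by vdsm:kvm. A quick illustration of what that metadata looks like, done on a scratch file since checking ownership would need the vdsm user:

```shell
# Scratch file sized the way sanlock leaves the ids file after init (1 MiB).
f=$(mktemp)
truncate -s 1M "$f"
chmod 0660 "$f"
stat -c '%s %a' "$f"    # 1048576 660
rm -f "$f"
```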
Were these ids files created by vdsm, or did you create them yourself?
2016-02-28 03:25:46+0100 269626 [1951]: open error -13
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
2016-02-28 03:25:46+0100 269626 [1951]: s187985 open_disk
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
error -13
2016-02-28 03:25:56+0100 269636 [11304]: s187992 lockspace
7f52b697-c199-4f58-89aa-102d44327124:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids:0
If the main problem is the zero file size, can I regenerate this file
securely online, with no VM dependence?
Yes, I think I already referred to the instructions how to do that in
a previous mail.
dist = RHEL - 7 - 2.1511
kernel = 3.10.0 - 327.10.1.el7.x86_64
KVM = 2.3.0 - 29.1.el7
libvirt = libvirt-1.2.17-13.el7_2.3
vdsm = vdsm-4.16.30-0.el7
GlusterFS = glusterfs-3.7.8-1.el7
regs.
Pavel
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users