<div dir="ltr"><div><div>Also I see this article from Red Hat that mentions whether 4K sectors are supported, but I am not able to read it as I don't have a subscription: <br><br><a href="https://access.redhat.com/solutions/56494">https://access.redhat.com/solutions/56494</a><br><br></div>It's hard to believe that 4K drives have not been used by others in oVirt deployments.<br><br></div>Alex<br><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jun 6, 2017 at 3:18 PM, Abi Askushi <span dir="ltr"><<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div>Hi Krutika, <br><br></div>My comments are inline. <br><br></div>I have also attached the strace of: <br><span style="font-family:monospace,monospace">strace -y -ff -o /root/512-trace-on-root.log dd if=/dev/zero of=/mnt/test2.img oflag=direct bs=512 count=1</span><br></div><div><br>and of: <br><span style="font-family:monospace,monospace">strace -y -ff -o /root/4096-trace-on-root.log dd if=/dev/zero of=/mnt/test2.img oflag=direct bs=4096 count=16</span></div><br>I have mounted the gluster volume at /mnt. <br></div>The dd with bs=4096 is successful. 
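The pattern in the two strace runs can be predicted without tracing at all; here is a minimal shell sketch of the O_DIRECT alignment rule, assuming a 4096-byte logical sector size (the value `blockdev --getss` reports for these disks later in the thread):

```shell
# Predict whether an O_DIRECT dd will succeed on a 4K-logical-sector
# device. SECTOR is assumed; check your device with `blockdev --getss`.
SECTOR=4096

check_bs() {
  bs=$1
  if [ $((bs % SECTOR)) -eq 0 ]; then
    echo "bs=$bs: aligned, O_DIRECT write should succeed"
  else
    echo "bs=$bs: misaligned, expect EINVAL (Invalid argument)"
  fi
}

check_bs 512    # misaligned on a 4K-native disk
check_bs 4096   # aligned
```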
<br></div><div><div><br><div>The gluster mount log gives only the following: <br><span style="font-family:monospace,monospace">[2017-06-06 12:04:54.102576] W [MSGID: 114031] [client-rpc-fops.c:854:<wbr>client3_3_writev_cbk] 0-engine-client-0: remote operation failed [Invalid argument]<br>[2017-06-06 12:04:54.102591] W [MSGID: 114031] [client-rpc-fops.c:854:<wbr>client3_3_writev_cbk] 0-engine-client-1: remote operation failed [Invalid argument]<br>[2017-06-06 12:04:54.103355] W [fuse-bridge.c:2312:fuse_<wbr>writev_cbk] 0-glusterfs-fuse: 205: WRITE => -1 gfid=075ab3a5-0274-4f07-a075-<wbr>2748c3b4d394 fd=0x7faf1d08706c (Transport endpoint is not connected)<br></span><br></div><div>The gluster brick log gives: <br><span style="font-family:monospace,monospace">[2017-06-06 12:07:03.793080] E [MSGID: 113072] [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0, [Invalid argument]<br>[2017-06-06 12:07:03.793172] E [MSGID: 115067] [server-rpc-fops.c:1346:<wbr>server_writev_cbk] 0-engine-server: 291: WRITEV 0 (075ab3a5-0274-4f07-a075-<wbr>2748c3b4d394) ==> (Invalid argument) [Invalid argument]</span><br><br><br></div><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Tue, Jun 6, 2017 at 12:50 PM, Krutika Dhananjay <span dir="ltr"><<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>OK.<br><br></div><div>So for the 'Transport endpoint is not connected' issue, could you share the mount and brick logs? <br><br></div><div>Hmmm.. 'Invalid argument' error even on the root partition. What if you change bs to 4096 and run?<br></div></div></blockquote></span><div><span style="color:rgb(11,83,148)">If I use bs=4096 the dd is successful on /root and on the gluster-mounted volume. 
</span><br></div><span class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>The logs I showed in my earlier mail show that gluster is merely returning the error it got from the disk file system where the<br></div><div>brick is hosted. But you're right about the fact that the offset 127488 is not 4K-aligned.<br></div><div><br></div><div>If the dd on /root worked for you with bs=4096, could you try the same directly on the gluster mount point on a dummy file and capture the strace output of dd?<br></div><div>You can perhaps reuse your existing gluster volume by mounting it at another location and doing the dd.<br></div><div>Here's what you need to execute:<br></div><div><pre class="m_-3779732689882285843gmail-m_8844496162100933094gmail-bz_comment_text m_-3779732689882285843gmail-m_8844496162100933094gmail-bz_wrap_comment_text" id="m_-3779732689882285843gmail-m_8844496162100933094gmail-comment_text_19">strace -ff -T -p <pid-of-mount-process> -o <path-to-the-file-where-you-wa<wbr>nt-the-output-saved><br></pre></div><div>FWIW, here's something I found in man(2) open:<br><i><br>Under Linux 2.4, transfer sizes, and the alignment of the user buffer and the file offset must all be multiples of the logical block size of the filesystem. Since Linux 2.6.0, alignment to the logical block size of the<br> underlying storage (typically 512 bytes) suffices. The logical block size can be determined using the ioctl(2) BLKSSZGET operation or from the shell using the command:<br><br> blockdev --getss</i><span class="m_-3779732689882285843gmail-HOEnZb"><font color="#888888"><br></font></span></div></div></blockquote></span><div><span style="color:rgb(11,83,148)">Please note also that the physical disks are of 4K sector size (native).
Thus the OS reports a 4096/4096 logical/physical sector size. <br>[root@v0 ~]# blockdev --getss /dev/sda<br>4096<br>[root@v0 ~]# blockdev --getpbsz /dev/sda<br>4096</span> <br></div><div><div class="h5"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><span class="m_-3779732689882285843gmail-HOEnZb"><font color="#888888"><br></font></span></div><span class="m_-3779732689882285843gmail-HOEnZb"><font color="#888888"><div><br></div><div>-Krutika<br><br></div></font></span></div><div class="m_-3779732689882285843gmail-HOEnZb"><div class="m_-3779732689882285843gmail-h5"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jun 6, 2017 at 1:18 AM, Abi Askushi <span dir="ltr"><<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div>Also, when testing with dd I get the following: <br><br><b>Testing on the gluster mount: </b><br><span style="font-family:monospace,monospace">dd if=/dev/zero of=/rhev/data-center/mnt/glust<wbr>erSD/10.100.100.1:_engine/test<wbr>2.img oflag=direct bs=512 count=1<br>dd: error writing ‘/rhev/data-center/mnt/gluster<wbr>SD/10.100.100.1:_engine/test2.<wbr>img’: <b>Transport endpoint is not connected</b><br>1+0 records in<br>0+0 records out<br>0 bytes (0 B) copied, 0.00336755 s, 0.0 kB/s<br></span><br></div><b>Testing on the /root directory (XFS): </b><br><span style="font-family:monospace,monospace">dd if=/dev/zero of=/test2.img oflag=direct bs=512 count=1<br>dd: error writing ‘/test2.img’:<b> Invalid argument</b><br>1+0 records in<br>0+0 records out<br>0 bytes (0 B) copied, 0.000321239 s, 0.0 kB/s</span><br><br></div>It seems that gluster is trying to do the same and failing. 
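A quick arithmetic sketch shows why the same I/O pattern is accepted on 512-byte-sector disks but rejected here: the failing sanlock read quoted elsewhere in this thread used offset 127488 and size 512, both 512-aligned but neither 4096-aligned.

```shell
# Offset and size from the failing sanlock read in this thread.
# O_DIRECT requires alignment to the logical sector size of the device.
OFFSET=127488
SIZE=512
echo "offset % 512  = $((OFFSET % 512))"    # 0   -> accepted on 512-sector disks
echo "offset % 4096 = $((OFFSET % 4096))"   # 512 -> EINVAL on 4K-native disks
echo "size   % 4096 = $((SIZE % 4096))"     # 512 -> also misaligned
```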
<br><br><br></div><div class="m_-3779732689882285843gmail-m_8844496162100933094HOEnZb"><div class="m_-3779732689882285843gmail-m_8844496162100933094h5"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jun 5, 2017 at 10:10 PM, Abi Askushi <span dir="ltr"><<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>The question that rises is what is needed to make gluster aware of the 4K physical sectors presented to it (the logical sector is also 4K). The offset (127488) at the log does not seem aligned at 4K. <br><br></div>Alex<br></div><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612HOEnZb"><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612h5"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jun 5, 2017 at 2:47 PM, Abi Askushi <span dir="ltr"><<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">Hi Krutika,<div dir="auto"><br></div><div dir="auto">I am saying that I am facing this issue with 4k drives. 
I never encountered this issue with 512 drives.</div><div dir="auto"><br></div><div dir="auto">Alex </div></div><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612m_-8130987819934704788HOEnZb"><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612m_-8130987819934704788h5"><div class="gmail_extra"><br><div class="gmail_quote">On Jun 5, 2017 14:26, "Krutika Dhananjay" <<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><div>This seems like a case of O_DIRECT reads and writes gone wrong, judging by the 'Invalid argument' errors.<br><br></div>The two operations that have failed on gluster bricks are:<br><br>[2017-06-05 09:40:39.428979] E [MSGID: 113072] [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0, [Invalid argument]<br>[2017-06-05 09:41:00.865760] E [MSGID: 113040] [posix.c:3178:posix_readv] 0-engine-posix: read failed on gfid=8c94f658-ac3c-4e3a-b368-8<wbr>c038513a914, fd=0x7f408584c06c, offset=127488 size=512, buf=0x7f4083c0b000 [Invalid argument]<br><br></div>But then, both the write and the read have 512byte-aligned offset, size and buf address (which is correct).<br><br></div>Are you saying you don't see this issue with 4K block-size?<br><br></div>-Krutika<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jun 5, 2017 at 3:21 PM, Abi Askushi <span dir="ltr"><<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div>Hi Sahina, <br><br></div>Attached are the logs. 
Let me know if sth else is needed.<br></div><span style="font-family:monospace,monospace"></span><span style="font-family:monospace,monospace"></span><br></div><div>I have 5 disks (with 4K physical sector) in RAID5. The RAID has 64K stripe size at the moment. <br></div><div>I have prepared the storage as below: <br><br>pvcreate --dataalignment 256K /dev/sda4<br>vgcreate --physicalextentsize 256K gluster /dev/sda4 <br><br>lvcreate -n engine --size 120G gluster<br>mkfs.xfs -f -i size=512 /dev/gluster/engine<br><br></div><div>Thanx,<br></div><div>Alex<br></div></div><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612m_-8130987819934704788m_985830033054947336m_5823298957084478634HOEnZb"><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612m_-8130987819934704788m_985830033054947336m_5823298957084478634h5"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jun 5, 2017 at 12:14 PM, Sahina Bose <span dir="ltr"><<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Can we have the gluster mount logs and brick logs to check if it's the same issue?<br></div><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612m_-8130987819934704788m_985830033054947336m_5823298957084478634m_-1789763873530452509HOEnZb"><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612m_-8130987819934704788m_985830033054947336m_5823298957084478634m_-1789763873530452509h5"><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Jun 4, 2017 at 11:21 PM, Abi Askushi <span dir="ltr"><<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid 
rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div>I clean installed everything and ran into the same issue. <br></div>I then ran gdeploy and encountered the same issue when deploying the engine. <br></div>It seems that gluster (?) doesn't like 4K sector drives. I am not sure if it has to do with alignment. The weird thing is that the gluster volumes are all ok, replicating normally, and no split brain is reported. <br><br></div>The solution to the mentioned bug (<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1386443" rel="noreferrer" target="_blank">1386443</a>) was to format with a 512 sector size, which in my case is not an option: <br><br>mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine<br>illegal sector size 512; hw sector is 4096<br><br></div>Is there any workaround to address this?<br><br></div>Thanx, <br></div>Alex<br><div><div><div><div><div><div><div><br></div></div></div></div></div></div></div></div><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612m_-8130987819934704788m_985830033054947336m_5823298957084478634m_-1789763873530452509m_7504149271760730892HOEnZb"><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612m_-8130987819934704788m_985830033054947336m_5823298957084478634m_-1789763873530452509m_7504149271760730892h5"><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi <span dir="ltr"><<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div>Hi Maor, <br><br></div>My disks are of 4K block size, and from this bug it seems that gluster replica needs a 512B block size. 
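The mkfs.xfs rejection is expected behavior: the sector size passed with -s cannot be smaller than the device's logical sector size. A small sketch (device path is the one from this thread; check yours with blockdev first) reproduces the same check:

```shell
# mkfs.xfs refuses a sector size below the device's logical sector size;
# on a 4K-native disk that is 4096, so `-s size=512` fails. Check with:
#   blockdev --getss /dev/gluster/engine    # device path from this thread
HW_SECTOR=4096   # assumed 4K-native logical sector size
WANTED=512
if [ "$WANTED" -lt "$HW_SECTOR" ]; then
  echo "illegal sector size $WANTED; hw sector is $HW_SECTOR"
fi
```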
<br></div><div>Is there a way to make gluster function with 4K drives?<br><br></div><div>Thank you!<br></div></div><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612m_-8130987819934704788m_985830033054947336m_5823298957084478634m_-1789763873530452509m_7504149271760730892m_-790959479825030166HOEnZb"><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612m_-8130987819934704788m_985830033054947336m_5823298957084478634m_-1789763873530452509m_7504149271760730892m_-790959479825030166h5"><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk <span dir="ltr"><<a href="mailto:mlipchuk@redhat.com" target="_blank">mlipchuk@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Alex,<br>
<br>
I saw a bug that might be related to the issue you encountered at<br>
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1386443" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/sh<wbr>ow_bug.cgi?id=1386443</a><br>
<br>
Sahina, do you have any advice? Do you think that BZ 1386443 is related?<br>
<br>
Regards,<br>
Maor<br>
<div><div class="m_-3779732689882285843gmail-m_8844496162100933094m_-8403635108812105612m_-8130987819934704788m_985830033054947336m_5823298957084478634m_-1789763873530452509m_7504149271760730892m_-790959479825030166m_-5617071787451257698h5"><br>
On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi <<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>> wrote:<br>
> Hi All,<br>
><br>
> I have installed successfully several times oVirt (version 4.1) with 3 nodes<br>
> on top glusterfs.<br>
><br>
> This time, when trying to configure the same setup, I am facing the<br>
> following issue which doesn't seem to go away. During installation i get the<br>
> error:<br>
><br>
> Failed to execute stage 'Misc configuration': Cannot acquire host id:<br>
> (u'a5a6b0e7-fc3f-4838-8e26-c8b<wbr>4d5e5e922', SanlockException(22, 'Sanlock<br>
> lockspace add failure', 'Invalid argument'))<br>
><br>
> The only different in this setup is that instead of standard partitioning i<br>
> have GPT partitioning and the disks have 4K block size instead of 512.<br>
><br>
> The /var/log/sanlock.log has the following lines:<br>
><br>
> 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace<br>
> ba6bd862-c2b8-46e7-b2c8-91e4a5<wbr>bb2047:250:/rhev/data-center/m<wbr>nt/_var_lib_ovirt-hosted-engin<wbr>-setup_tmptjkIDI/ba6bd862-c2b8<wbr>-46e7-b2c8-91e4a5bb2047/dom_md<wbr>/ids:0<br>
> 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource<br>
> ba6bd862-c2b8-46e7-b2c8-91e4a5<wbr>bb2047:SDM:/rhev/data-center/m<wbr>nt/_var_lib_ovirt-hosted-engin<wbr>e-setup_tmptjkIDI/ba6bd862-c2b<wbr>8-46e7-b2c8-91e4a5bb2047/dom_m<wbr>d/leases:1048576<br>
> for 2,9,23040<br>
> 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace<br>
> a5a6b0e7-fc3f-4838-8e26-c8b4d5<wbr>e5e922:250:/rhev/data-center/m<wbr>nt/glusterSD/10.100.100.1:_eng<wbr>ine/a5a6b0e7-fc3f-4838-8e26-c8<wbr>b4d5e5e922/dom_md/ids:0<br>
> 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD<br>
> 0x7f59b00008c0:0x7f59b00008d0:<wbr>0x7f59b0101000 result -22:0 match res<br>
> 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader offset<br>
> 127488 rv -22<br>
> /rhev/data-center/mnt/glusterS<wbr>D/10.100.100.1:_engine/a5a6b0e<wbr>7-fc3f-4838-8e26-c8b4d5e5e922/<wbr>dom_md/ids<br>
> 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450<br>
> 88c2244c-a782-40ed-9560-6cfa4d<wbr>46f853.v0.neptune<br>
> 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result -22<br>
><br>
> And /var/log/vdsm/vdsm.log says:<br>
><br>
> 2017-06-03 19:19:38,176+0200 WARN (jsonrpc/3)<br>
> [storage.StorageServer.MountCo<wbr>nnection] Using user specified<br>
> backup-volfile-servers option (storageServer:253)<br>
> 2017-06-03 19:21:12,379+0200 WARN (periodic/1) [throttled] MOM not<br>
> available. (throttledlog:105)<br>
> 2017-06-03 19:21:12,380+0200 WARN (periodic/1) [throttled] MOM not<br>
> available, KSM stats will be missing. (throttledlog:105)<br>
> 2017-06-03 19:21:14,714+0200 WARN (jsonrpc/1)<br>
> [storage.StorageServer.MountCo<wbr>nnection] Using user specified<br>
> backup-volfile-servers option (storageServer:253)<br>
> 2017-06-03 19:21:15,515+0200 ERROR (jsonrpc/4) [storage.initSANLock] Cannot<br>
> initialize SANLock for domain a5a6b0e7-fc3f-4838-8e26-c8b4d5<wbr>e5e922<br>
> (clusterlock:238)<br>
> Traceback (most recent call last):<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/storage/clusterlock.p<wbr>y", line<br>
> 234, in initSANLock<br>
> sanlock.init_lockspace(sdUUID<wbr>, idsPath)<br>
> SanlockException: (107, 'Sanlock lockspace init failure', 'Transport<br>
> endpoint is not connected')<br>
> 2017-06-03 19:21:15,515+0200 WARN (jsonrpc/4)<br>
> [storage.StorageDomainManifest<wbr>] lease did not initialize successfully<br>
> (sd:557)<br>
> Traceback (most recent call last):<br>
> File "/usr/share/vdsm/storage/sd.py<wbr>", line 552, in initDomainLock<br>
> self._domainLock.initLock(sel<wbr>f.getDomainLease())<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/storage/clusterlock.p<wbr>y", line<br>
> 271, in initLock<br>
> initSANLock(self._sdUUID, self._idsPath, lease)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/storage/clusterlock.p<wbr>y", line<br>
> 239, in initSANLock<br>
> raise se.ClusterLockInitError()<br>
> ClusterLockInitError: Could not initialize cluster lock: ()<br>
> 2017-06-03 19:21:37,867+0200 ERROR (jsonrpc/2) [storage.StoragePool] Create<br>
> pool hosted_datacenter canceled (sp:655)<br>
> Traceback (most recent call last):<br>
> File "/usr/share/vdsm/storage/sp.py<wbr>", line 652, in create<br>
> self.attachSD(sdUUID)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/storage/securable.py"<wbr>, line<br>
> 79, in wrapper<br>
> return method(self, *args, **kwargs)<br>
> File "/usr/share/vdsm/storage/sp.py<wbr>", line 971, in attachSD<br>
> dom.acquireHostId(self.id)<br>
> File "/usr/share/vdsm/storage/sd.py<wbr>", line 790, in acquireHostId<br>
> self._manifest.acquireHostId(<wbr>hostId, async)<br>
> File "/usr/share/vdsm/storage/sd.py<wbr>", line 449, in acquireHostId<br>
> self._domainLock.acquireHostI<wbr>d(hostId, async)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/storage/clusterlock.p<wbr>y", line<br>
> 297, in acquireHostId<br>
> raise se.AcquireHostIdFailure(self._<wbr>sdUUID, e)<br>
> AcquireHostIdFailure: Cannot acquire host id:<br>
> (u'a5a6b0e7-fc3f-4838-8e26-c8b<wbr>4d5e5e922', SanlockException(22, 'Sanlock<br>
> lockspace add failure', 'Invalid argument'))<br>
> 2017-06-03 19:21:37,870+0200 ERROR (jsonrpc/2) [storage.StoragePool] Domain<br>
> ba6bd862-c2b8-46e7-b2c8-91e4a5<wbr>bb2047 detach from MSD<br>
> ba6bd862-c2b8-46e7-b2c8-91e4a5<wbr>bb2047 Ver 1 failed. (sp:528)<br>
> Traceback (most recent call last):<br>
> File "/usr/share/vdsm/storage/sp.py<wbr>", line 525, in __cleanupDomains<br>
> self.detachSD(sdUUID)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/storage/securable.py"<wbr>, line<br>
> 79, in wrapper<br>
> return method(self, *args, **kwargs)<br>
> File "/usr/share/vdsm/storage/sp.py<wbr>", line 1046, in detachSD<br>
> raise se.CannotDetachMasterStorageDo<wbr>main(sdUUID)<br>
> CannotDetachMasterStorageDomai<wbr>n: Illegal action:<br>
> (u'ba6bd862-c2b8-46e7-b2c8-91e<wbr>4a5bb2047',)<br>
> 2017-06-03 19:21:37,872+0200 ERROR (jsonrpc/2) [storage.StoragePool] Domain<br>
> a5a6b0e7-fc3f-4838-8e26-c8b4d5<wbr>e5e922 detach from MSD<br>
> ba6bd862-c2b8-46e7-b2c8-91e4a5<wbr>bb2047 Ver 1 failed. (sp:528)<br>
> Traceback (most recent call last):<br>
> File "/usr/share/vdsm/storage/sp.py<wbr>", line 525, in __cleanupDomains<br>
> self.detachSD(sdUUID)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/storage/securable.py"<wbr>, line<br>
> 79, in wrapper<br>
> return method(self, *args, **kwargs)<br>
> File "/usr/share/vdsm/storage/sp.py<wbr>", line 1043, in detachSD<br>
> self.validateAttachedDomain(d<wbr>om)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/storage/securable.py"<wbr>, line<br>
> 79, in wrapper<br>
> return method(self, *args, **kwargs)<br>
> File "/usr/share/vdsm/storage/sp.py<wbr>", line 542, in validateAttachedDomain<br>
> self.validatePoolSD(dom.sdUUI<wbr>D)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/storage/securable.py"<wbr>, line<br>
> 79, in wrapper<br>
> return method(self, *args, **kwargs)<br>
> File "/usr/share/vdsm/storage/sp.py<wbr>", line 535, in validatePoolSD<br>
> raise se.StorageDomainNotMemberOfPoo<wbr>l(self.spUUID, sdUUID)<br>
> StorageDomainNotMemberOfPool: Domain is not member in pool:<br>
> u'pool=a1e7e9dd-0cf4-41ae-ba13<wbr>-36297ed66309,<br>
> domain=a5a6b0e7-fc3f-4838-8e26<wbr>-c8b4d5e5e922'<br>
> 2017-06-03 19:21:40,063+0200 ERROR (jsonrpc/2) [storage.TaskManager.Task]<br>
> (Task='a2476a33-26f8-4ebd-876d<wbr>-02fe5d13ef78') Unexpected error (task:870)<br>
> Traceback (most recent call last):<br>
> File "/usr/share/vdsm/storage/task.<wbr>py", line 877, in _run<br>
> return fn(*args, **kargs)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/logUtils.py", line 52, in<br>
> wrapper<br>
> res = f(*args, **kwargs)<br>
> File "/usr/share/vdsm/storage/hsm.p<wbr>y", line 959, in createStoragePool<br>
> leaseParams)<br>
> File "/usr/share/vdsm/storage/sp.py<wbr>", line 652, in create<br>
> self.attachSD(sdUUID)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/storage/securable.py"<wbr>, line<br>
> 79, in wrapper<br>
> return method(self, *args, **kwargs)<br>
> File "/usr/share/vdsm/storage/sp.py<wbr>", line 971, in attachSD<br>
> dom.acquireHostId(self.id)<br>
> File "/usr/share/vdsm/storage/sd.py<wbr>", line 790, in acquireHostId<br>
> self._manifest.acquireHostId(<wbr>hostId, async)<br>
> File "/usr/share/vdsm/storage/sd.py<wbr>", line 449, in acquireHostId<br>
> self._domainLock.acquireHostI<wbr>d(hostId, async)<br>
> File "/usr/lib/python2.7/site-packa<wbr>ges/vdsm/storage/clusterlock.p<wbr>y", line<br>
> 297, in acquireHostId<br>
> raise se.AcquireHostIdFailure(self._<wbr>sdUUID, e)<br>
> AcquireHostIdFailure: Cannot acquire host id:<br>
> (u'a5a6b0e7-fc3f-4838-8e26-c8b<wbr>4d5e5e922', SanlockException(22, 'Sanlock<br>
> lockspace add failure', 'Invalid argument'))<br>
> 2017-06-03 19:21:40,067+0200 ERROR (jsonrpc/2) [storage.Dispatcher]<br>
> {'status': {'message': "Cannot acquire host id:<br>
> (u'a5a6b0e7-fc3f-4838-8e26-c8b<wbr>4d5e5e922', SanlockException(22, 'Sanlock<br>
> lockspace add failure', 'Invalid argument'))", 'code': 661}} (dispatcher:77)<br>
><br>
> The gluster volume prepared for engine storage is online and no split brain<br>
> is reported. I don't understand what needs to be done to overcome this. Any<br>
> idea will be appreciated.<br>
><br>
> Thank you,<br>
> Alex<br>
><br>
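The misaligned sanlock reads above can also be spotted mechanically. A small sketch (the log path and line format are taken from the excerpt above) that flags any "offset N" entry not aligned to 4096:

```shell
# Scan a sanlock log for "offset N" pairs and report entries that are
# not 4096-aligned (format follows the read_sectors lines above).
scan_sanlock() {
  awk '{ for (i = 1; i < NF; i++)
           if ($i == "offset" && $(i+1) % 4096 != 0)
             print "misaligned offset: " $(i+1) }' "$1"
}

# Example usage (path as in this thread):
#   scan_sanlock /var/log/sanlock.log
```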
</div></div>> ______________________________<wbr>_________________<br>
> Users mailing list<br>
> <a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
> <a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
><br>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div><br>
<br></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div><br>
<br></blockquote></div><br></div>
</blockquote></div></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div></div></div><br></div></div></div></div>
</blockquote></div><br></div>