<div dir="ltr"><div><div><div><div><div><div>I clean installed everything and ran into the same. <br></div>I then ran gdeploy and encountered the same issue when deploying engine. <br></div>Seems that gluster (?) doesn't like 4K sector drives. I am not sure if it has to do with alignment. The weird thing is that gluster volumes are all ok, replicating normally and no split brain is reported. <br><br></div>The solution to the mentioned bug (<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1386443" rel="noreferrer" target="_blank">1386443</a>) was to format with 512 sector size, which for my case is not an option: <br><br>mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine<br>illegal sector size 512; hw sector is 4096<br><br></div>Is there any workaround to address this?<br><br></div>Thanx, <br></div>Alex<br><div><div><div><div><div><div><div><br></div></div></div></div></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi <span dir="ltr"><<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>Hi Maor, <br><br></div>My disk are of 4K block size and from this bug seems that gluster replica needs 512B block size. <br></div><div>Is there a way to make gluster function with 4K drives?<br><br></div><div>Thank you!<br></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk <span dir="ltr"><<a href="mailto:mlipchuk@redhat.com" target="_blank">mlipchuk@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Alex,<br>
Thanks,
Alex

On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi <rightkicktech@gmail.com> wrote:

Hi Maor,

My disks have a 4K block size, and from this bug it seems that a gluster replica needs a 512-byte block size.
Is there a way to make gluster work with 4K drives?

Thank you!

On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk <mlipchuk@redhat.com> wrote:

Hi Alex,

I saw a bug that might be related to the issue you encountered at
https://bugzilla.redhat.com/show_bug.cgi?id=1386443

Sahina, do you have any advice? Do you think BZ 1386443 is related?

Regards,
Maor
<div><div class="m_-5617071787451257698h5"><br>
On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi <<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>> wrote:<br>
> Hi All,
>
> I have successfully installed oVirt (version 4.1) with 3 nodes on top of glusterfs several times.
>
> This time, when trying to configure the same setup, I am facing the following issue, which doesn't seem to go away. During installation I get the error:
>
> Failed to execute stage 'Misc configuration': Cannot acquire host id: (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
>
> The only difference in this setup is that instead of standard partitioning I have GPT partitioning, and the disks have a 4K block size instead of 512 bytes.
>
> The /var/log/sanlock.log has the following lines:
>
> 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/ids:0
> 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576 for 2,9,23040
> 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids:0
> 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD 0x7f59b00008c0:0x7f59b00008d0:0x7f59b0101000 result -22:0 match res
> 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader offset 127488 rv -22 /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids
> 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450 88c2244c-a782-40ed-9560-6cfa4d46f853.v0.neptune
> 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result -22
>
> And /var/log/vdsm/vdsm.log says:
>
> 2017-06-03 19:19:38,176+0200 WARN (jsonrpc/3) [storage.StorageServer.MountConnection] Using user specified backup-volfile-servers option (storageServer:253)
> 2017-06-03 19:21:12,379+0200 WARN (periodic/1) [throttled] MOM not available. (throttledlog:105)
> 2017-06-03 19:21:12,380+0200 WARN (periodic/1) [throttled] MOM not available, KSM stats will be missing. (throttledlog:105)
> 2017-06-03 19:21:14,714+0200 WARN (jsonrpc/1) [storage.StorageServer.MountConnection] Using user specified backup-volfile-servers option (storageServer:253)
> 2017-06-03 19:21:15,515+0200 ERROR (jsonrpc/4) [storage.initSANLock] Cannot initialize SANLock for domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922 (clusterlock:238)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 234, in initSANLock
>     sanlock.init_lockspace(sdUUID, idsPath)
> SanlockException: (107, 'Sanlock lockspace init failure', 'Transport endpoint is not connected')
> 2017-06-03 19:21:15,515+0200 WARN (jsonrpc/4) [storage.StorageDomainManifest] lease did not initialize successfully (sd:557)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/sd.py", line 552, in initDomainLock
>     self._domainLock.initLock(self.getDomainLease())
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 271, in initLock
>     initSANLock(self._sdUUID, self._idsPath, lease)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 239, in initSANLock
>     raise se.ClusterLockInitError()
> ClusterLockInitError: Could not initialize cluster lock: ()
> 2017-06-03 19:21:37,867+0200 ERROR (jsonrpc/2) [storage.StoragePool] Create pool hosted_datacenter canceled (sp:655)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/sp.py", line 652, in create
>     self.attachSD(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>     return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 971, in attachSD
>     dom.acquireHostId(self.id)
>   File "/usr/share/vdsm/storage/sd.py", line 790, in acquireHostId
>     self._manifest.acquireHostId(hostId, async)
>   File "/usr/share/vdsm/storage/sd.py", line 449, in acquireHostId
>     self._domainLock.acquireHostId(hostId, async)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 297, in acquireHostId
>     raise se.AcquireHostIdFailure(self._sdUUID, e)
> AcquireHostIdFailure: Cannot acquire host id: (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
> 2017-06-03 19:21:37,870+0200 ERROR (jsonrpc/2) [storage.StoragePool] Domain ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047 detach from MSD ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047 Ver 1 failed. (sp:528)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/sp.py", line 525, in __cleanupDomains
>     self.detachSD(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>     return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 1046, in detachSD
>     raise se.CannotDetachMasterStorageDomain(sdUUID)
> CannotDetachMasterStorageDomain: Illegal action: (u'ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047',)
> 2017-06-03 19:21:37,872+0200 ERROR (jsonrpc/2) [storage.StoragePool] Domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922 detach from MSD ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047 Ver 1 failed. (sp:528)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/sp.py", line 525, in __cleanupDomains
>     self.detachSD(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>     return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 1043, in detachSD
>     self.validateAttachedDomain(dom)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>     return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 542, in validateAttachedDomain
>     self.validatePoolSD(dom.sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>     return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 535, in validatePoolSD
>     raise se.StorageDomainNotMemberOfPool(self.spUUID, sdUUID)
> StorageDomainNotMemberOfPool: Domain is not member in pool: u'pool=a1e7e9dd-0cf4-41ae-ba13-36297ed66309, domain=a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922'
> 2017-06-03 19:21:40,063+0200 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='a2476a33-26f8-4ebd-876d-02fe5d13ef78') Unexpected error (task:870)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 877, in _run
>     return fn(*args, **kargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 959, in createStoragePool
>     leaseParams)
>   File "/usr/share/vdsm/storage/sp.py", line 652, in create
>     self.attachSD(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>     return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 971, in attachSD
>     dom.acquireHostId(self.id)
>   File "/usr/share/vdsm/storage/sd.py", line 790, in acquireHostId
>     self._manifest.acquireHostId(hostId, async)
>   File "/usr/share/vdsm/storage/sd.py", line 449, in acquireHostId
>     self._domainLock.acquireHostId(hostId, async)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 297, in acquireHostId
>     raise se.AcquireHostIdFailure(self._sdUUID, e)
> AcquireHostIdFailure: Cannot acquire host id: (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
> 2017-06-03 19:21:40,067+0200 ERROR (jsonrpc/2) [storage.Dispatcher] {'status': {'message': "Cannot acquire host id: (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))", 'code': 661}} (dispatcher:77)
>
> The gluster volume prepared for the engine storage is online and no split brain is reported. I don't understand what needs to be done to overcome this. Any ideas would be appreciated.
>
> Thank you,
> Alex
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
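The "volume online, no split brain reported" state mentioned above can be confirmed from any gluster node; a minimal sketch, assuming the volume backing the engine storage domain is named engine (as the mount path 10.100.100.1:_engine suggests):

# brick and self-heal daemon status for the volume
gluster volume status engine
# entries pending heal (expected: "Number of entries: 0" on every brick)
gluster volume heal engine info
# entries actually in split brain
gluster volume heal engine info split-brain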