<div dir="ltr">Please check the attachment.<div><br></div><div><div class="gmail_quote"><div dir="ltr">On Thu, Jul 28, 2016 at 7:46 PM Sahina Bose <<a href="mailto:sabose@redhat.com">sabose@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
<br>
----- Original Message -----
> From: "Siavash Safi" <siavash.safi@gmail.com>
> To: "Sahina Bose" <sabose@redhat.com>
> Cc: "David Gossage" <dgossage@carouselchecks.com>, "users" <users@ovirt.org>
> Sent: Thursday, July 28, 2016 8:35:18 PM
> Subject: Re: [ovirt-users] Cannot find master domain
>
> [root@node1 ~]# ls -ld /rhev/data-center/mnt/glusterSD/
> drwxr-xr-x. 2 vdsm kvm 6 Jul 28 19:28 /rhev/data-center/mnt/glusterSD/
> [root@node1 ~]# getfacl /rhev/data-center/mnt/glusterSD/
> getfacl: Removing leading '/' from absolute path names
> # file: rhev/data-center/mnt/glusterSD/
> # owner: vdsm
> # group: kvm
> user::rwx
> group::r-x
> other::r-x
>

The ACLs look correct to me. Adding Nir/Allon for insights.

Can you attach the gluster mount logs from this host?
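
(The FUSE client log for this mount should be under /var/log/glusterfs/ on the host, named after the mount point -- assuming the default log location, something like /var/log/glusterfs/rhev-data-center-mnt-glusterSD-172.16.0.11:_ovirt.log. For a quick look,

grep -E "\] (E|W) \[" /var/log/glusterfs/rhev-data-center-mnt-glusterSD-172.16.0.11:_ovirt.log

will pull out just the errors and warnings.)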

> And as I mentioned in another message, the directory is empty.
>
> On Thu, Jul 28, 2016 at 7:24 PM Sahina Bose <sabose@redhat.com> wrote:
>
> > Error from vdsm log: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
> >
> > I remember another thread about a similar issue - can you check the ACL settings on the storage path?
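> >
> > For example (setfacl -b just clears any extended ACL entries, and is only needed if something unexpected shows up):
> >
> > getfacl /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt
> > setfacl -b /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt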
> >
> > ----- Original Message -----
> > > From: "Siavash Safi" <siavash.safi@gmail.com>
> > > To: "David Gossage" <dgossage@carouselchecks.com>
> > > Cc: "users" <users@ovirt.org>
> > > Sent: Thursday, July 28, 2016 7:58:29 PM
> > > Subject: Re: [ovirt-users] Cannot find master domain
> > >
> > > On Thu, Jul 28, 2016 at 6:29 PM David Gossage <dgossage@carouselchecks.com> wrote:
> > >
> > > On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi <siavash.safi@gmail.com> wrote:
> > >
> > > Hi,
> > >
> > > Issue: Cannot find master domain
> > > Changes applied before the issue started to happen: replaced 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:/data/brick3/brick3, and did minor package upgrades for vdsm and glusterfs.
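> > > (Assuming the standard gluster CLI was used, the brick swap would have been something along the lines of:
> > >
> > > gluster volume replace-brick ovirt 172.16.0.12:/data/brick1/brick1 172.16.0.12:/data/brick3/brick3 commit force
> > >
> > > followed by letting self-heal catch up.)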
> > >
> > > vdsm log: https://paste.fedoraproject.org/396842/
> > >
> > > Any errors in gluster's brick or server logs? The client gluster logs from ovirt?
> > >
> > > Brick errors:
> > > [2016-07-28 14:03:25.002396] E [MSGID: 113091] [posix.c:178:posix_lookup] 0-ovirt-posix: null gfid for path (null)
> > > [2016-07-28 14:03:25.002430] E [MSGID: 113018] [posix.c:196:posix_lookup] 0-ovirt-posix: lstat on null failed [Invalid argument]
> > > (Both repeated many times)
> > >
> > > Server errors:
> > > None
> > >
> > > Client errors:
> > > None
> > >
> > > yum log: https://paste.fedoraproject.org/396854/
> > >
> > > What version of gluster was running prior to the update to 3.7.13?
> > >
> > > 3.7.11-1 from the gluster.org repository (after the update, ovirt switched to the CentOS repository)
> > >
> > > Did it create the gluster mounts on the server when attempting to start?
> > >
> > > As far as I checked, the master domain is not mounted on any of the nodes.
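> > > (Checked with, for example:
> > >
> > > findmnt -t fuse.glusterfs
> > > grep glusterSD /proc/mounts
> > >
> > > Neither shows the ovirt volume mounted on any of the nodes.)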
> > >
> > > Restarting vdsmd generated the following errors:
> > >
> > > jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode: None
> > > jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option) Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
> > > jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> > > jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,789::__init__::318::IOProcessClient::(_run) Starting IOProcess...
> > > jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,802::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
> > > jsonrpc.Executor/5::ERROR::2016-07-28 18:50:57,813::hsm::2473::Storage.HSM::(connectStorageServer) Could not connect to storageServer
> > > Traceback (most recent call last):
> > >   File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
> > >     conObj.connect()
> > >   File "/usr/share/vdsm/storage/storageServer.py", line 248, in connect
> > >     six.reraise(t, v, tb)
> > >   File "/usr/share/vdsm/storage/storageServer.py", line 241, in connect
> > >     self.getMountObj().getRecord().fs_file)
> > >   File "/usr/share/vdsm/storage/fileSD.py", line 79, in validateDirAccess
> > >     raise se.StorageServerAccessPermissionError(dirPath)
> > > StorageServerAccessPermissionError: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
> > > jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,817::hsm::2497::Storage.HSM::(connectStorageServer) knownSDs: {}
> > > jsonrpc.Executor/5::INFO::2016-07-28 18:50:57,817::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 469, 'id': u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
> > > jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,817::task::1191::Storage.TaskManager.Task::(prepare) Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::finished: {'statuslist': [{'status': 469, 'id': u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
> > > jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,817::task::595::Storage.TaskManager.Task::(_updateState) Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::moving from state preparing -> state finished
> > >
> > > I can manually mount the gluster volume on the same server.
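> > > (The failing check in the traceback is validateDirAccess, i.e. vdsm verifying that the vdsm user can access the mount point. A rough way to approximate that by hand -- /mnt/test is just a scratch mount point and __access_test__ a throwaway file name -- is to mount with the same options vdsm used and probe access as vdsm:
> > >
> > > mkdir -p /mnt/test
> > > mount -t glusterfs -o backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt /mnt/test
> > > sudo -u vdsm ls /mnt/test
> > > sudo -u vdsm touch /mnt/test/__access_test__ && sudo -u vdsm rm /mnt/test/__access_test__
> > >
> > > If those succeed while vdsm's own mount attempt still fails, the problem is more likely in how the mount is brought up than in the volume's permissions.)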
> > >
> > > Setup:
> > > engine running on a separate node
> > > 3 x kvm/glusterd nodes
> > >
> > > Status of volume: ovirt
> > > Gluster process                              TCP Port  RDMA Port  Online  Pid
> > > ------------------------------------------------------------------------------
> > > Brick 172.16.0.11:/data/brick1/brick1        49152     0          Y       17304
> > > Brick 172.16.0.12:/data/brick3/brick3        49155     0          Y       9363
> > > Brick 172.16.0.13:/data/brick1/brick1        49152     0          Y       23684
> > > Brick 172.16.0.11:/data/brick2/brick2        49153     0          Y       17323
> > > Brick 172.16.0.12:/data/brick2/brick2        49153     0          Y       9382
> > > Brick 172.16.0.13:/data/brick2/brick2        49153     0          Y       23703
> > > NFS Server on localhost                      2049      0          Y       30508
> > > Self-heal Daemon on localhost                N/A       N/A        Y       30521
> > > NFS Server on 172.16.0.11                    2049      0          Y       24999
> > > Self-heal Daemon on 172.16.0.11              N/A       N/A        Y       25016
> > > NFS Server on 172.16.0.13                    2049      0          Y       25379
> > > Self-heal Daemon on 172.16.0.13              N/A       N/A        Y       25509
> > >
> > > Task Status of Volume ovirt
> > > ------------------------------------------------------------------------------
> > > Task                 : Rebalance
> > > ID                   : 84d5ab2a-275e-421d-842b-928a9326c19a
> > > Status               : completed
> > >
> > > Thanks,
> > > Siavash
> > >
> > > _______________________________________________
> > > Users mailing list
> > > Users@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users