On Thu, Jul 28, 2016 at 9:38 AM, Siavash Safi <siavash.safi@gmail.com> wrote:
> file system: xfs
> features.shard: off

Ok, I was just checking whether this matched the issue the latest 3.7.x releases have with ZFS and sharding, but that doesn't look like your problem.

In your logs I see it mounts with this command. What happens if you run the same mount by hand against a test directory?

/usr/bin/mount -t glusterfs -o backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt

It then unmounts it and a short while later complains about permissions:

StorageServerAccessPermissionError: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'

Are the permissions of the directories down to /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt as expected?
How about on the bricks, anything out of place?

Is gluster still using the same options as before? Could it have reset the user and group to something other than 36?
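For example, something like this; the exact paths to check come from the error above, and the storage.owner settings only matter if you had them set on the volume before the brick replacement:

# Permissions along the path vdsm complains about:
ls -ld /rhev /rhev/data-center /rhev/data-center/mnt/glusterSD

# Ownership options on the volume; no output means they are not explicitly set:
gluster volume info ovirt | grep storage.owner

# If the brick replacement reset them, this is one way to put them back
# (assuming 36:36, i.e. vdsm:kvm, is what you had before):
# gluster volume set ovirt storage.owner-uid 36
# gluster volume set ovirt storage.owner-gid 36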
> On Thu, Jul 28, 2016 at 7:03 PM David Gossage <dgossage@carouselchecks.com> wrote:
>> On Thu, Jul 28, 2016 at 9:28 AM, Siavash Safi <siavash.safi@gmail.com> wrote:
>>> On Thu, Jul 28, 2016 at 6:29 PM David Gossage <dgossage@carouselchecks.com> wrote:
>>>> On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi <siavash.safi@gmail.com> wrote:
>>>>> Hi,
>>>>>
>>>>> Issue: Cannot find master domain
>>>>> Changes applied before the issue started to happen: replaced
>>>>> 172.16.0.12:/data/brick1/brick1 with 172.16.0.12:/data/brick3/brick3,
>>>>> did minor package upgrades for vdsm and glusterfs
>>>>>
>>>>> vdsm log: https://paste.fedoraproject.org/396842/
>>>>
>>>> Any errors in gluster's brick or server logs? The client gluster logs from ovirt?
>>>
>>> Brick errors:
>>> [2016-07-28 14:03:25.002396] E [MSGID: 113091] [posix.c:178:posix_lookup] 0-ovirt-posix: null gfid for path (null)
>>> [2016-07-28 14:03:25.002430] E [MSGID: 113018] [posix.c:196:posix_lookup] 0-ovirt-posix: lstat on null failed [Invalid argument]
>>> (Both repeated many times)
>>>
>>> Server errors: none
>>> Client errors: none
>>>
>>>>> yum log: https://paste.fedoraproject.org/396854/
>>>>
>>>> What version of gluster was running prior to the update to 3.7.13?
>>>
>>> 3.7.11-1 from the gluster.org repository (after the update ovirt switched to the centos repository)
>>
>> What file system do your bricks reside on, and do you have sharding enabled?
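>>
>> Something like this should show both; the brick paths here are just taken
>> from your volume status output, adjust as needed:
>>
>> df -T /data/brick1/brick1 /data/brick2/brick2   # brick file system
>> gluster volume info ovirt | grep -i shard       # no output means sharding was never enabled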
>>
>>>> Did it create the gluster mounts on the server when attempting to start?
>>>
>>> As I checked, the master domain is not mounted on any node.
>>> Restarting vdsmd generated the following errors:
>>>
>>> jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,661::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt mode: None
>>> jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,661::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option) Using bricks: ['172.16.0.11', '172.16.0.12', '172.16.0.13']
>>> jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,662::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=172.16.0.12:172.16.0.13 172.16.0.11:/ovirt /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
>>> jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,789::__init__::318::IOProcessClient::(_run) Starting IOProcess...
>>> jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,802::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-31 /usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt (cwd None)
>>> jsonrpc.Executor/5::ERROR::2016-07-28 18:50:57,813::hsm::2473::Storage.HSM::(connectStorageServer) Could not connect to storageServer
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
>>>     conObj.connect()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 248, in connect
>>>     six.reraise(t, v, tb)
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 241, in connect
>>>     self.getMountObj().getRecord().fs_file)
>>>   File "/usr/share/vdsm/storage/fileSD.py", line 79, in validateDirAccess
>>>     raise se.StorageServerAccessPermissionError(dirPath)
>>> StorageServerAccessPermissionError: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
>>> jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,817::hsm::2497::Storage.HSM::(connectStorageServer) knownSDs: {}
>>> jsonrpc.Executor/5::INFO::2016-07-28 18:50:57,817::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 469, 'id': u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
>>> jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,817::task::1191::Storage.TaskManager.Task::(prepare) Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::finished: {'statuslist': [{'status': 469, 'id': u'2d285de3-eede-42aa-b7d6-7b8c6e0667bc'}]}
>>> jsonrpc.Executor/5::DEBUG::2016-07-28 18:50:57,817::task::595::Storage.TaskManager.Task::(_updateState) Task=`21487eb4-de9b-47a3-aa37-7dce06533cc9`::moving from state preparing -> state finished
>>>
>>> I can manually mount the gluster volume on the same server.
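>>>
>>> So the mount itself works and it looks like it is the access validation
>>> in fileSD.py that fails. A rough manual equivalent of that check might be
>>> the following, with the volume mounted by hand on /mnt/test (any scratch
>>> directory works; vdsm runs as user vdsm, uid 36):
>>>
>>> sudo -u vdsm test -r /mnt/test -a -w /mnt/test -a -x /mnt/test && echo access ok || echo access denied
>>> ls -ldn /mnt/test   # numeric uid/gid of the volume root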
>>>
>>>>> Setup:
>>>>> engine running on a separate node
>>>>> 3 x kvm/glusterd nodes
>>>>>
>>>>> Status of volume: ovirt
>>>>> Gluster process                          TCP Port  RDMA Port  Online  Pid
>>>>> ------------------------------------------------------------------------------
>>>>> Brick 172.16.0.11:/data/brick1/brick1    49152     0          Y       17304
>>>>> Brick 172.16.0.12:/data/brick3/brick3    49155     0          Y       9363
>>>>> Brick 172.16.0.13:/data/brick1/brick1    49152     0          Y       23684
>>>>> Brick 172.16.0.11:/data/brick2/brick2    49153     0          Y       17323
>>>>> Brick 172.16.0.12:/data/brick2/brick2    49153     0          Y       9382
>>>>> Brick 172.16.0.13:/data/brick2/brick2    49153     0          Y       23703
>>>>> NFS Server on localhost                  2049      0          Y       30508
>>>>> Self-heal Daemon on localhost            N/A       N/A        Y       30521
>>>>> NFS Server on 172.16.0.11                2049      0          Y       24999
>>>>> Self-heal Daemon on 172.16.0.11          N/A       N/A        Y       25016
>>>>> NFS Server on 172.16.0.13                2049      0          Y       25379
>>>>> Self-heal Daemon on 172.16.0.13          N/A       N/A        Y       25509
>>>>>
>>>>> Task Status of Volume ovirt
>>>>> ------------------------------------------------------------------------------
>>>>> Task    : Rebalance
>>>>> ID      : 84d5ab2a-275e-421d-842b-928a9326c19a
>>>>> Status  : completed
>>>>>
>>>>> Thanks,
>>>>> Siavash
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users