During network testing last night I put one compute node into maintenance mode and changed its network from legacy to OVS. This caused issues, so I changed it back. After reverting, SPM contention started and neither host became SPM; the logs are filled with this error message:

2016-09-06 14:43:38,720 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (DefaultQuartzScheduler5) [] Command 'SpmStatusVDSCommand(HostName = ovirttest1, SpmStatusVDSCommandParameters:{runAsync='true', hostId='d84ebe29-5acd-4e4b-9bee-041b27a2f9f9', storagePoolId='00000001-0001-0001-0001-0000000000d8'})' execution failed: VDSGenericException: VDSErrorException: Failed to SpmStatusVDS, error = (13, 'Sanlock resource read failure', 'Permission denied'), code = 100

I've put all but the master data domain into maintenance mode, and the permissions on it are vdsm/kvm. The permissions on the NFS share have not been modified, but I re-exported the share anyway. The share can be seen from the compute node and I can even mount it there, as can vdsm:

nfs-server:/rbd/it/ovirt-nfs 293G 111G 183G 38% /rhev/data-center/mnt/nfs-server:_rbd_it_ovirt-nfs

I'm not really sure where to go beyond this point.

Regards,
Logan
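
P.S. If it helps, these are the sort of checks I can run from the compute node. The commands below are only a sketch against the mount shown above, and <domain_uuid> is a placeholder for the master data domain's ID:

    # ownership of the domain metadata as seen from the node
    ls -ln /rhev/data-center/mnt/nfs-server:_rbd_it_ovirt-nfs/<domain_uuid>/dom_md/

    # confirm the vdsm user can actually read the sanlock ids file over NFS
    sudo -u vdsm dd if=/rhev/data-center/mnt/nfs-server:_rbd_it_ovirt-nfs/<domain_uuid>/dom_md/ids bs=4096 count=1 of=/dev/null

    # what sanlock itself reports on the host
    sanlock client status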