IIRC, a previous (recent) post stated that going from OVS to legacy was not
supported.
CC
On Wed, Sep 7, 2016 at 3:49 AM, Logan Kuhn <logank@wolfram.com> wrote:
During network testing last night I put one compute node into maintenance
mode and changed its network from legacy to OVS. This caused issues, so I
changed it back. After reverting, SPM contention started and neither host
became SPM; the logs are filled with this error message:
2016-09-06 14:43:38,720 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
(DefaultQuartzScheduler5) [] Command 'SpmStatusVDSCommand(HostName =
ovirttest1, SpmStatusVDSCommandParameters:{runAsync='true',
hostId='d84ebe29-5acd-4e4b-9bee-041b27a2f9f9',
storagePoolId='00000001-0001-0001-0001-0000000000d8'})' execution failed:
VDSGenericException: VDSErrorException: Failed to SpmStatusVDS, error =
(13, 'Sanlock resource read failure', 'Permission denied'), code = 100
I've put all but the master data domain into maintenance mode, and the
permissions on it are vdsm/kvm. The permissions on the NFS share have not
been modified, but I re-exported the share anyway. The share can be seen on
the compute node and I can even mount it on that compute node, as can vdsm:
nfs-server:/rbd/it/ovirt-nfs  293G  111G  183G  38%  /rhev/data-center/mnt/nfs-server:_rbd_it_ovirt-nfs
I'm not really sure where to go from here.
Regards,
Logan
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users