[ovirt-users] no spm node in cluster and unable to start any vm or stopped storage domain

Adam Litke alitke at redhat.com
Wed May 31 19:41:01 UTC 2017


I'm no NFS expert, but for development domains I use the following options:

rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36
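
For context, a full /etc/exports entry with those options might look like
this (the export path and client spec are placeholders, not a
recommendation):

  /exports/data  *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

all_squash together with anonuid=36/anongid=36 maps every client access to
vdsm:kvm (36:36), which is the ownership oVirt expects on a storage domain.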

I wonder if something subtle changed during the upgrade that interacts
poorly with your configuration?
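
To check whether the effective settings changed with the upgrade, it may
help to compare both ends (just a sketch, nothing oVirt-specific):

  # on the NFS server: list exports with their effective options
  exportfs -v

  # on a node: list active NFS mounts and their negotiated options
  nfsstat -m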

On Wed, May 31, 2017 at 11:34 AM, Moritz Baumann <moritz.baumann at inf.ethz.ch> wrote:

> Hi Adam,
>
>> Just an idea, but could this be related to stale mounts from when you
>> rebooted the storage?  Please try the following:
>>
>>  1. Place all nodes into maintenance mode
>>  2. Disable the ovirt NFS exports (see the sketch after this list)
>>      1. Comment out lines in /etc/exports
>>      2. exportfs -r
>>  3. Reboot your nodes
>>  4. Re-enable the ovirt NFS exports
>>  5. Activate your nodes
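>>
>> As a rough sketch of steps 2 and 4 on the storage server (the export
>> path here is a placeholder for yours):
>>
>>      # step 2: comment out the export and re-export
>>      sed -i 's|^/exports/data|#&|' /etc/exports
>>      exportfs -r
>>
>>      # step 4: uncomment it again and re-export
>>      sed -i 's|^#\(/exports/data\)|\1|' /etc/exports
>>      exportfs -r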
>>
>
> All storage domains (data/ISO) are down, the data center is
> non-responsive, and there is no NFS mount on any of the nodes.
>
> I can, however, manually mount the data export and touch a file (as root).
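>
> Concretely, the check was along these lines (hostname and paths are just
> placeholders):
>
>   mount -t nfs storage.example.com:/exports/data /mnt/tmp
>   touch /mnt/tmp/testfile && ls -ln /mnt/tmp/testfile   # should show 36:36
>   umount /mnt/tmp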
>
> So I think stale mounts are not the issue.
>
> However, I went through those steps and the result is the same.
>
> Best,
> Mo
>



-- 
Adam Litke