When you put the SPM host into maintenance, the SPM role is moved to the
second host.
This means that the f18ovn01 host is now the SPM:
2013-Mar-19, 14:47
Storage Pool Manager runs on Host f18ovn01 (Address: 10.4.4.58).
When we move the SPM role, there is no SPM for a few seconds, hence the
status change:
2013-Mar-19, 14:47
Invalid status on Data Center Default. Setting status to Non-Responsive.
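For what it's worth, you can watch that handoff from the engine REST API. Below is a minimal sketch; the engine address, the credentials and the storage_manager element name are assumptions on my part (element names vary between oVirt versions), so treat it as illustrative only:

    #!/usr/bin/env python
    # Minimal sketch: poll the engine until some host reports itself as SPM
    # again after the role moves. Assumptions (not from this thread): the REST
    # API lives at https://<engine>/api, basic auth is accepted, and each
    # <host> exposes a storage_manager flag.
    import time
    import requests
    import xml.etree.ElementTree as ET

    ENGINE = "https://engine.example.com"   # hypothetical engine address
    AUTH = ("admin@internal", "password")   # hypothetical credentials

    def current_spm():
        """Return the name of the host flagged as SPM, or None while the role moves."""
        resp = requests.get(ENGINE + "/api/hosts", auth=AUTH, verify=False)
        resp.raise_for_status()
        for host in ET.fromstring(resp.content).findall("host"):
            if host.findtext("storage_manager") == "true":
                return host.findtext("name")
        return None

    # During the handoff there is a short window with no SPM at all, which is
    # when the "Non-Responsive" data center event above gets logged.
    while current_spm() is None:
        time.sleep(2)
    print("SPM is now: %s" % current_spm())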
On 03/19/2013 03:56 PM, Gianluca Cecchi wrote:
Hello, I have a 3.2 oVirt cluster based on f18 nodes.
One node is the SPM (f18ovn03).
I put the other node (f18ovn01) into maintenance, stop vdsmd, and
update it, so that I take it to 3.2.1 (the engine was already updated
and restarted beforehand).
It correctly joins the cluster and I live migrate my 5 VMs to it:
fedora 18
w7
2 x centos 5.6
slackware 32bit
I then put the SPM node, which doesn't have any VMs on it, into maintenance.
I stop vdsmd on it and update its packages.
I see these messages in the GUI, I think because the surviving node
takes some time to become the SPM.
Apparently there are no problems on my side. Are these messages expected?
Or did I have to do anything before putting the SPM node into
maintenance (something like electing the other one as SPM if it
exists...)?
Is there any operational test to prove that f18ovn01 now correctly
holds the SPM role?
Thanks, Gianluca
2013-Mar-19, 14:47
Storage Pool Manager runs on Host f18ovn01 (Address: 10.4.4.58).
2013-Mar-19, 14:47
Storage Pool Manager runs on Host f18ovn01 (Address: 10.4.4.58).
2013-Mar-19, 14:47
Invalid status on Data Center Default. Setting status to Non-Responsive.
2013-Mar-19, 14:47
Host f18ovn03 was switched to Maintenance mode by admin@internal.
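To answer the question about an operational test: querying the engine REST API is one quick way to confirm the role. The sketch below makes the same assumptions as the one above (hypothetical engine address and credentials; the storage_manager and data_center element names may differ by version):

    #!/usr/bin/env python
    # Sketch of one possible operational check: confirm that f18ovn01 reports
    # storage_manager=true and that the Default data center is back to "up".
    import requests
    import xml.etree.ElementTree as ET

    ENGINE = "https://engine.example.com"   # hypothetical engine address
    AUTH = ("admin@internal", "password")   # hypothetical credentials

    def get_xml(path):
        resp = requests.get(ENGINE + path, auth=AUTH, verify=False)
        resp.raise_for_status()
        return ET.fromstring(resp.content)

    hosts = get_xml("/api/hosts")
    spm_hosts = [h.findtext("name") for h in hosts.findall("host")
                 if h.findtext("storage_manager") == "true"]
    print("SPM host(s): %s" % spm_hosts)        # expect exactly ['f18ovn01']

    dcs = get_xml("/api/datacenters")
    for dc in dcs.findall("data_center"):
        if dc.findtext("name") == "Default":
            print("Default DC state: %s" % dc.findtext("status/state"))  # expect "up"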
--
Dafna Ron