[Users] Behavior when putting SPM node into maintenance
Gianluca Cecchi
gianluca.cecchi at gmail.com
Tue Mar 19 09:56:58 EDT 2013
Hello,
I have a 3.2 oVirt cluster based on F18 nodes.
One node is the SPM (f18ovn03).
I put the other node (f18ovn01) into maintenance, stop vdsmd and update
its packages, so that I take it to 3.2.1 (the engine was already updated
and restarted beforehand).
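In case it is useful, the same maintenance/activate cycle can also be
driven through the engine REST API instead of the GUI. This is only a
minimal sketch, assuming a 3.2 engine reachable at a placeholder URL
with HTTP basic auth as admin@internal and the Python 'requests'
library; the URL, password and certificate handling below are examples,
not my real setup.

import requests
import xml.etree.ElementTree as ET

ENGINE_URL = "https://engine.example.com/api"   # assumed API entry point
AUTH = ("admin@internal", "password")           # placeholder credentials
HEADERS = {"Content-Type": "application/xml"}

def find_host_id(name):
    # Look the host up by name in GET /api/hosts and return its id.
    r = requests.get("%s/hosts" % ENGINE_URL, auth=AUTH, verify=False)
    r.raise_for_status()
    for host in ET.fromstring(r.content).findall("host"):
        if host.findtext("name") == name:
            return host.get("id")
    return None

def deactivate_host(host_id):
    # Ask the engine to move the host into maintenance.
    r = requests.post("%s/hosts/%s/deactivate" % (ENGINE_URL, host_id),
                      data="<action/>", headers=HEADERS, auth=AUTH,
                      verify=False)
    r.raise_for_status()

def activate_host(host_id):
    # Activate the host again once the update is done.
    r = requests.post("%s/hosts/%s/activate" % (ENGINE_URL, host_id),
                      data="<action/>", headers=HEADERS, auth=AUTH,
                      verify=False)
    r.raise_for_status()

hid = find_host_id("f18ovn01")
if hid:
    deactivate_host(hid)
    # ... stop vdsmd and update the packages on the host here ...
    activate_host(hid)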
It correctly joins the cluster again and I live migrate my 5 VMs to it:
fedora 18
w7
2 x centos 5.6
slackware 32bit
I then put the SPM node, which no longer has any VMs on it, into maintenance.
I stop vdsmd on it and update its packages.
I see the messages below in the GUI; I think this is because the surviving
node takes some time to become the new SPM.
Apparently there are no problems on my side. Are these messages expected?
Or should I have done anything before putting the SPM node into maintenance
(something like electing the other node as SPM beforehand, if such an
option exists)?
Is there any operational test to prove that f18ovn01 now correctly holds
the SPM role?
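The only check I could come up with myself is to ask the engine which
host currently reports the SPM role. Again just a minimal sketch, using
the same placeholder URL and credentials as above, and assuming the 3.2
REST API exposes the role in a <storage_manager> element of each host
(that element name is my assumption; the GUI shows the same information
in its SPM column).

import requests
import xml.etree.ElementTree as ET

ENGINE_URL = "https://engine.example.com/api"   # assumed API entry point
AUTH = ("admin@internal", "password")           # placeholder credentials

def spm_host():
    # Return the name of the host that currently reports the SPM role.
    r = requests.get("%s/hosts" % ENGINE_URL, auth=AUTH, verify=False)
    r.raise_for_status()
    for host in ET.fromstring(r.content).findall("host"):
        if host.findtext("storage_manager") == "true":
            return host.findtext("name")
    return None

print("Current SPM host: %s" % spm_host())   # expected: f18ovn01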
Thanks,
Gianluca
2013-Mar-19, 14:47
Storage Pool Manager runs on Host f18ovn01 (Address: 10.4.4.58).
2013-Mar-19, 14:47
Storage Pool Manager runs on Host f18ovn01 (Address: 10.4.4.58).
2013-Mar-19, 14:47
Invalid status on Data Center Default. Setting status to Non-Responsive.
2013-Mar-19, 14:47
Host f18ovn03 was switched to Maintenance mode by admin at internal.