From j.bittner at nbu.cz Mon Apr 22 02:30:45 2013
From: Jakub Bittner <j.bittner at nbu.cz>
To: users at ovirt.org
Subject: [Users] 1 node cluster problem
Date: Mon, 22 Apr 2013 08:30:36 +0200
Message-ID: <5174D90C.60907@nbu.cz>

Hi,

I have a one-node cluster (oVirt 3.2), and the node went down because of a kernel panic. In the web UI I can see that the host is down, but the VMs are still reported as running (they aren't, of course). I manually rebooted the host and it is now up. The problem is that I cannot activate it in the web UI (it is still red).

If I press "Confirm 'Host has been Rebooted'" it says:
Manual fence did not revoke the selected SPM (node1.site.com) since the master storage domain was not active or could not use another host for the fence operation.

If I try to put the host into maintenance, it says:

Error while executing action: Cannot switch the following Hosts to Maintenance mode: node1.site.com.
One or more running VMs are indicated as non-migratable. The non-migratable VMs are: vm2.site.com.


I removed "HA mode" from vm2.site.com, but did not do the job.

I cannot even power off the "running" VMs.

Is there anything I can do?

Thank you,
Jakub Bittner
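
For reference, the state the engine reports for the host and its VMs can be dumped with the oVirt 3.x Python SDK (ovirt-engine-sdk). This is only an illustrative sketch: the engine URL and credentials are placeholders, and the host name is taken from the thread.

#!/usr/bin/env python
# Hypothetical check of what the engine *thinks* the host and VM states
# are, via the oVirt 3.x Python SDK. URL and credentials are placeholders.
from ovirtsdk.api import API

api = API(url='https://engine.example.com/api',
          username='admin@internal',
          password='secret',
          insecure=True)   # skips CA validation; acceptable for a quick check
try:
    host = api.hosts.get(name='node1.site.com')
    print('host %s: %s' % (host.get_name(), host.get_status().get_state()))
    for vm in api.vms.list():
        # VMs shown with a question mark typically report an 'unknown' state
        print('vm %s: %s' % (vm.get_name(), vm.get_status().get_state()))
finally:
    api.disconnect()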
From iheim at redhat.com Mon Apr 22 02:33:21 2013
From: Itamar Heim <iheim at redhat.com>
To: users at ovirt.org
Subject: Re: [Users] 1 node cluster problem
Date: Mon, 22 Apr 2013 09:33:14 +0300
Message-ID: <5174D9AA.6050108@redhat.com>
In-Reply-To: <5174D90C.60907@nbu.cz>

On 04/22/2013 09:30 AM, Jakub Bittner wrote:
> I have a one-node cluster (oVirt 3.2), and the node went down because
> of a kernel panic. In the web UI I can see that the host is down, but
> the VMs are still reported as running. [...] The problem is that I
> cannot activate it in the web UI (it is still red).

IIRC, there is a corner case with a single host in a DC around this that
was fixed post oVirt 3.2 (http://gerrit.ovirt.org/#/c/13045/), but if you
rebooted the host and it is back up, the engine shouldn't have an issue
connecting to VDSM and detecting that the VMs are no longer running.

If the host and VDSM are up and running, and the host is activated in
the engine - what status is it at?

From j.bittner at nbu.cz Mon Apr 22 02:54:04 2013
From: Jakub Bittner <j.bittner at nbu.cz>
To: users at ovirt.org
Subject: Re: [Users] 1 node cluster problem
Date: Mon, 22 Apr 2013 08:53:54 +0200
Message-ID: <5174DE82.4070908@nbu.cz>
In-Reply-To: <5174D9AA.6050108@redhat.com>

On 22.4.2013 08:33, Itamar Heim wrote:
> If the host and VDSM are up and running, and the host is activated in
> the engine - what status is it at?

The hypervisor node is reported as down by the oVirt engine, but it is
actually running. The VMs have a question mark in their statuses.
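
Since the engine's view and reality disagree here, it can help to list what is actually running on the hypervisor itself. Below is a minimal sketch using the libvirt Python bindings, run on the host; a read-only connection is used so no SASL credentials should be needed on a VDSM host, though treat that detail as an assumption.

#!/usr/bin/env python
# List the domains actually running on this hypervisor, independent of
# what the engine reports. Read-only connection; assumes libvirt-python.
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
try:
    ids = conn.listDomainsID()   # IDs of running domains only
    if not ids:
        print('no domains running - the engine view is stale')
    for dom_id in ids:
        dom = conn.lookupByID(dom_id)
        print('running: %s (id %d)' % (dom.name(), dom_id))
finally:
    conn.close()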
From r.koch at ovido.at Mon Apr 22 05:30:13 2013
From: René Koch (ovido) <r.koch at ovido.at>
To: users at ovirt.org
Subject: Re: [Users] 1 node cluster problem
Date: Mon, 22 Apr 2013 11:29:50 +0200
Message-ID: <1366622990.1776.57.camel@pc-ovido02.lan.ovido.at>
In-Reply-To: <5174DE82.4070908@nbu.cz>

On Mon, 2013-04-22 at 08:53 +0200, Jakub Bittner wrote:
> The hypervisor node is reported as down by the oVirt engine, but it is
> actually running. The VMs have a question mark in their statuses.

Can you attach the vdsm.log of your hypervisor? I guess there will be
error messages like missing logical volumes or libvirt getCapabilities
check issues. I have seen these, and the host behavior was the same as
in your case. I solved it by changing storage to local NFS. I didn't
have enough time to report bugs yet - sorry for this.

Regards,
René
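
Before attaching the full vdsm.log, a small filter script can pull out the kinds of lines René mentions. The log path is the VDSM default; the patterns are just guesses at what is relevant here.

#!/usr/bin/env python
# Print vdsm.log lines matching the error patterns discussed in this
# thread. Path is the VDSM default; patterns are assumptions.
import re

LOG = '/var/log/vdsm/vdsm.log'
pattern = re.compile(r'(ERROR|Traceback|getCapabilities|logical volume)',
                     re.IGNORECASE)

with open(LOG) as f:
    for lineno, line in enumerate(f, 1):
        if pattern.search(line):
            print('%6d: %s' % (lineno, line.rstrip()))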
From iheim at redhat.com Mon Apr 22 05:42:57 2013
From: Itamar Heim <iheim at redhat.com>
To: users at ovirt.org
Subject: Re: [Users] 1 node cluster problem
Date: Mon, 22 Apr 2013 12:42:51 +0300
Message-ID: <5175061B.8060308@redhat.com>
In-Reply-To: <5174DE82.4070908@nbu.cz>

On 04/22/2013 09:53 AM, Jakub Bittner wrote:
> The hypervisor node is reported as down by the oVirt engine, but it is
> actually running. The VMs have a question mark in their statuses.

What happens when you activate the host?
From j.bittner at nbu.cz Mon Apr 22 06:32:29 2013
From: Jakub Bittner <j.bittner at nbu.cz>
To: users at ovirt.org
Subject: Re: [Users] 1 node cluster problem
Date: Mon, 22 Apr 2013 12:32:19 +0200
Message-ID: <517511B3.6030505@nbu.cz>
In-Reply-To: <5175061B.8060308@redhat.com>

On 22.4.2013 11:42, Itamar Heim wrote:
> What happens when you activate the host?

I can't; the Activate option is grayed out.
From j.bittner at nbu.cz Mon Apr 22 10:19:20 2013
From: Jakub Bittner <j.bittner at nbu.cz>
To: users at ovirt.org
Subject: Re: [Users] 1 node cluster problem
Date: Mon, 22 Apr 2013 16:19:11 +0200
Message-ID: <517546DF.7000704@nbu.cz>
In-Reply-To: <517511B3.6030505@nbu.cz>

On 22.4.2013 12:32, Jakub Bittner wrote:
> I can't; the Activate option is grayed out.

I solved it by manually editing the SQL database: I set all VMs to Down,
removed the hypervisor, and reinstalled it. And it works...
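
For anyone hitting the same dead end, the edit described above presumably amounts to something like the following against the engine's PostgreSQL database. The table and status value (vm_dynamic.status, where 0 = Down) match the oVirt 3.x schema as far as can be told, but treat them as assumptions: stop ovirt-engine first and take a backup (engine-backup or pg_dump) before touching anything.

#!/usr/bin/env python
# Sketch of the manual fix described above: mark stuck VMs as Down in the
# engine database. vm_dynamic.status and the value 0 (= Down) are assumed
# from the oVirt 3.x schema; back up the database and stop ovirt-engine
# before running anything like this.
import psycopg2

conn = psycopg2.connect(dbname='engine', user='engine',
                        password='secret', host='localhost')
cur = conn.cursor()
# Mark every VM the engine still believes is running as Down (status 0).
cur.execute("UPDATE vm_dynamic SET status = 0 WHERE status <> 0")
print('%d VM rows updated' % cur.rowcount)
conn.commit()
cur.close()
conn.close()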
From iheim at redhat.com Mon Apr 22 16:40:36 2013
From: Itamar Heim <iheim at redhat.com>
To: users at ovirt.org
Subject: Re: [Users] 1 node cluster problem
Date: Mon, 22 Apr 2013 23:40:28 +0300
Message-ID: <5175A03C.2030504@redhat.com>
In-Reply-To: <517511B3.6030505@nbu.cz>

On 04/22/2013 01:32 PM, Jakub Bittner wrote:
> I can't; the Activate option is grayed out.

Do you remember which flow caused it to be in a down status?