On 06 Jan 2016, at 15:31, Will Dennis <wdennis@nec-labs.com> wrote:
To follow up on this: after the migrations that resulted from the troubleshooting, the webadmin UI now shows each host in my datacenter with “1” VM running…
https://drive.google.com/file/d/0B88nnCy4LpFMYklDVDhFUV96Y00/view?usp=sha...
However, the only VM currently running is the hosted engine, which is on host “ovirt-node-03” —
$ ansible istgroup-ovirt -f 1 -i prod -u root -m shell -a "hosted-engine --vm-status | grep -e '^Hostname' -e '^Engine'"
ovirt-node-01 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-03
Engine status : {"health": "good", "vm": "up", "detail": "up"}
ovirt-node-02 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-03
Engine status : {"health": "good", "vm": "up", "detail": "up"}
ovirt-node-03 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-03
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Is this a UI bug of some sort?
Might be, but I doubt it; it merely reflects what the hosts are reporting.
Are there any other VMs?
Are there migrations going on?
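To answer those two questions from the hosts’ side, it might help to look at what vdsm itself reports per node, since that is what feeds the counts the webadmin UI shows. A possible check, assuming these nodes still ship the legacy vdsClient tool, would be:

# Ask vdsm on each node for its VM table (id, status, name); -s enables SSL and 0 targets the local host
$ ansible istgroup-ovirt -f 1 -i prod -u root -m shell -a "vdsClient -s 0 list table"

A VM reported in a migration state would explain a transient mismatch in the counts; if only the HostedEngine VM shows up, and only on one node, the extra “1” per host in the UI is likely stale.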
On Jan 4, 2016, at 10:47 PM, Will Dennis <wdennis@nec-labs.com> wrote:
Note that in the screenshot I posted above, the webadmin hosts screen says that -node-01 has one VM running and the others have 0… You’d think that would be the HE VM running there, but it’s actually on -node-02:
$ ansible istgroup-ovirt -f 1 -i prod -u root -m shell -a "hosted-engine --vm-status | grep -e '^Hostname' -e '^Engine'"
ovirt-node-01 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Hostname : ovirt-node-03
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
ovirt-node-02 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Hostname : ovirt-node-03
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
ovirt-node-03 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Hostname : ovirt-node-03
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
So it looks like the webadmin UI is wrong as well…
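One way to tell whether this is just a rendering issue or stale data in the engine itself might be to ask the engine’s REST API what it believes is running and compare that with the UI. This is only a sketch; ENGINE_FQDN and PASSWORD are placeholders, and it assumes basic auth against the API as admin@internal:

# List the VMs the engine thinks are up; -k skips certificate verification for a self-signed engine CA
$ curl -k -u admin@internal:PASSWORD "https://ENGINE_FQDN/ovirt-engine/api/vms?search=status%3Dup"

If the API also claims a running VM on -node-01, the stale record is on the engine side rather than a webadmin display bug.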