[ovirt-users] Webadmin UI not reporting # of VMs correctly
Will Dennis
wdennis at nec-labs.com
Wed Jan 6 09:31:25 EST 2016
To follow up on this: after the migrations performed during the troubleshooting, the webadmin UI now shows every host in my datacenter with “1” VM running…
https://drive.google.com/file/d/0B88nnCy4LpFMYklDVDhFUV96Y00/view?usp=sharing
However, the only VM currently running is the hosted engine, which is on host “ovirt-node-03” —
$ ansible istgroup-ovirt -f 1 -i prod -u root -m shell -a "hosted-engine --vm-status | grep -e '^Hostname' -e '^Engine'"
ovirt-node-01 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-03
Engine status : {"health": "good", "vm": "up", "detail": "up"}
ovirt-node-02 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-03
Engine status : {"health": "good", "vm": "up", "detail": "up"}
ovirt-node-03 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-03
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Is this a UI bug of some sort?
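(For anyone wanting to reproduce the check without eyeballing the output above: here is a small helper I could have used, a sketch only, not tested against every hosted-engine version. It assumes the "Hostname" / "Engine status" line pairs shown above, and prints just the host whose local agent reports the HE VM up.)

```shell
# he_up_host: read "hosted-engine --vm-status"-style text on stdin and
# print the hostname whose "Engine status" reports the HE VM as up.
he_up_host() {
  grep -e '^Hostname' -e '^Engine' \
    | paste - -                  `# join each Hostname line with its status line` \
    | grep -F '"vm": "up"'       `# keep only the host reporting the VM up` \
    | awk '{print $3}'           `# third field of "Hostname : <name>" is the name`
}

# Example usage on any HA host (run as root):
#   hosted-engine --vm-status | he_up_host
```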
On Jan 4, 2016, at 10:47 PM, Will Dennis <wdennis at nec-labs.com<mailto:wdennis at nec-labs.com>> wrote:
Note that in the screenshot I posted above, the webadmin Hosts screen says that -node-01 has one VM running and the others 0… You’d think that would be the HE VM running there, but it’s actually on -node-02:
$ ansible istgroup-ovirt -f 1 -i prod -u root -m shell -a "hosted-engine --vm-status | grep -e '^Hostname' -e '^Engine'"
ovirt-node-01 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Hostname : ovirt-node-03
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
ovirt-node-02 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Hostname : ovirt-node-03
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
ovirt-node-03 | success | rc=0 >>
Hostname : ovirt-node-01
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down", "detail": "down"}
Hostname : ovirt-node-02
Engine status : {"health": "good", "vm": "up", "detail": "up"}
Hostname : ovirt-node-03
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
So it looks like the webadmin UI is wrong as well…
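(One way to cross-check the UI’s per-host VM count against what vdsm itself believes, a rough sketch, untested on this cluster: it assumes vdsClient, the oVirt 3.x vdsm CLI, is present on the hosts and that "list table" prints one row per running VM.)

```shell
# count_running_vms: count rows on stdin, one row per running VM.
count_running_vms() {
  grep -c .   # count non-empty lines
}

# Example usage on a host (assumption: vdsClient from vdsm-cli is installed):
#   vdsClient -s 0 list table | count_running_vms
# Or across the cluster with the same ansible pattern as above:
#   ansible istgroup-ovirt -f 1 -i prod -u root -m shell \
#     -a "vdsClient -s 0 list table | grep -c ."
```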