POWER9 (ppc64le) Support on oVirt 4.4.1

Hello, I was using oVirt 4.3.10 with an IBM AC922 (POWER9 / ppc64le) without any issues.

Since I’ve moved to 4.4.1 I can’t add the AC922 machine to the engine anymore; it complains with the following error: The host CPU does not match the Cluster CPU type and is running in degraded mode. It is missing the following CPU flags: model_POWER9, powernv.

Any idea what may be happening? The engine runs on x86_64, and I was using it this way on 4.3.10.

Machine info:

timebase : 512000000
platform : PowerNV
model : 8335-GTH
machine : PowerNV 8335-GTH
firmware : OPAL
MMU : Radix

Thanks,
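
For context, the machine info above is what /proc/cpuinfo reports on a PowerNV host. A purely illustrative way to collect the same fields, plus the CPU model line, on the host itself:

# grep -m1 '^cpu' /proc/cpuinfo
# grep -E '^(timebase|platform|model|machine|firmware|MMU)' /proc/cpuinfo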

On Mon, Aug 24, 2020 at 1:30 AM Vinícius Ferrão via Users <users@ovirt.org> wrote:
Can you please provide the output of 'vdsm-client Host getCapabilities' on that host?
Thanks,
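
As a reference for anyone hitting the same error: the flags the engine complains about (model_POWER9, powernv) come from VDSM's capability report, so the quickest check is whether the host actually reports them. A minimal sketch, assuming the usual cpuFlags/cpuModel keys in the getCapabilities JSON:

# vdsm-client Host getCapabilities > /tmp/caps.json
# grep -E '"cpuModel"|"cpuFlags"' /tmp/caps.json

If the first command prints nothing at all (as turns out to be the case below), the problem is the host installation itself rather than a real CPU mismatch.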

Hello Arik,

This is probably the issue. Output totally empty:

[root@power ~]# vdsm-client Host getCapabilities
[root@power ~]#

Here are the packages installed on the machine (grepped ovirt and vdsm in rpm -qa output):

ovirt-imageio-daemon-2.0.8-1.el8ev.ppc64le
ovirt-imageio-client-2.0.8-1.el8ev.ppc64le
ovirt-host-4.4.1-4.el8ev.ppc64le
ovirt-vmconsole-host-1.0.8-1.el8ev.noarch
ovirt-host-dependencies-4.4.1-4.el8ev.ppc64le
ovirt-imageio-common-2.0.8-1.el8ev.ppc64le
ovirt-vmconsole-1.0.8-1.el8ev.noarch
vdsm-hook-vmfex-dev-4.40.22-1.el8ev.noarch
vdsm-hook-fcoe-4.40.22-1.el8ev.noarch
vdsm-hook-ethtool-options-4.40.22-1.el8ev.noarch
vdsm-hook-openstacknet-4.40.22-1.el8ev.noarch
vdsm-common-4.40.22-1.el8ev.noarch
vdsm-python-4.40.22-1.el8ev.noarch
vdsm-jsonrpc-4.40.22-1.el8ev.noarch
vdsm-api-4.40.22-1.el8ev.noarch
vdsm-yajsonrpc-4.40.22-1.el8ev.noarch
vdsm-4.40.22-1.el8ev.ppc64le
vdsm-network-4.40.22-1.el8ev.ppc64le
vdsm-http-4.40.22-1.el8ev.noarch
vdsm-client-4.40.22-1.el8ev.noarch
vdsm-hook-vhostmd-4.40.22-1.el8ev.noarch

Any ideas to try?

Thanks.

What a strange thing is happening here:

[root@power ~]# file /usr/bin/vdsm-client
/usr/bin/vdsm-client: empty
[root@power ~]# ls -l /usr/bin/vdsm-client
-rwxr-xr-x. 1 root root 0 Jul 3 06:23 /usr/bin/vdsm-client

A lot of files are just empty. I’ve tried reinstalling vdsm-client, which worked, but there are other zeroed files:

Transaction test succeeded.
Running transaction
  Preparing        :                                      1/1
  Reinstalling     : vdsm-client-4.40.22-1.el8ev.noarch   1/2
  Cleanup          : vdsm-client-4.40.22-1.el8ev.noarch   2/2
  Running scriptlet: vdsm-client-4.40.22-1.el8ev.noarch   2/2
/sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked.
  Verifying        : vdsm-client-4.40.22-1.el8ev.noarch   1/2
  Verifying        : vdsm-client-4.40.22-1.el8ev.noarch   2/2
Installed products updated.

Reinstalled:
  vdsm-client-4.40.22-1.el8ev.noarch

I’ve never seen something like this. I’ve already reinstalled the host from the ground up and the same thing happens.
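
For what it's worth, that ldconfig output can be turned into a targeted fix by asking rpm which packages own the zero-length files and reinstalling only those. A minimal sketch (the directories scanned and the dnf invocation are illustrative; on EL8 dnf and yum are interchangeable):

# find /lib64 /usr/bin /usr/sbin -xdev -type f -empty > /tmp/empty-files
# xargs -a /tmp/empty-files -r rpm -qf --qf '%{NAME}\n' | grep -v 'not owned' | sort -u > /tmp/broken-pkgs
# xargs -a /tmp/broken-pkgs -r dnf -y reinstall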

Okay, here we go, Arik.

With your insight I’ve done the following:

# rpm -Va

This showed what’s zeroed on the machine. Since it was a lot of things, I just went crazy and did:

yum list installed | cut -f 1 -d " " > file
yum -y reinstall `cat file | xargs`

Reinstalled everything.

Everything worked as expected and I finally added the machine back to the cluster. It’s operational.

Now I have another issue: I have 3 VMs that are ppc64le, and when trying to import them, the Hosted Engine identifies them as x86_64:

<PastedGraphic-2.png>

So… this appears to be a bug. Any idea how to force it back to ppc64? I can’t manually force the import on the Hosted Engine since there are no buttons to do this…

Ideas?
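
The blanket reinstall clearly worked, but for the record it can usually be narrowed down: rpm -Va already lists every file that fails verification, so one can reinstall just the packages that own those files. A rough sketch along the same lines (not what was actually run in this thread):

# rpm -Va 2>/dev/null | awk '{print $NF}' | xargs -r rpm -qf --qf '%{NAME}\n' | grep -v 'not owned' | sort -u > /tmp/failed-pkgs
# xargs -a /tmp/failed-pkgs -r dnf -y reinstall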

On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org> wrote:
Okay here we go Arik.
With your insight I’ve done the following:
# rpm -Va
This showed what’s zeroed on the machine. Since it was a lot of things, I just went crazy and did:
You should still have the host deploy logs on the engine machine. It’s weird that it succeeded, unless it somehow happened afterwards?
yum list installed | cut -f 1 -d " " > file
yum -y reinstall `cat file | xargs`
Reinstalled everything.
Everything worked as expected and I finally added the machine back to the cluster. It’s operational.
Eh, I wouldn’t trust it much. Did you run a redeploy at least?
Now I have another issue: I have 3 VMs that are ppc64le, and when trying to import them, the Hosted Engine identifies them as x86_64:
<PastedGraphic-2.png>
So…
This appears to be a bug. Any idea how to force it back to ppc64? I can’t manually force the import on the Hosted Engine since there are no buttons to do this…
How exactly did you import them? Could be a bug indeed. We don’t support changing it as it doesn’t make sense; the guest can’t be converted. Thanks, michal
Ideas?
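
Regarding the host deploy logs mentioned above: on the engine machine they are normally kept under /var/log/ovirt-engine/host-deploy/, one log per deployment attempt. The exact file name below is only illustrative; names vary by version and host:

# ls -lt /var/log/ovirt-engine/host-deploy/ | head
# less /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-<timestamp>-<hostname>.log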

Hi Michal,

On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:

You should still have the host deploy logs on the engine machine. It’s weird that it succeeded, unless it somehow happened afterwards?

It only succeeded after my yum reinstall rampage.

Eh, I wouldn’t trust it much. Did you run a redeploy at least?

I’ve done a reinstall from the web interface of the engine. I can reinstall the host, there’s nothing running on it… going to try a third format.

How exactly did you import them? Could be a bug indeed. We don’t support changing it as it doesn’t make sense; the guest can’t be converted.

Yeah, I did the normal procedure: added the storage domain to the engine and clicked on “Import VM”. It was immediately detected as x86_64.

Since I wasn’t able to upgrade my environment from 4.3.10 to 4.4.1 due to random errors when redeploying the engine with the backup from 4.3.10, I just reinstalled it, reconfigured everything and then imported the storage domains.

I don’t know where the information about the architecture is stored in the storage domain; I tried to search for some metadata files inside the domain but nothing came up. Is there a way to force this change? There must be a way.

I even tried to import the machine as x86_64, so I could delete the VM and just reattach the disks to a new one, effectively not losing the data, but…

<PastedGraphic-1.png>

Yeah, so something is broken. The check during the import appears to be OK, but the interface does not allow me to import it to the ppc64le machine, since it’s read as x86_64.
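
On where the architecture information lives: when a storage domain is attached and "Import VM" is offered, the engine reads the VM definitions from the domain's OVF_STORE disks, which are tar archives holding one OVF file per VM (the OS / architecture hint ends up in the OperatingSystemSection of each OVF). A rough sketch for peeking at them on a file-based (NFS/GlusterFS) domain; every path and UUID below is illustrative, and the OVF_STORE disk IDs can be read from the storage domain's Disks tab:

# cd /rhev/data-center/mnt/<server>:_<export>/<sd_uuid>/images/<ovf_store_disk_id>
# tar -tvf <volume_uuid>                      # lists the <vm_id>.ovf entries (tar may warn about trailing padding)
# tar -xOf <volume_uuid> <vm_id>.ovf | grep -io 'ppc64le\|ppc64\|x86_64' | sort -u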

On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hi Michal,
On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org> wrote:
Okay here we go Arik.
With your insight I’ve done the following:
# rpm -Va
This showed what’s zeroed on the machine. Since it was a lot of things, I just went crazy and did:
You should still have the host deploy logs on the engine machine. It’s weird that it succeeded, unless it somehow happened afterwards?
It only succeeded after my yum reinstall rampage.
yum list installed | cut -f 1 -d " " > file
yum -y reinstall `cat file | xargs`
Reinstalled everything.
Everything worked as expected and I finally added the machine back to the cluster. It’s operational.
Eh, I wouldn’t trust it much. Did you run a redeploy at least?
I’ve done a reinstall from the web interface of the engine. I can reinstall the host, there’s nothing running on it… going to try a third format.
Now I have another issue: I have 3 VMs that are ppc64le, and when trying to import them, the Hosted Engine identifies them as x86_64:
<PastedGraphic-2.png>
So…
This appears to be a bug. Any idea how to force it back to ppc64? I can’t manually force the import on the Hosted Engine since there are no buttons to do this…
How exactly did you import them? Could be a bug indeed. We don’t support changing it as it doesn’t make sense; the guest can’t be converted.
Yeah, I did the normal procedure: added the storage domain to the engine and clicked on “Import VM”. It was immediately detected as x86_64.
Since I wasn’t able to upgrade my environment from 4.3.10 to 4.4.1 due to random errors when redeploying the engine with the backup from 4.3.10, I just reinstalled it, reconfigured everything and then imported the storage domains.
I don’t know where the information about the architecture is stored in the storage domain; I tried to search for some metadata files inside the domain but nothing came up. Is there a way to force this change? There must be a way.
I even tried to import the machine as x86_64, so I could delete the VM and just reattach the disks to a new one, effectively not losing the data, but…
Yeah, so something is broken. The check during the import appears to be OK, but the interface does not allow me to import it to the ppc64le machine, since it’s read as x86_64.
Could you please provide the output of the following query from the database: select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br';
Thanks, michal
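
For anyone who needs to run that query: on the engine host it can be executed with the bundled psql wrapper, which already knows the engine database credentials (path as shipped on a standard oVirt 4.4 engine install; adjust if yours differs):

# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br';"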

On 27 Aug 2020, at 16:03, Arik Hadas <ahadas@redhat.com> wrote:

Could you please provide the output of the following query from the database: select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br';

Sure, there you go:

46ad1d80-2649-48f5-92e6-e5489d11d30c | energy.versatushpc.com.br | VM | 1 | | d19456e4-0051-456e-b33c-57348a78c2e0 |
<?xml version="1.0" encoding="UTF-8"?>
<ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingDa..." xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovf:version="4.1.0.0">
<References><File ovf:href="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:id="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="512" ovf:description="Active VM" ovf:disk_storage_type="IMAGE" ovf:cinder_volume_type=""></File></References>
<NetworkSection><Info>List of networks</Info><Network ovf:name="legacyservers"></Network></NetworkSection>
<Section xsi:type="ovf:DiskSection_Type"><Info>List of Virtual Disks</Info><Disk ovf:diskId="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="40" ovf:actual_size="1" ovf:vm_snapshot_id="6de58683-c586-4e97-b0e8-ee7ee3baf754" ovf:parentRef="" ovf:fileRef="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:format="http://www.vmware.com/specifications/vmdk.html#sparse" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:disk-interface="VirtIO_SCSI" ovf:read-only="false" ovf:shareable="false" ovf:boot="true" ovf:pass-discard="false" ovf:disk-alias="energy.versatushpc.com.br_Disk1" ovf:disk-description="" ovf:wipe-after-delete="false"></Disk></Section>
<Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type"><Name>energy.versatushpc.com.br</Name><Description>Holds Kosen backend and frontend prod services (nginx + docker)</Description><Comment></Comment><CreationDate>2020/08/19 20:11:33</CreationDate><ExportDate>2020/08/20 18:37:41</ExportDate><DeleteProtected>false</DeleteProtected><SsoMethod>guest_agent</SsoMethod><IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>1</NumOfIoThreads><TimeZone>Etc/GMT</TimeZone><default_boot_sequence>9</default_boot_sequence><Generation>8</Generation><ClusterCompatibilityVersion>4.3</ClusterCompatibilityVersion><VmType>1</VmType><ResumeBehavior>AUTO_RESUME</ResumeBehavior><MinAllocatedMem>2730</MinAllocatedMem><IsStateless>false</IsStateless><IsRunAndPause>false</IsRunAndPause><AutoStartup>false</AutoStartup><Priority>1</Priority><CreatedByUserId>6ea16f22-45d7-11ea-bd83-00163e518b7c</CreatedByUserId><MigrationSupport>0</MigrationSupport><IsBootMenuEnabled>false</IsBootMenuEnabled><IsSpiceFileTransferEnabled>true</IsSpiceFileTransferEnabled><IsSpiceCopyPasteEnabled>true</IsSpiceCopyPasteEnabled><AllowConsoleReconnect>true</AllowConsoleReconnect><ConsoleDisconnectAction>LOCK_SCREEN</ConsoleDisconnectAction><CustomEmulatedMachine></CustomEmulatedMachine><BiosType>0</BiosType><CustomCpuName></CustomCpuName><PredefinedProperties></PredefinedProperties><UserDefinedProperties></UserDefinedProperties><MaxMemorySizeMb>16384</MaxMemorySizeMb><MultiQueuesEnabled>true</MultiQueuesEnabled><UseHostCpu>false</UseHostCpu><ClusterName>Blastoise</ClusterName><TemplateId>00000000-0000-0000-0000-000000000000</TemplateId><TemplateName>Blank</TemplateName><IsInitilized>true</IsInitilized><Origin>0</Origin><quota_id>32644894-755e-4588-b967-8fb9dc327795</quota_id><DefaultDisplayType>2</DefaultDisplayType><TrustedService>false</TrustedService><OriginalTemplateId>00000000-0000-0000-0000-000000000000</OriginalTemplateId><OriginalTemplateName>Blank</OriginalTemplateName><CpuPinning></CpuPinning><UseLatestVersion>false</UseLatestVersion><StopTime>2020/08/20 17:52:35</StopTime>
<Section ovf:id="46ad1d80-2649-48f5-92e6-e5489d11d30c" ovf:required="false" xsi:type="ovf:OperatingSystemSection_Type"><Info>Guest Operating System</Info><Description>other_linux_ppc64</Description></Section>
<Section xsi:type="ovf:VirtualHardwareSection_Type"><Info>2 CPU, 4096 Memory</Info><System><vssd:VirtualSystemType>ENGINE 4.1.0.0</vssd:VirtualSystemType></System>
<Item><rasd:Caption>2 virtual cpu</rasd:Caption><rasd:Description>Number of virtual CPU</rasd:Description><rasd:InstanceId>1</rasd:InstanceId><rasd:ResourceType>3</rasd:ResourceType><rasd:num_of_sockets>2</rasd:num_of_sockets><rasd:cpu_per_socket>1</rasd:cpu_per_socket><rasd:threads_per_cpu>1</rasd:threads_per_cpu><rasd:max_num_of_vcpus>16</rasd:max_num_of_vcpus><rasd:VirtualQuantity>2</rasd:VirtualQuantity></Item>
<Item><rasd:Caption>4096 MB of memory</rasd:Caption><rasd:Description>Memory Size</rasd:Description><rasd:InstanceId>2</rasd:InstanceId><rasd:ResourceType>4</rasd:ResourceType><rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits><rasd:VirtualQuantity>4096</rasd:VirtualQuantity></Item>
<Item><rasd:Caption>energy.versatushpc.com.br_Disk1</rasd:Caption><rasd:InstanceId>b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:InstanceId><rasd:ResourceType>17</rasd:ResourceType><rasd:HostResource>775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:HostResource><rasd:Parent>00000000-0000-0000-0000-000000000000</rasd:Parent><rasd:Template>00000000-0000-0000-0000-000000000000</rasd:Template><rasd:ApplicationList></rasd:ApplicationList><rasd:StorageId>d19456e4-0051-456e-b33c-57348a78c2e0</rasd:StorageId><rasd:StoragePoolId>6c54f91e-89bf-45b4-bc48-56e74c4efd5e</rasd:StoragePoolId><rasd:CreationDate>2020/08/19 20:13:05</rasd:CreationDate><rasd:LastModified>1970/01/01 00:00:00</rasd:LastModified><rasd:last_modified_date>2020/08/20 18:37:41</rasd:last_modified_date><Type>disk</Type><Device>disk</Device><rasd:Address>{type=drive, bus=0, controller=1, target=0, unit=0}</rasd:Address><BootOrder>1</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-775b24a9-6a32-431a-831f-4ac9b3b31152</Alias></Item>
<Item><rasd:Caption>Ethernet adapter on legacyservers</rasd:Caption><rasd:InstanceId>e6e37ae1-f263-4986-a039-e8e01e72d1f4</rasd:InstanceId><rasd:ResourceType>10</rasd:ResourceType><rasd:OtherResourceType>legacyservers</rasd:OtherResourceType><rasd:ResourceSubType>3</rasd:ResourceSubType><rasd:Connection>legacyservers</rasd:Connection><rasd:Linked>true</rasd:Linked><rasd:Name>nic1</rasd:Name><rasd:ElementName>nic1</rasd:ElementName><rasd:MACAddress>56:6f:f0:b3:00:23</rasd:MACAddress><rasd:speed>10000</rasd:speed><Type>interface</Type><Device>bridge</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-e6e37ae1-f263-4986-a039-e8e01e72d1f4</Alias></Item>
<Item><rasd:Caption>USB Controller</rasd:Caption><rasd:InstanceId>3</rasd:InstanceId><rasd:ResourceType>23</rasd:ResourceType><rasd:UsbPolicy>DISABLED</rasd:UsbPolicy></Item>
<Item><rasd:Caption>Graphical Controller</rasd:Caption><rasd:InstanceId>1440c749-728e-4a86-afc1-8237c6055fa5</rasd:InstanceId><rasd:ResourceType>20</rasd:ResourceType><rasd:VirtualQuantity>1</rasd:VirtualQuantity><rasd:SinglePciQxl>false</rasd:SinglePciQxl><Type>video</Type><Device>vga</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-1440c749-728e-4a86-afc1-8237c6055fa5</Alias><SpecParams><vram>16384</vram></SpecParams></Item>
<Item><rasd:Caption>Graphical Framebuffer</rasd:Caption><rasd:InstanceId>603e7f0c-8d28-4c3e-bd90-c5685b752100</rasd:InstanceId><rasd:ResourceType>26</rasd:ResourceType><Type>graphics</Type><Device>vnc</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias></Item>
<Item><rasd:Caption>CDROM</rasd:Caption><rasd:InstanceId>3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</rasd:InstanceId><rasd:ResourceType>15</rasd:ResourceType><Type>disk</Type><Device>cdrom</Device><rasd:Address>{type=drive, bus=0, controller=0, target=0, unit=2}</rasd:Address><BootOrder>2</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</Alias><SpecParams><path>CentOS-8.1.1911-x86_64-boot.iso</path></SpecParams></Item>
<Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>66f3a2b8-d2c5-4032-9f10-8742d65a0a3e</rasd:InstanceId><Type>controller</Type><Device>scsi</Device><rasd:Address>{type=spapr-vio}</rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index></SpecParams></Item>
<Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>e065acb2-e7db-4f55-a1df-385f19299bd0</rasd:InstanceId><Type>rng</Type><Device>virtio</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-e065acb2-e7db-4f55-a1df-385f19299bd0</Alias><SpecParams><source>urandom</source></SpecParams></Item>
<Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>7b4c4ef6-2a9a-4120-b838-3127db0fd703</rasd:InstanceId><Type>balloon</Type><Device>memballoon</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-7b4c4ef6-2a9a-4120-b838-3127db0fd703</Alias><SpecParams><model>virtio</model></SpecParams></Item>
<Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>5aade6c7-8f77-4fea-a5de-66350b214935</rasd:InstanceId><Type>controller</Type><Device>virtio-scsi</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><ioThreadId></ioThreadId></SpecParams></Item>
<Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>4d4d7bfd-b1e8-45c3-a5e8-7e0b7773bbf2</rasd:InstanceId><Type>controller</Type><Device>virtio-serial</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>58ca7b19-0071-00c0-01d6-000000000212</Alias></Item>
<Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>9cea63da-7afd-41d4-925f-369f993b280f</rasd:InstanceId><Type>controller</Type><Device>usb</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index><model>nec-xhci</model></SpecParams></Item>
</Section>
<Section xsi:type="ovf:SnapshotsSection_Type"><Snapshot ovf:id="6de58683-c586-4e97-b0e8-ee7ee3baf754"><Type>ACTIVE</Type><Description>Active VM</Description><CreationDate>2020/08/19 20:11:33</CreationDate></Snapshot></Section>
</Content></ovf:Envelope> | | 0

Thank you!
Verifying : vdsm-client-4.40.22-1.el8ev.noarch 1/2 Verifying : vdsm-client-4.40.22-1.el8ev.noarch 2/2 Installed products updated. Reinstalled: vdsm-client-4.40.22-1.el8ev.noarch I’ve never seen something like this. I’ve already reinstalled the host from the ground and the same thing happens. On 26 Aug 2020, at 14:28, Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Hello Arik, This is probably the issue. Output totally empty: [root@power ~]# vdsm-client Host getCapabilities [root@power ~]# Here are the packages installed on the machine: (grepped ovirt and vdsm on rpm -qa) ovirt-imageio-daemon-2.0.8-1.el8ev.ppc64le ovirt-imageio-client-2.0.8-1.el8ev.ppc64le ovirt-host-4.4.1-4.el8ev.ppc64le ovirt-vmconsole-host-1.0.8-1.el8ev.noarch ovirt-host-dependencies-4.4.1-4.el8ev.ppc64le ovirt-imageio-common-2.0.8-1.el8ev.ppc64le ovirt-vmconsole-1.0.8-1.el8ev.noarch vdsm-hook-vmfex-dev-4.40.22-1.el8ev.noarch vdsm-hook-fcoe-4.40.22-1.el8ev.noarch vdsm-hook-ethtool-options-4.40.22-1.el8ev.noarch vdsm-hook-openstacknet-4.40.22-1.el8ev.noarch vdsm-common-4.40.22-1.el8ev.noarch vdsm-python-4.40.22-1.el8ev.noarch vdsm-jsonrpc-4.40.22-1.el8ev.noarch vdsm-api-4.40.22-1.el8ev.noarch vdsm-yajsonrpc-4.40.22-1.el8ev.noarch vdsm-4.40.22-1.el8ev.ppc64le vdsm-network-4.40.22-1.el8ev.ppc64le vdsm-http-4.40.22-1.el8ev.noarch vdsm-client-4.40.22-1.el8ev.noarch vdsm-hook-vhostmd-4.40.22-1.el8ev.noarch Any ideias to try? Thanks. On 26 Aug 2020, at 05:09, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Mon, Aug 24, 2020 at 1:30 AM Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Hello, I was using oVirt 4.3.10 with IBM AC922 (POWER9 / ppc64le) without any issues. Since I’ve moved to 4.4.1 I can’t add the AC922 machine to the engine anymore, it complains with the following error: The host CPU does not match the Cluster CPU type and is running in degraded mode. It is missing the following CPU flags: model_POWER9, powernv. Any ideia of what’s may be happening? The engine runs on x86_64, and I was using this way on 4.3.10. Machine info: timebase : 512000000 platform : PowerNV model : 8335-GTH machine : PowerNV 8335-GTH firmware : OPAL MMU : Radix Can you please provide the output of 'vdsm-client Host getCapabilities' on that host? Thanks, _______________________________________________ Users mailing list -- users@ovirt.org<mailto:users@ovirt.org> To unsubscribe send an email to users-leave@ovirt.org<mailto:users-leave@ovirt.org> Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/RV6FHRGKGPPZHV... _______________________________________________ Users mailing list -- users@ovirt.org<mailto:users@ovirt.org> To unsubscribe send an email to users-leave@ovirt.org<mailto:users-leave@ovirt.org> Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3DFMIR7764V6P4... 
_______________________________________________ Users mailing list -- users@ovirt.org<mailto:users@ovirt.org> To unsubscribe send an email to users-leave@ovirt.org<mailto:users-leave@ovirt.org> Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/MLSRBXRNNBPHFV... _______________________________________________ Users mailing list -- users@ovirt.org<mailto:users@ovirt.org> To unsubscribe send an email to users-leave@ovirt.org<mailto:users-leave@ovirt.org> Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/YMNMYMBMWTC7UG...

On Thu, Aug 27, 2020 at 10:13 PM Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
On 27 Aug 2020, at 16:03, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hi Michal,
On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org> wrote:
Okay here we go Arik.
With your insight I’ve done the following:
# rpm -Va
This showed what’s zeroed on the machine. Since it was a lot of things, I just went crazy and did:
you should still have host deploy logs on the engine machine. it’s weird it succeeded, unless it somehow happened afterwards?
It only succeeded after my yum reinstall rampage.
yum list installed | cut -f 1 -d " " > file
yum -y reinstall `cat file | xargs`
Reinstalled everything.
Everything worked as expected and I finally added the machine back to the cluster. It’s operational.
eh, I wouldn’t trust it much. did you run redeploy at least?
I’ve done a reinstall from the web interface of the engine. I can reinstall the host, there’s nothing running on it… gonna try a third format.
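A more targeted variant of that recovery, just as a sketch (it assumes the zeroed files show up as verification failures in rpm -Va, which the ones above do), would be to reinstall only the packages whose files fail verification instead of everything:

# list the packages owning files that fail RPM verification (size/digest mismatches, missing files)
rpm -Va 2>/dev/null | awk '{print $NF}' | xargs -r rpm -qf --queryformat '%{NAME}\n' 2>/dev/null | sort -u > broken-pkgs
# reinstall just those packages
yum -y reinstall $(cat broken-pkgs)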
Now I have another issue: I have 3 VMs that are ppc64le, and when trying to import them, the Hosted Engine identifies them as x86_64:
<PastedGraphic-2.png>
So…
This appears to be a bug. Any idea on how to force it back to ppc64? I can’t manually force the import on the Hosted Engine since there are no buttons to do this…
how exactly did you import them? could be a bug indeed. we don’t support changing it as it doesn’t make sense, the guest can’t be converted
Yeah. I did the normal procedure: added the storage domain to the engine and clicked on “Import VM”. Immediately it was detected as x86_64.
Since I wasn’t able to upgrade my environment from 4.3.10 to 4.4.1 due to random errors when redeploying the engine with the backup from 4.3.10, I just reinstalled it, reconfigured everything and then imported the storage domains.
I don’t know where the architecture information is stored in the storage domain; I tried to search for some metadata files inside the domain but nothing came up. Is there a way to force this change? There must be one.
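For what it’s worth, on a file-based domain the VM definitions that the import reads live inside the OVF_STORE disk images, which are plain tar archives with one <vm_id>.ovf per VM; a rough way to peek at one (the paths and IDs below are placeholders, not taken from this setup):

# list the OVFs stored in an OVF_STORE volume, then grep one VM's OVF for architecture hints
tar -tvf /rhev/data-center/mnt/<nfs_export>/<domain_id>/images/<ovf_store_image_id>/<volume_id>
tar -xOf /rhev/data-center/mnt/<nfs_export>/<domain_id>/images/<ovf_store_image_id>/<volume_id> <vm_id>.ovf | grep -io 'ppc64\|x86_64'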
I even tried to import the machine as x86_64, so I could delete the VM and just reattach the disks to a new one, effectively not losing the data, but…
<PastedGraphic-1.png>
Yeah, so something is broken. The check during the import appears to be OK, but the interface does not allow me to import it to the ppc64le machine, since it’s read as x86_64.
Could you please provide the output of the following query from the database: select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br';
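If it helps, one way to run that on the engine machine (assuming the default local PostgreSQL setup and the database name 'engine'):

su - postgres -c "psql -d engine -c \"select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br';\""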
Sure, there you go:
46ad1d80-2649-48f5-92e6-e5489d11d30c | energy.versatushpc.com.br | VM | 1 | | d19456e4-0051-456e-b33c-57348a78c2e0 | <?xml version="1.0" encoding="UTF-8"?><ovf:Envelope xmlns:ovf=" http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd=" http://schemas.dmtf.org/wbem/wscim/1/cim -schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd=" http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingDa..." xmlns:xsi="http://ww w.w3.org/2001/XMLSchema-instance" ovf:version="4.1.0.0"><References><File ovf:href="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af " ovf:id="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="512" ovf:description="Active VM" ovf:disk_storage_type="IMAGE" ovf:cinder_volume_type=""></File></R eferences><NetworkSection><Info>List of networks</Info><Network ovf:name="legacyservers"></Network></NetworkSection><Section xsi:type="ovf:DiskSection_Type"> <Info>List of Virtual Disks</Info><Disk ovf:diskId="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="40" ovf:actual_size="1" ovf:vm_snapshot_id="6de58683-c586 -4e97-b0e8-ee7ee3baf754" ovf:parentRef="" ovf:fileRef="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:format="http://www.vmwa re.com/specifications/vmdk.html#sparse" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:disk-interface="VirtIO_SCSI" ovf:read-only="false" ovf:shareable ="false" ovf:boot="true" ovf:pass-discard="false" ovf:disk-alias="energy.versatushpc.com.br_Disk1" ovf:disk-description="" ovf:wipe-after-delete="false"></Di sk></Section><Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type"><Name> energy.versatushpc.com.br</Name><Description>Holds Kosen backend and frontend prod services (nginx + docker)</Description><Comment></Comment><CreationDate>2020/08/19 20:11:33</CreationDate><ExportDate>2020/08/20 18:37:41</ExportDate><Delet
eProtected>false</DeleteProtected><SsoMethod>guest_agent</SsoMethod><IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>1</NumOfIoThreads><TimeZone
Etc/GMT</TimeZone><default_boot_sequence>9</default_boot_sequence><Generation>8</Generation><ClusterCompatibilityVersion>4.3</ClusterCompatibilityVersion><V
mType>1</VmType><ResumeBehavior>AUTO_RESUME</ResumeBehavior><MinAllocatedMem>2730</MinAllocatedMem><IsStateless>false</IsStateless><IsRunAndPause>false</IsRu
nAndPause><AutoStartup>false</AutoStartup><Priority>1</Priority><CreatedByUserId>6ea16f22-45d7-11ea-bd83-00163e518b7c</CreatedByUserId><MigrationSupport>0</M
igrationSupport><IsBootMenuEnabled>false</IsBootMenuEnabled><IsSpiceFileTransferEnabled>true</IsSpiceFileTransferEnabled><IsSpiceCopyPasteEnabled>true</IsSpi
ceCopyPasteEnabled><AllowConsoleReconnect>true</AllowConsoleReconnect><ConsoleDisconnectAction>LOCK_SCREEN</ConsoleDisconnectAction><CustomEmulatedMachine></
CustomEmulatedMachine><BiosType>0</BiosType><CustomCpuName></CustomCpuName><PredefinedProperties></PredefinedProperties><UserDefinedProperties></UserDefinedP
roperties><MaxMemorySizeMb>16384</MaxMemorySizeMb><MultiQueuesEnabled>true</MultiQueuesEnabled><UseHostCpu>false</UseHostCpu><ClusterName>Blastoise</ClusterN
ame><TemplateId>00000000-0000-0000-0000-000000000000</TemplateId><TemplateName>Blank</TemplateName><IsInitilized>true</IsInitilized><Origin>0</Origin><quota_
id>32644894-755e-4588-b967-8fb9dc327795</quota_id><DefaultDisplayType>2</DefaultDisplayType><TrustedService>false</TrustedService><OriginalTemplateId>0000000
0-0000-0000-0000-000000000000</OriginalTemplateId><OriginalTemplateName>Blank</OriginalTemplateName><CpuPinning></CpuPinning><UseLatestVersion>false</UseLate stVersion><StopTime>2020/08/20 17:52:35</StopTime><Section ovf:id="46ad1d80-2649-48f5-92e6-e5489d11d30c" ovf:required="false" xsi:type="ovf:OperatingSystemSe ction_Type"><Info>Guest Operating System</Info><Description>other_linux_ppc64</Description></Section><Section xsi:type="ovf:VirtualHardwareSection_Type"><Inf o>2 CPU, 4096 Memory</Info><System><vssd:VirtualSystemType>ENGINE 4.1.0.0</vssd:VirtualSystemType></System><Item><rasd:Caption>2 virtual cpu</rasd:Caption><r asd:Description>Number of virtual CPU</rasd:Description><rasd:InstanceId>1</rasd:InstanceId><rasd:ResourceType>3</rasd:ResourceType><rasd:num_of_sockets>2</r
asd:num_of_sockets><rasd:cpu_per_socket>1</rasd:cpu_per_socket><rasd:threads_per_cpu>1</rasd:threads_per_cpu><rasd:max_num_of_vcpus>16</rasd:max_num_of_vcpus
<rasd:VirtualQuantity>2</rasd:VirtualQuantity></Item><Item><rasd:Caption>4096 MB of memory</rasd:Caption><rasd:Description>Memory Size</rasd:Description><ra
sd:InstanceId>2</rasd:InstanceId><rasd:ResourceType>4</rasd:ResourceType><rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits><rasd:VirtualQuantity>4096</ra
sd:VirtualQuantity></Item><Item><rasd:Caption>energy.versatushpc.com.br_Disk1</rasd:Caption><rasd:InstanceId>b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:Insta
nceId><rasd:ResourceType>17</rasd:ResourceType><rasd:HostResource>775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:HostResourc
e><rasd:Parent>00000000-0000-0000-0000-000000000000</rasd:Parent><rasd:Template>00000000-0000-0000-0000-000000000000</rasd:Template><rasd:ApplicationList></r
asd:ApplicationList><rasd:StorageId>d19456e4-0051-456e-b33c-57348a78c2e0</rasd:StorageId><rasd:StoragePoolId>6c54f91e-89bf-45b4-bc48-56e74c4efd5e</rasd:Stora gePoolId><rasd:CreationDate>2020/08/19 20:13:05</rasd:CreationDate><rasd:LastModified>1970/01/01 00:00:00</rasd:LastModified><rasd:last_modified_date>2020/08 /20 18:37:41</rasd:last_modified_date><Type>disk</Type><Device>disk</Device><rasd:Address>{type=drive, bus=0, controller=1, target=0, unit=0}</rasd:Address><
BootOrder>1</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-775b24a9-6a32-431a-831f-4ac9b3b31152</Alias></Item><Item><rasd:Capt ion>Ethernet adapter on legacyservers</rasd:Caption><rasd:InstanceId>e6e37ae1-f263-4986-a039-e8e01e72d1f4</rasd:InstanceId><rasd:ResourceType>10</rasd:Resour
ceType><rasd:OtherResourceType>legacyservers</rasd:OtherResourceType><rasd:ResourceSubType>3</rasd:ResourceSubType><rasd:Connection>legacyservers</rasd:Conne
ction><rasd:Linked>true</rasd:Linked><rasd:Name>nic1</rasd:Name><rasd:ElementName>nic1</rasd:ElementName><rasd:MACAddress>56:6f:f0:b3:00:23</rasd:MACAddress>
<rasd:speed>10000</rasd:speed><Type>interface</Type><Device>bridge</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><I sReadOnly>false</IsReadOnly><Alias>ua-e6e37ae1-f263-4986-a039-e8e01e72d1f4</Alias></Item><Item><rasd:Caption>USB Controller</rasd:Caption><rasd:InstanceId>3< /rasd:InstanceId><rasd:ResourceType>23</rasd:ResourceType><rasd:UsbPolicy>DISABLED</rasd:UsbPolicy></Item><Item><rasd:Caption>Graphical Controller</rasd:Capt
ion><rasd:InstanceId>1440c749-728e-4a86-afc1-8237c6055fa5</rasd:InstanceId><rasd:ResourceType>20</rasd:ResourceType><rasd:VirtualQuantity>1</rasd:VirtualQuan
tity><rasd:SinglePciQxl>false</rasd:SinglePciQxl><Type>video</Type><Device>vga</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</
IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-1440c749-728e-4a86-afc1-8237c6055fa5</Alias><SpecParams><vram>16384</vram></SpecParams></Item><Item><rasd:C aption>Graphical Framebuffer</rasd:Caption><rasd:InstanceId>603e7f0c-8d28-4c3e-bd90-c5685b752100</rasd:InstanceId><rasd:ResourceType>26</rasd:ResourceType><T
ype>graphics</Type><Device>vnc</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias><
/Alias></Item><Item><rasd:Caption>CDROM</rasd:Caption><rasd:InstanceId>3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</rasd:InstanceId><rasd:ResourceType>15</rasd:Reso urceType><Type>disk</Type><Device>cdrom</Device><rasd:Address>{type=drive, bus=0, controller=0, target=0, unit=2}</rasd:Address><BootOrder>2</BootOrder><IsPl
ugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</Alias><SpecParams><path>CentOS-8.1.1911-x86_64-boot.iso</p
ath></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>66f3a2b8-d2c5-4032-9f10-8742d65a0a3e</rasd:InstanceId><Type>controller
</Type><Device>scsi</Device><rasd:Address>{type=spapr-vio}</rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Al
ias></Alias><SpecParams><index>0</index></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>e065acb2-e7db-4f55-a1df-385f19299b
d0</rasd:InstanceId><Type>rng</Type><Device>virtio</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false<
/IsReadOnly><Alias>ua-e065acb2-e7db-4f55-a1df-385f19299bd0</Alias><SpecParams><source>urandom</source></SpecParams></Item><Item><rasd:ResourceType>0</rasd:Re
sourceType><rasd:InstanceId>7b4c4ef6-2a9a-4120-b838-3127db0fd703</rasd:InstanceId><Type>balloon</Type><Device>memballoon</Device><rasd:Address></rasd:Address
<BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-7b4c4ef6-2a9a-4120-b838-3127db0fd703</Alias><SpecParams><model>vir
tio</model></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>5aade6c7-8f77-4fea-a5de-66350b214935</rasd:InstanceId><Type>con
troller</Type><Device>virtio-scsi</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Ali
as></Alias><SpecParams><ioThreadId></ioThreadId></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>4d4d7bfd-b1e8-45c3-a5e8-7e
0b7773bbf2</rasd:InstanceId><Type>controller</Type><Device>virtio-serial</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlu
gged><IsReadOnly>false</IsReadOnly><Alias>58ca7b19-0071-00c0-01d6-000000000212</Alias></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>9
cea63da-7afd-41d4-925f-369f993b280f</rasd:InstanceId><Type>controller</Type><Device>usb</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugg ed>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index><model>nec-xhci</model></SpecParams></Item></Section><Section xs i:type="ovf:SnapshotsSection_Type"><Snapshot ovf:id="6de58683-c586-4e97-b0e8-ee7ee3baf754"><Type>ACTIVE</Type><Description>Active VM</Description><CreationDa te>2020/08/19 20:11:33</CreationDate></Snapshot></Section></Content></ovf:Envelope> | | 0
Thank you!
Thanks. So yeah - we may have an issue with that operating system: 'other_linux_ppc64' has the same name as 'other_linux' in our os-info configuration. As a possible workaround, assuming all those unregistered VMs, you can try to override the architecture with: update unregistered_ovf_of_entities set architecture = 2;
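Before changing anything, it may be worth checking what is currently recorded for all the unregistered VMs (column names as in the query above; 1 is the x86_64 value shown for this VM, 2 the ppc64 value used in the update):

select entity_name, architecture from unregistered_ovf_of_entities;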
Thanks, michal
Ideas?
On 26 Aug 2020, at 15:04, Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
What a strange thing is happening here:
[root@power ~]# file /usr/bin/vdsm-client
/usr/bin/vdsm-client: empty
[root@power ~]# ls -l /usr/bin/vdsm-client
-rwxr-xr-x. 1 root root 0 Jul 3 06:23 /usr/bin/vdsm-client
A lot of files are just empty, I’ve tried reinstalling vdsm-client, it worked, but there’s other zeroed files:
Transaction test succeeded. Running transaction Preparing :
1/1 Reinstalling : vdsm-client-4.40.22-1.el8ev.noarch
1/2 Cleanup : vdsm-client-4.40.22-1.el8ev.noarch
2/2 Running scriptlet: vdsm-client-4.40.22-1.el8ev.noarch
2/2 /sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked.
Verifying : vdsm-client-4.40.22-1.el8ev.noarch
1/2 Verifying : vdsm-client-4.40.22-1.el8ev.noarch
2/2 Installed products updated.
Reinstalled: vdsm-client-4.40.22-1.el8ev.noarch
I’ve never seen something like this.
I’ve already reinstalled the host from the ground up and the same thing happens.
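A quick way to gauge how widespread the zeroed files are, as a sketch (some packages legitimately ship empty files, so expect some noise):

# count empty files per owning package
find /usr /etc -xdev -type f -empty 2>/dev/null | xargs -r rpm -qf 2>/dev/null | sort | uniq -c | sort -rn | head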

On Thu, Aug 27, 2020 at 10:23 PM Arik Hadas <ahadas@redhat.com> wrote:
Thanks. So yeah - we may have an issue with that operating system: 'other_linux_ppc64' has the same name as 'other_linux' in our os-info configuration. As a possible workaround, assuming all those unregistered VMs, you can try to override the architecture with: update unregistered_ovf_of_entities set architecture = 2;
As a possible workaround, assuming all those unregistered VMs are from clusters with the same architecture, you can try to override the architecture with the update statement above.
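If the storage domain also holds x86_64 VMs, a narrower form of the same workaround is a safer sketch (the extra VM names are placeholders; 2 is the value used above for ppc64):

update unregistered_ovf_of_entities
   set architecture = 2
 where entity_name in ('energy.versatushpc.com.br',
                       '<second_ppc64le_vm>',
                       '<third_ppc64le_vm>');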

On 27 Aug 2020, at 16:26, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 10:23 PM Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 10:13 PM Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: On 27 Aug 2020, at 16:03, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Hi Michal, On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skrivanek@redhat.com<mailto:michal.skrivanek@redhat.com>> wrote: On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Okay here we go Arik. With your insight I’ve done the following: # rpm -Va This showed what’s zeroed on the machine, since it was a lot of things, I’ve just gone crazy and done: you should still have host deploy logs on the engine machine. it’s weird it succeeded, unless it somehow happened afterwards? It only succeeded my yum reinstall rampage. yum list installed | cut -f 1 -d " " > file yum -y reinstall `cat file | xargs` Reinstalled everything. Everything worked as expected and I finally added the machine back to the cluster. It’s operational. eh, I wouldn’t trust it much. did you run redeploy at least? I’ve done reinstall on the web interface of the engine. I can reinstall the host, there’s nothing running on it… gonna try a third format. Now I’ve another issue, I have 3 VM’s that are ppc64le, when trying to import them, the Hosted Engine identifies them as x86_64: <PastedGraphic-2.png> So… This appears to be a bug. Any ideia on how to force it back to ppc64? I can’t manually force the import on the Hosted Engine since there’s no buttons to do this… how exactly did you import them? could be a bug indeed. we don’t support changing it as it doesn’t make sense, the guest can’t be converted Yeah. I done the normal procedure, added the storage domain to the engine and clicked on “Import VM”. Immediately it was detected as x86_64. Since I wasn’t able to upgrade my environment from 4.3.10 to 4.4.1 due to random errors when redeploying the engine with the backup from 4.3.10, I just reinstalled it, reconfigured everything and them imported the storage domains. I don’t know where the information about architecture is stored in the storage domain, I tried to search for some metadata files inside the domain but nothing come up. Is there a way to force this change? It must be a way. I even tried to import the machine as x86_64. So I can delete the VM and just reattach the disks in a new only, effectively not losing the data, but… <PastedGraphic-1.png> Yeah, so something is broken. The check during the import appears to be OK, but the interface does not me allow to import it to the ppc64le machine, since it’s read as x86_64. Could you please provide the output of the following query from the database: select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br<http://energy.versatushpc.com.br/>'; Sure, there you go: 46ad1d80-2649-48f5-92e6-e5489d11d30c | energy.versatushpc.com.br<http://energy.versatushpc.com.br/> | VM | 1 | | d19456e4-0051-456e-b33c-57348a78c2e0 | <?xml version="1.0" encoding="UTF-8"?><ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim -schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingDa..." 
xmlns:xsi="http://ww<http://ww/> w.w3.org/2001/XMLSchema-instance<http://w.w3.org/2001/XMLSchema-instance>" ovf:version="4.1.0.0"><References><File ovf:href="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af " ovf:id="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="512" ovf:description="Active VM" ovf:disk_storage_type="IMAGE" ovf:cinder_volume_type=""></File></R eferences><NetworkSection><Info>List of networks</Info><Network ovf:name="legacyservers"></Network></NetworkSection><Section xsi:type="ovf:DiskSection_Type"> <Info>List of Virtual Disks</Info><Disk ovf:diskId="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="40" ovf:actual_size="1" ovf:vm_snapshot_id="6de58683-c586 -4e97-b0e8-ee7ee3baf754" ovf:parentRef="" ovf:fileRef="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:format="http://www.vmwa<http://www.vmwa/> re.com/specifications/vmdk.html#sparse<http://re.com/specifications/vmdk.html#sparse>" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:disk-interface="VirtIO_SCSI" ovf:read-only="false" ovf:shareable ="false" ovf:boot="true" ovf:pass-discard="false" ovf:disk-alias="energy.versatushpc.com.br_Disk1" ovf:disk-description="" ovf:wipe-after-delete="false"></Di sk></Section><Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type"><Name>energy.versatushpc.com.br<http://energy.versatushpc.com.br/></Name><Description>Holds Kosen backend and frontend prod services (nginx + docker)</Description><Comment></Comment><CreationDate>2020/08/19 20:11:33</CreationDate><ExportDate>2020/08/20 18:37:41</ExportDate><Delet eProtected>false</DeleteProtected><SsoMethod>guest_agent</SsoMethod><IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>1</NumOfIoThreads><TimeZone sourceType><rasd:InstanceId>7b4c4ef6-2a9a-4120-b838-3127db0fd703</rasd:InstanceId><Type>balloon</Type><Device>memballoon</Device><rasd:Address></rasd:Address troller</Type><Device>virtio-scsi</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Ali as></Alias><SpecParams><ioThreadId></ioThreadId></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>4d4d7bfd-b1e8-45c3-a5e8-7e 0b7773bbf2</rasd:InstanceId><Type>controller</Type><Device>virtio-serial</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlu gged><IsReadOnly>false</IsReadOnly><Alias>58ca7b19-0071-00c0-01d6-000000000212</Alias></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>9 cea63da-7afd-41d4-925f-369f993b280f</rasd:InstanceId><Type>controller</Type><Device>usb</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugg ed>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index><model>nec-xhci</model></SpecParams></Item></Section><Section xs i:type="ovf:SnapshotsSection_Type"><Snapshot ovf:id="6de58683-c586-4e97-b0e8-ee7ee3baf754"><Type>ACTIVE</Type><Description>Active VM</Description><CreationDa te>2020/08/19 20:11:33</CreationDate></Snapshot></Section></Content></ovf:Envelope> | | 0 Thank you! 
thanks so yeah - we may have an issue with that operating system 'other_linux_ppc64' that has the same name as 'other_linux' in our os-info configuration as a possible workaround, assuming all those unregistered VMs you can try to override the architecture with: update unregistered_ovf_of_entities set architecture = 2; as a possible workaround, assuming all those unregistered VMs are from clusters with the same architecture, you can try to override the architecture with: * Wooha!!! engine=# update unregistered_ovf_of_entities set architecture = 2; UPDATE 8 [cid:8DFCFE09-C438-4FFB-8BBD-E532FDCB45AC] [cid:902213C2-54AE-4813-BB66-207800C39510] Worked and the VMs are now imported. But… hahaha. I have another issues, any of the three VM’s starts now. Perhaps I’ll reinstall the host for the third time as recommended by Michal, anyway here are the logs that I was able to fetch during the failed power on process: ON THE ENGINE: ==> /var/log/ovirt-engine/engine.log <== 2020-08-27 16:35:59,437-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5e701801 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5e701801 2020-08-27 16:35:59,500-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Running command: RunVmCommand internal: false. 
Entities affected : ID: ccccd416-c6b4-4c95-8372-417480be5365 Type: VMAction group RUN_VM with role type USER 2020-08-27 16:35:59,506-03 INFO [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Emulated machine 'pseries-rhel8.2.0' which is different than that of the cluster is set for 'jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br>'(ccccd416-c6b4-4c95-8372-417480be5365) 2020-08-27 16:35:59,528-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@14322872'}), log id: 7709ba81 2020-08-27 16:35:59,530-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 7709ba81 2020-08-27 16:35:59,533-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br>]'}), log id: 4a0db679 2020-08-27 16:35:59,534-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateBrokerVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br>, CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br>]'}), log id: 25bc7e6e 2020-08-27 16:35:59,548-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0"> <name>jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br></name> <uuid>ccccd416-c6b4-4c95-8372-417480be5365</uuid> <memory>536870912</memory> <currentMemory>536870912</currentMemory> <vcpu current="128">384</vcpu> <clock offset="variable" adjustment="0"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> </clock> <cpu mode="host-model"> <model>power9</model> <topology cores="16" threads="4" sockets="6"/> <numa> <cell id="0" cpus="0-383" memory="536870912"/> </numa> </cpu> <cputune/> <qemu:capabilities> <qemu:add capability="blockdev"/> <qemu:add capability="incremental-backup"/> </qemu:capabilities> <devices> <input type="tablet" bus="usb"/> <channel type="unix"> <target type="virtio" name="ovirt-guest-agent.0"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target type="virtio" name="org.qemu.guest_agent.0"/> <source mode="bind" 
path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.org.qemu.guest_agent.0"/> </channel> <emulator text="/usr/bin/qemu-system-ppc64"/> <controller type="scsi" model="ibmvscsi" index="0"/> <rng model="virtio"> <backend model="random">/dev/urandom</backend> <alias name="ua-1e18aea0-076a-40d0-9b85-21ac6049a94d"/> </rng> <controller type="usb" model="nec-xhci" index="0"> <alias name="ua-47e67d9f-a191-4dc0-9c09-b2db9f1d373e"/> </controller> <controller type="virtio-serial" index="0" ports="16"> <alias name="ua-4d92fb2f-aaf6-465c-8571-e49e1d12191d"/> </controller> <watchdog model="i6300esb" action="none"> <alias name="ua-7b756cc3-c9ec-4b79-84ef-d6ad15021f1a"/> </watchdog> <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"> <listen type="network" network="vdsm-ovirtmgmt"/> </graphics> <controller type="scsi" model="virtio-scsi" index="1"> <alias name="ua-8e146e76-e038-4f8a-a526-e7e1c626f54e"/> </controller> <memballoon model="virtio"> <stats period="5"/> <alias name="ua-d8d37c06-de66-4912-bf8d-fc1017c85c68"/> </memballoon> <video> <model type="vga" vram="16384" heads="1"/> <alias name="ua-e96e6050-b1aa-4664-a856-8df923e3dc66"/> </video> <controller type="scsi" index="0"> <address type="spapr-vio"/> </controller> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="servers"/> <driver queues="4" name="vhost"/> <alias name="ua-152c3f8a-69d2-420f-8b6a-c1fb4a11594f"/> <mac address="56:6f:1a:f4:00:03"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="nfs"/> <driver queues="4" name="vhost"/> <alias name="ua-1369da6c-4f9b-4fe3-9f45-7b37ecb34ac2"/> <mac address="56:6f:1a:f4:00:04"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <disk type="file" device="cdrom" snapshot="no"> <driver name="qemu" type="raw" error_policy="report"/> <source file="" startupPolicy="optional"> <seclabel model="dac" type="none" relabel="no"/> </source> <target dev="sdc" bus="scsi"/> <readonly/> <alias name="ua-2d6db7ca-2fe1-4af4-9741-7b5332805d94"/> <address bus="0" controller="0" unit="2" type="drive" target="0"/> </disk> <disk snapshot="no" type="file" device="disk"> <target dev="sda" bus="scsi"/> <source file="/rhev/data-center/804e857c-461d-4642-86c4-7ff4a5e7da47/d19456e4-0051-456e-b33c-57348a78c2e0/images/8100a756-92a7-4160-9a31-5a843810cb61/0183b177-71b5-4c0e-b7d3-becc5da152ce"> <seclabel model="dac" type="none" relabel="no"/> </source> <driver name="qemu" io="threads" type="raw" error_policy="stop" cache="none"/> <alias name="ua-8100a756-92a7-4160-9a31-5a843810cb61"/> <address bus="0" controller="1" unit="0" type="drive" target="0"/> <boot order="1"/> <serial>8100a756-92a7-4160-9a31-5a843810cb61</serial> </disk> <lease> <key>ccccd416-c6b4-4c95-8372-417480be5365</key> <lockspace>d19456e4-0051-456e-b33c-57348a78c2e0</lockspace> <target offset="24117248" path="/rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm/d19456e4-0051-456e-b33c-57348a78c2e0/dom_md/xleases"/> </lease> </devices> <os> <type arch="ppc64" machine="pseries-rhel8.2.0">hvm</type> </os> <metadata> <ovirt-tune:qos/> <ovirt-vm:vm> <ovirt-vm:minGuaranteedMemoryMb type="int">524288</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="56:6f:1a:f4:00:04"> <ovirt-vm:custom/> </ovirt-vm:device> 
<ovirt-vm:device mac_address="56:6f:1a:f4:00:03"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sda"> <ovirt-vm:poolID>804e857c-461d-4642-86c4-7ff4a5e7da47</ovirt-vm:poolID> <ovirt-vm:volumeID>0183b177-71b5-4c0e-b7d3-becc5da152ce</ovirt-vm:volumeID> <ovirt-vm:imageID>8100a756-92a7-4160-9a31-5a843810cb61</ovirt-vm:imageID> <ovirt-vm:domainID>d19456e4-0051-456e-b33c-57348a78c2e0</ovirt-vm:domainID> </ovirt-vm:device> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior> </ovirt-vm:vm> </metadata> </domain> 2020-08-27 16:35:59,566-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateBrokerVDSCommand, return: , log id: 25bc7e6e 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: 4a0db679 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,576-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] EVENT_ID: USER_STARTED_VM(153), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br> was started by admin@internal-authz (Host: rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br>). 2020-08-27 16:36:01,803-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365' was reported as Down on VDS '394e0e68-60f5-42b3-aec4-5d8368efedd1'(rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br>) 2020-08-27 16:36:01,804-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] START, DestroyVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br>, DestroyVmVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] FINISH, DestroyVDSCommand, return: , log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br>) moved from 'WaitForLaunch' --> 'Down' 2020-08-27 16:36:02,024-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-13) [] EVENT_ID: VM_DOWN_ERROR(119), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br> is down with error. 
Exit message: Hook Error: (b'Traceback (most recent call last):\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 124, in <module>\n main(VhostmdConf())\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 47, in __init__\n dom = minidom.parse(path)\n File "/usr/lib64/python3.6/xml/dom/minidom.py", line 1958, in parse\n return expatbuilder.parse(file)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 911, in parse\n result = builder.parseFile(fp)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 211, in parseFile\n parser.Parse("", True)\nxml.parsers.expat.ExpatError: no element found: line 1, column 0\n',). 2020-08-27 16:36:02,025-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] add VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br>) to rerun treatment 2020-08-27 16:36:02,029-03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-13) [] Rerun VM 'ccccd416-c6b4-4c95-8372-417480be5365'. Called from VDS 'rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br>' 2020-08-27 16:36:02,041-03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br> on Host rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br>. 2020-08-27 16:36:02,066-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5480ad0b 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5480ad0b 2020-08-27 16:36:02,093-03 WARN [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Validation of action 'RunVm' failed for user admin@internal-authz. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS 2020-08-27 16:36:02,093-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,101-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br> (User: admin@internal-authz). 2020-08-27 16:36:02,105-03 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (EE-ManagedThreadFactory-engine-Thread-145180) [71c52499] Running command: ProcessDownVmCommand internal: true. 
ON THE HOST: /var/log/messages Aug 27 16:36:01 rhvpower python3[73682]: detected unhandled Python exception in '/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd' Aug 27 16:36:01 rhvpower abrt-server[73684]: Deleting problem directory Python3-2020-08-27-16:36:01-73682 (dup of Python3-2020-08-27-16:33:11-73428) Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Activating service name='org.freedesktop.problems' requested by ':1.183' (uid=0 pid=73691 comm="/usr/libexec/platform-python /usr/bin/abrt-action-" label="system_u:system_r:abrt_t:s0-s0:c0.c1023") (using servicehelper) Aug 27 16:36:01 rhvpower dbus-daemon[73694]: [system] Failed to reset fd limit before activating service: org.freedesktop.DBus.Error.AccessDenied: Failed to restore old fd limit: Operation not permitted Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Successfully activated service 'org.freedesktop.problems' Aug 27 16:36:02 rhvpower abrt-server[73684]: /bin/sh: reporter-systemd-journal: command not found Regarding the import problem. Is that really a bug right? I can describe it on Red Hat Bugzilla if I need to. It’s the minimal that I can do for the help. Is it ok? Thanks, Thanks, michal Ideias? On 26 Aug 2020, at 15:04, Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: What a strange thing is happening here: [root@power ~]# file /usr/bin/vdsm-client /usr/bin/vdsm-client: empty [root@power ~]# ls -l /usr/bin/vdsm-client -rwxr-xr-x. 1 root root 0 Jul 3 06:23 /usr/bin/vdsm-client A lot of files are just empty, I’ve tried reinstalling vdsm-client, it worked, but there’s other zeroed files: Transaction test succeeded. Running transaction Preparing : 1/1 Reinstalling : vdsm-client-4.40.22-1.el8ev.noarch 1/2 Cleanup : vdsm-client-4.40.22-1.el8ev.noarch 2/2 Running scriptlet: vdsm-client-4.40.22-1.el8ev.noarch 2/2 /sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked. 
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked.
Verifying : vdsm-client-4.40.22-1.el8ev.noarch 1/2
Verifying : vdsm-client-4.40.22-1.el8ev.noarch 2/2
Installed products updated.
Reinstalled: vdsm-client-4.40.22-1.el8ev.noarch
I've never seen something like this. I've already reinstalled the host from the ground and the same thing happens.

On Thu, Aug 27, 2020 at 10:39 PM Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
On 27 Aug 2020, at 16:26, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:23 PM Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:13 PM Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
On 27 Aug 2020, at 16:03, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hi Michal,
On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org> wrote:
Okay here we go Arik.
With your insight I’ve done the following:
# rpm -Va
This showed what was zeroed on the machine; since it was a lot of things, I just went crazy and did:
you should still have host deploy logs on the engine machine. it’s weird it succeeded, unless it somehow happened afterwards?
It only succeeded after my yum reinstall rampage.
yum list installed | cut -f 1 -d " " > file
yum -y reinstall `cat file | xargs`
Reinstalled everything.
Everything worked as expected and I finally added the machine back to the cluster. It’s operational.
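For anyone hitting the same zeroed-files problem later: a narrower recovery than reinstalling everything should also be possible. This is only an untested sketch; it maps the files that rpm -Va flags back to their owning packages and reinstalls just those (the awk/rpm -qf mapping and the temp file name are assumptions, not something taken from this setup):

# list files that fail verification, map them to owning packages, reinstall only those
rpm -Va | awk '{print $NF}' | xargs -r rpm -qf --qf '%{NAME}\n' 2>/dev/null | sort -u > /tmp/broken-pkgs
dnf -y reinstall $(cat /tmp/broken-pkgs)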
eh, I wouldn’t trust it much. did you run redeploy at least?
I've done a reinstall from the web interface of the engine. I can reinstall the host, there's nothing running on it… gonna try a third format.
Now I have another issue: I have 3 VMs that are ppc64le, and when trying to import them, the Hosted Engine identifies them as x86_64:
<PastedGraphic-2.png>
So…
This appears to be a bug. Any idea on how to force it back to ppc64? I can't manually force the import on the Hosted Engine since there are no buttons to do this…
how exactly did you import them? could be a bug indeed. we don’t support changing it as it doesn’t make sense, the guest can’t be converted
Yeah. I did the normal procedure: added the storage domain to the engine and clicked on "Import VM". Immediately it was detected as x86_64.
Since I wasn't able to upgrade my environment from 4.3.10 to 4.4.1 due to random errors when redeploying the engine with the backup from 4.3.10, I just reinstalled it, reconfigured everything and then imported the storage domains.
I don't know where the information about the architecture is stored in the storage domain; I tried to search for some metadata files inside the domain but nothing came up. Is there a way to force this change? There must be a way.
I even tried to import the machine as x86_64, so I could delete the VM and just reattach the disks to a new one, effectively not losing the data, but…
<PastedGraphic-1.png>
Yeah, so something is broken. The check during the import appears to be OK, but the interface does not allow me to import it to the ppc64le machine, since it's read as x86_64.
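Side note on where that information lives: the engine keeps the VM definitions, including the OS type from which the architecture is derived, in the OVF_STORE disks of the storage domain; each OVF_STORE volume is a tar archive of <vm_id>.ovf files. On a file-based domain you can peek at one roughly like this, with every path component below being a placeholder rather than a value from this setup (the OVF_STORE disk IDs are visible under the storage domain's Disks tab):

tar tvf /rhev/data-center/mnt/<server>:_<export>/<sd_uuid>/images/<ovf_store_disk_id>/<volume_id>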
Could you please provide the output of the following query from the database:
select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br';
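One way to run that on the engine host (assuming a default setup where the engine database is the local PostgreSQL database named 'engine') is:

su - postgres -c "psql -d engine -c \"select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br';\""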
Sure, there you go:
46ad1d80-2649-48f5-92e6-e5489d11d30c | energy.versatushpc.com.br | VM | 1 | | d19456e4-0051-456e-b33c-57348a78c2e0 | <?xml version="1.0" encoding="UTF-8"?><ovf:Envelope xmlns:ovf=" http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd=" http://schemas.dmtf.org/wbem/wscim/1/cim -schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd=" http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingDa..." xmlns:xsi="http://ww w.w3.org/2001/XMLSchema-instance" ovf:version="4.1.0.0"><References><File ovf:href="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af " ovf:id="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="512" ovf:description="Active VM" ovf:disk_storage_type="IMAGE" ovf:cinder_volume_type=""></File></R eferences><NetworkSection><Info>List of networks</Info><Network ovf:name="legacyservers"></Network></NetworkSection><Section xsi:type="ovf:DiskSection_Type"> <Info>List of Virtual Disks</Info><Disk ovf:diskId="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="40" ovf:actual_size="1" ovf:vm_snapshot_id="6de58683-c586 -4e97-b0e8-ee7ee3baf754" ovf:parentRef="" ovf:fileRef="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:format="http://www.vmwa re.com/specifications/vmdk.html#sparse" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:disk-interface="VirtIO_SCSI" ovf:read-only="false" ovf:shareable ="false" ovf:boot="true" ovf:pass-discard="false" ovf:disk-alias="energy.versatushpc.com.br_Disk1" ovf:disk-description="" ovf:wipe-after-delete="false"></Di sk></Section><Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type"><Name>energy.versatushpc.com.br</Name><Description>Holds Kosen backend and frontend prod services (nginx + docker)</Description><Comment></Comment><CreationDate>2020/08/19 20:11:33</CreationDate><ExportDate>2020/08/20 18:37:41</ExportDate><Delet
eProtected>false</DeleteProtected><SsoMethod>guest_agent</SsoMethod><IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>1</NumOfIoThreads><TimeZone
Etc/GMT</TimeZone><default_boot_sequence>9</default_boot_sequence><Generation>8</Generation><ClusterCompatibilityVersion>4.3</ClusterCompatibilityVersion><V
mType>1</VmType><ResumeBehavior>AUTO_RESUME</ResumeBehavior><MinAllocatedMem>2730</MinAllocatedMem><IsStateless>false</IsStateless><IsRunAndPause>false</IsRu
nAndPause><AutoStartup>false</AutoStartup><Priority>1</Priority><CreatedByUserId>6ea16f22-45d7-11ea-bd83-00163e518b7c</CreatedByUserId><MigrationSupport>0</M
igrationSupport><IsBootMenuEnabled>false</IsBootMenuEnabled><IsSpiceFileTransferEnabled>true</IsSpiceFileTransferEnabled><IsSpiceCopyPasteEnabled>true</IsSpi
ceCopyPasteEnabled><AllowConsoleReconnect>true</AllowConsoleReconnect><ConsoleDisconnectAction>LOCK_SCREEN</ConsoleDisconnectAction><CustomEmulatedMachine></
CustomEmulatedMachine><BiosType>0</BiosType><CustomCpuName></CustomCpuName><PredefinedProperties></PredefinedProperties><UserDefinedProperties></UserDefinedP
roperties><MaxMemorySizeMb>16384</MaxMemorySizeMb><MultiQueuesEnabled>true</MultiQueuesEnabled><UseHostCpu>false</UseHostCpu><ClusterName>Blastoise</ClusterN
ame><TemplateId>00000000-0000-0000-0000-000000000000</TemplateId><TemplateName>Blank</TemplateName><IsInitilized>true</IsInitilized><Origin>0</Origin><quota_
id>32644894-755e-4588-b967-8fb9dc327795</quota_id><DefaultDisplayType>2</DefaultDisplayType><TrustedService>false</TrustedService><OriginalTemplateId>0000000
0-0000-0000-0000-000000000000</OriginalTemplateId><OriginalTemplateName>Blank</OriginalTemplateName><CpuPinning></CpuPinning><UseLatestVersion>false</UseLate stVersion><StopTime>2020/08/20 17:52:35</StopTime><Section ovf:id="46ad1d80-2649-48f5-92e6-e5489d11d30c" ovf:required="false" xsi:type="ovf:OperatingSystemSe ction_Type"><Info>Guest Operating System</Info><Description>other_linux_ppc64</Description></Section><Section xsi:type="ovf:VirtualHardwareSection_Type"><Inf o>2 CPU, 4096 Memory</Info><System><vssd:VirtualSystemType>ENGINE 4.1.0.0</vssd:VirtualSystemType></System><Item><rasd:Caption>2 virtual cpu</rasd:Caption><r asd:Description>Number of virtual CPU</rasd:Description><rasd:InstanceId>1</rasd:InstanceId><rasd:ResourceType>3</rasd:ResourceType><rasd:num_of_sockets>2</r
asd:num_of_sockets><rasd:cpu_per_socket>1</rasd:cpu_per_socket><rasd:threads_per_cpu>1</rasd:threads_per_cpu><rasd:max_num_of_vcpus>16</rasd:max_num_of_vcpus
<rasd:VirtualQuantity>2</rasd:VirtualQuantity></Item><Item><rasd:Caption>4096 MB of memory</rasd:Caption><rasd:Description>Memory Size</rasd:Description><ra
sd:InstanceId>2</rasd:InstanceId><rasd:ResourceType>4</rasd:ResourceType><rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits><rasd:VirtualQuantity>4096</ra
sd:VirtualQuantity></Item><Item><rasd:Caption>energy.versatushpc.com.br_Disk1</rasd:Caption><rasd:InstanceId>b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:Insta
nceId><rasd:ResourceType>17</rasd:ResourceType><rasd:HostResource>775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:HostResourc
e><rasd:Parent>00000000-0000-0000-0000-000000000000</rasd:Parent><rasd:Template>00000000-0000-0000-0000-000000000000</rasd:Template><rasd:ApplicationList></r
asd:ApplicationList><rasd:StorageId>d19456e4-0051-456e-b33c-57348a78c2e0</rasd:StorageId><rasd:StoragePoolId>6c54f91e-89bf-45b4-bc48-56e74c4efd5e</rasd:Stora gePoolId><rasd:CreationDate>2020/08/19 20:13:05</rasd:CreationDate><rasd:LastModified>1970/01/01 00:00:00</rasd:LastModified><rasd:last_modified_date>2020/08 /20 18:37:41</rasd:last_modified_date><Type>disk</Type><Device>disk</Device><rasd:Address>{type=drive, bus=0, controller=1, target=0, unit=0}</rasd:Address><
BootOrder>1</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-775b24a9-6a32-431a-831f-4ac9b3b31152</Alias></Item><Item><rasd:Capt ion>Ethernet adapter on legacyservers</rasd:Caption><rasd:InstanceId>e6e37ae1-f263-4986-a039-e8e01e72d1f4</rasd:InstanceId><rasd:ResourceType>10</rasd:Resour
ceType><rasd:OtherResourceType>legacyservers</rasd:OtherResourceType><rasd:ResourceSubType>3</rasd:ResourceSubType><rasd:Connection>legacyservers</rasd:Conne
ction><rasd:Linked>true</rasd:Linked><rasd:Name>nic1</rasd:Name><rasd:ElementName>nic1</rasd:ElementName><rasd:MACAddress>56:6f:f0:b3:00:23</rasd:MACAddress>
<rasd:speed>10000</rasd:speed><Type>interface</Type><Device>bridge</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><I sReadOnly>false</IsReadOnly><Alias>ua-e6e37ae1-f263-4986-a039-e8e01e72d1f4</Alias></Item><Item><rasd:Caption>USB Controller</rasd:Caption><rasd:InstanceId>3< /rasd:InstanceId><rasd:ResourceType>23</rasd:ResourceType><rasd:UsbPolicy>DISABLED</rasd:UsbPolicy></Item><Item><rasd:Caption>Graphical Controller</rasd:Capt
ion><rasd:InstanceId>1440c749-728e-4a86-afc1-8237c6055fa5</rasd:InstanceId><rasd:ResourceType>20</rasd:ResourceType><rasd:VirtualQuantity>1</rasd:VirtualQuan
tity><rasd:SinglePciQxl>false</rasd:SinglePciQxl><Type>video</Type><Device>vga</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</
IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-1440c749-728e-4a86-afc1-8237c6055fa5</Alias><SpecParams><vram>16384</vram></SpecParams></Item><Item><rasd:C aption>Graphical Framebuffer</rasd:Caption><rasd:InstanceId>603e7f0c-8d28-4c3e-bd90-c5685b752100</rasd:InstanceId><rasd:ResourceType>26</rasd:ResourceType><T
ype>graphics</Type><Device>vnc</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias><
/Alias></Item><Item><rasd:Caption>CDROM</rasd:Caption><rasd:InstanceId>3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</rasd:InstanceId><rasd:ResourceType>15</rasd:Reso urceType><Type>disk</Type><Device>cdrom</Device><rasd:Address>{type=drive, bus=0, controller=0, target=0, unit=2}</rasd:Address><BootOrder>2</BootOrder><IsPl
ugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</Alias><SpecParams><path>CentOS-8.1.1911-x86_64-boot.iso</p
ath></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>66f3a2b8-d2c5-4032-9f10-8742d65a0a3e</rasd:InstanceId><Type>controller
</Type><Device>scsi</Device><rasd:Address>{type=spapr-vio}</rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Al
ias></Alias><SpecParams><index>0</index></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>e065acb2-e7db-4f55-a1df-385f19299b
d0</rasd:InstanceId><Type>rng</Type><Device>virtio</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false<
/IsReadOnly><Alias>ua-e065acb2-e7db-4f55-a1df-385f19299bd0</Alias><SpecParams><source>urandom</source></SpecParams></Item><Item><rasd:ResourceType>0</rasd:Re
sourceType><rasd:InstanceId>7b4c4ef6-2a9a-4120-b838-3127db0fd703</rasd:InstanceId><Type>balloon</Type><Device>memballoon</Device><rasd:Address></rasd:Address
<BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-7b4c4ef6-2a9a-4120-b838-3127db0fd703</Alias><SpecParams><model>vir
tio</model></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>5aade6c7-8f77-4fea-a5de-66350b214935</rasd:InstanceId><Type>con
troller</Type><Device>virtio-scsi</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Ali
as></Alias><SpecParams><ioThreadId></ioThreadId></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>4d4d7bfd-b1e8-45c3-a5e8-7e
0b7773bbf2</rasd:InstanceId><Type>controller</Type><Device>virtio-serial</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlu
gged><IsReadOnly>false</IsReadOnly><Alias>58ca7b19-0071-00c0-01d6-000000000212</Alias></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>9
cea63da-7afd-41d4-925f-369f993b280f</rasd:InstanceId><Type>controller</Type><Device>usb</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugg ed>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index><model>nec-xhci</model></SpecParams></Item></Section><Section xs i:type="ovf:SnapshotsSection_Type"><Snapshot ovf:id="6de58683-c586-4e97-b0e8-ee7ee3baf754"><Type>ACTIVE</Type><Description>Active VM</Description><CreationDa te>2020/08/19 20:11:33</CreationDate></Snapshot></Section></Content></ovf:Envelope> | | 0
Thank you!
thanks

so yeah - we may have an issue with that operating system 'other_linux_ppc64' that has the same name as 'other_linux' in our os-info configuration

as a possible workaround, assuming all those unregistered VMs are from clusters with the same architecture, you can try to override the architecture with:

update unregistered_ovf_of_entities set architecture = 2;
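To see the colliding entries on your own engine, the OS definitions ship in osinfo-defaults.properties (path from a default engine install; local overrides, if any, live under /etc/ovirt-engine/osinfo.conf.d/):

grep -n "other_linux" /usr/share/ovirt-engine/conf/osinfo-defaults.properties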
Wooha!!!
engine=# update unregistered_ovf_of_entities set architecture = 2;
UPDATE 8
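For reference, the 2 written here is the engine's ppc64 architecture value (the 1 visible in the query output above is x86_64, which is why the VMs were listed wrong). If only some of the unregistered rows belonged to ppc64 clusters, a narrower variant of the same update, scoped by name, would be:

engine=# update unregistered_ovf_of_entities set architecture = 2 where entity_name = 'energy.versatushpc.com.br';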
Worked and the VMs are now imported.
But… hahaha.
I have another issue: none of the three VMs starts now. Perhaps I'll reinstall the host for the third time as recommended by Michal; anyway, here are the logs that I was able to fetch during the failed power-on process:
ON THE ENGINE:
==> /var/log/ovirt-engine/engine.log <== 2020-08-27 16:35:59,437-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5e701801 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5e701801 2020-08-27 16:35:59,500-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Running command: RunVmCommand internal: false. Entities affected : ID: ccccd416-c6b4-4c95-8372-417480be5365 Type: VMAction group RUN_VM with role type USER 2020-08-27 16:35:59,506-03 INFO [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Emulated machine 'pseries-rhel8.2.0' which is different than that of the cluster is set for ' jupyter.nix.versatushpc.com.br'(ccccd416-c6b4-4c95-8372-417480be5365) 2020-08-27 16:35:59,528-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@14322872'}), log id: 7709ba81 2020-08-27 16:35:59,530-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 7709ba81 2020-08-27 16:35:59,533-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [ jupyter.nix.versatushpc.com.br]'}), log id: 4a0db679 2020-08-27 16:35:59,534-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateBrokerVDSCommand(HostName = rhvpower.local.versatushpc.com.br, CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [ jupyter.nix.versatushpc.com.br]'}), log id: 25bc7e6e 2020-08-27 16:35:59,548-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune=" http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0"> <name>jupyter.nix.versatushpc.com.br</name> <uuid>ccccd416-c6b4-4c95-8372-417480be5365</uuid> <memory>536870912</memory> <currentMemory>536870912</currentMemory> <vcpu current="128">384</vcpu> <clock 
offset="variable" adjustment="0"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> </clock> <cpu mode="host-model"> <model>power9</model> <topology cores="16" threads="4" sockets="6"/> <numa> <cell id="0" cpus="0-383" memory="536870912"/> </numa> </cpu> <cputune/> <qemu:capabilities> <qemu:add capability="blockdev"/> <qemu:add capability="incremental-backup"/> </qemu:capabilities> <devices> <input type="tablet" bus="usb"/> <channel type="unix"> <target type="virtio" name="ovirt-guest-agent.0"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target type="virtio" name="org.qemu.guest_agent.0"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.org.qemu.guest_agent.0"/> </channel> <emulator text="/usr/bin/qemu-system-ppc64"/> <controller type="scsi" model="ibmvscsi" index="0"/> <rng model="virtio"> <backend model="random">/dev/urandom</backend> <alias name="ua-1e18aea0-076a-40d0-9b85-21ac6049a94d"/> </rng> <controller type="usb" model="nec-xhci" index="0"> <alias name="ua-47e67d9f-a191-4dc0-9c09-b2db9f1d373e"/> </controller> <controller type="virtio-serial" index="0" ports="16"> <alias name="ua-4d92fb2f-aaf6-465c-8571-e49e1d12191d"/> </controller> <watchdog model="i6300esb" action="none"> <alias name="ua-7b756cc3-c9ec-4b79-84ef-d6ad15021f1a"/> </watchdog> <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"> <listen type="network" network="vdsm-ovirtmgmt"/> </graphics> <controller type="scsi" model="virtio-scsi" index="1"> <alias name="ua-8e146e76-e038-4f8a-a526-e7e1c626f54e"/> </controller> <memballoon model="virtio"> <stats period="5"/> <alias name="ua-d8d37c06-de66-4912-bf8d-fc1017c85c68"/> </memballoon> <video> <model type="vga" vram="16384" heads="1"/> <alias name="ua-e96e6050-b1aa-4664-a856-8df923e3dc66"/> </video> <controller type="scsi" index="0"> <address type="spapr-vio"/> </controller> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="servers"/> <driver queues="4" name="vhost"/> <alias name="ua-152c3f8a-69d2-420f-8b6a-c1fb4a11594f"/> <mac address="56:6f:1a:f4:00:03"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="nfs"/> <driver queues="4" name="vhost"/> <alias name="ua-1369da6c-4f9b-4fe3-9f45-7b37ecb34ac2"/> <mac address="56:6f:1a:f4:00:04"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <disk type="file" device="cdrom" snapshot="no"> <driver name="qemu" type="raw" error_policy="report"/> <source file="" startupPolicy="optional"> <seclabel model="dac" type="none" relabel="no"/> </source> <target dev="sdc" bus="scsi"/> <readonly/> <alias name="ua-2d6db7ca-2fe1-4af4-9741-7b5332805d94"/> <address bus="0" controller="0" unit="2" type="drive" target="0"/> </disk> <disk snapshot="no" type="file" device="disk"> <target dev="sda" bus="scsi"/> <source file="/rhev/data-center/804e857c-461d-4642-86c4-7ff4a5e7da47/d19456e4-0051-456e-b33c-57348a78c2e0/images/8100a756-92a7-4160-9a31-5a843810cb61/0183b177-71b5-4c0e-b7d3-becc5da152ce"> <seclabel model="dac" type="none" relabel="no"/> </source> <driver name="qemu" io="threads" type="raw" error_policy="stop" cache="none"/> <alias name="ua-8100a756-92a7-4160-9a31-5a843810cb61"/> <address bus="0" controller="1" unit="0" 
type="drive" target="0"/> <boot order="1"/> <serial>8100a756-92a7-4160-9a31-5a843810cb61</serial> </disk> <lease> <key>ccccd416-c6b4-4c95-8372-417480be5365</key> <lockspace>d19456e4-0051-456e-b33c-57348a78c2e0</lockspace> <target offset="24117248" path="/rhev/data-center/mnt/192.168.10.14: _mnt_pool0_ovirt_vm/d19456e4-0051-456e-b33c-57348a78c2e0/dom_md/xleases"/> </lease> </devices> <os> <type arch="ppc64" machine="pseries-rhel8.2.0">hvm</type> </os> <metadata> <ovirt-tune:qos/> <ovirt-vm:vm> <ovirt-vm:minGuaranteedMemoryMb type="int">524288</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="56:6f:1a:f4:00:04"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device mac_address="56:6f:1a:f4:00:03"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:poolID>804e857c-461d-4642-86c4-7ff4a5e7da47</ovirt-vm:poolID>
<ovirt-vm:volumeID>0183b177-71b5-4c0e-b7d3-becc5da152ce</ovirt-vm:volumeID>
<ovirt-vm:imageID>8100a756-92a7-4160-9a31-5a843810cb61</ovirt-vm:imageID>
<ovirt-vm:domainID>d19456e4-0051-456e-b33c-57348a78c2e0</ovirt-vm:domainID> </ovirt-vm:device> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior> </ovirt-vm:vm> </metadata> </domain>
2020-08-27 16:35:59,566-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateBrokerVDSCommand, return: , log id: 25bc7e6e 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: 4a0db679 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,576-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] EVENT_ID: USER_STARTED_VM(153), VM jupyter.nix.versatushpc.com.br was started by admin@internal-authz (Host: rhvpower.local.versatushpc.com.br). 2020-08-27 16:36:01,803-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365' was reported as Down on VDS '394e0e68-60f5-42b3-aec4-5d8368efedd1'( rhvpower.local.versatushpc.com.br) 2020-08-27 16:36:01,804-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] START, DestroyVDSCommand(HostName = rhvpower.local.versatushpc.com.br, DestroyVmVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] FINISH, DestroyVDSCommand, return: , log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365'( jupyter.nix.versatushpc.com.br) moved from 'WaitForLaunch' --> 'Down' 2020-08-27 16:36:02,024-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-13) [] EVENT_ID: VM_DOWN_ERROR(119), VM jupyter.nix.versatushpc.com.br is down with error. Exit message: Hook Error: (b'Traceback (most recent call last):\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 124, in <module>\n main(VhostmdConf())\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 47, in __init__\n dom = minidom.parse(path)\n File "/usr/lib64/python3.6/xml/dom/minidom.py", line 1958, in parse\n return expatbuilder.parse(file)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 911, in parse\n result = builder.parseFile(fp)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 211, in parseFile\n parser.Parse("", True)\nxml.parsers.expat.ExpatError: no element found: line 1, column 0\n',).
Yeah, I've never encountered this issue before - it could be a consequence of an improper deployment of that host
2020-08-27 16:36:02,025-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] add VM 'ccccd416-c6b4-4c95-8372-417480be5365'( jupyter.nix.versatushpc.com.br) to rerun treatment 2020-08-27 16:36:02,029-03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-13) [] Rerun VM 'ccccd416-c6b4-4c95-8372-417480be5365'. Called from VDS ' rhvpower.local.versatushpc.com.br' 2020-08-27 16:36:02,041-03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM jupyter.nix.versatushpc.com.br on Host rhvpower.local.versatushpc.com.br. 2020-08-27 16:36:02,066-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5480ad0b 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5480ad0b 2020-08-27 16:36:02,093-03 WARN [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Validation of action 'RunVm' failed for user admin@internal-authz. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS 2020-08-27 16:36:02,093-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,101-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM jupyter.nix.versatushpc.com.br (User: admin@internal-authz). 2020-08-27 16:36:02,105-03 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (EE-ManagedThreadFactory-engine-Thread-145180) [71c52499] Running command: ProcessDownVmCommand internal: true.
ON THE HOST:
/var/log/messages Aug 27 16:36:01 rhvpower python3[73682]: detected unhandled Python exception in '/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd' Aug 27 16:36:01 rhvpower abrt-server[73684]: Deleting problem directory Python3-2020-08-27-16:36:01-73682 (dup of Python3-2020-08-27-16:33:11-73428) Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Activating service name='org.freedesktop.problems' requested by ':1.183' (uid=0 pid=73691 comm="/usr/libexec/platform-python /usr/bin/abrt-action-" label="system_u:system_r:abrt_t:s0-s0:c0.c1023") (using servicehelper) Aug 27 16:36:01 rhvpower dbus-daemon[73694]: [system] Failed to reset fd limit before activating service: org.freedesktop.DBus.Error.AccessDenied: Failed to restore old fd limit: Operation not permitted Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Successfully activated service 'org.freedesktop.problems' Aug 27 16:36:02 rhvpower abrt-server[73684]: /bin/sh: reporter-systemd-journal: command not found
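A note on that traceback: 50_vhostmd dies inside minidom.parse(), i.e. the XML file it reads is empty ("no element found: line 1, column 0"), which fits the pattern of zero-length files on this host. Assuming the hook reads the standard vhostmd configuration at /etc/vhostmd/vhostmd.conf (an assumption - I have not traced the hook), a quick check would be something like:

[root@power ~]# ls -l /etc/vhostmd/vhostmd.conf    # a 0-byte file here would explain the ExpatError
[root@power ~]# rpm -V vdsm-hook-vhostmd vhostmd   # verify both packages against their RPM payloads
[root@power ~]# systemctl status vhostmd           # is the metrics daemon even running?

If the vhostmd metrics integration is not actually needed on this host, removing the hook package (yum remove vdsm-hook-vhostmd) should also be enough to get VMs past the before_vm_start stage.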
Regarding the import problem: that really is a bug, right? I can file it on Red Hat Bugzilla if needed; it's the least I can do in return for the help. Is that OK?
yes, please do
Thanks,
Thanks, michal
Ideas?
On 26 Aug 2020, at 15:04, Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
What a strange thing is happening here:
[root@power ~]# file /usr/bin/vdsm-client /usr/bin/vdsm-client: empty [root@power ~]# ls -l /usr/bin/vdsm-client -rwxr-xr-x. 1 root root 0 Jul 3 06:23 /usr/bin/vdsm-client
A lot of files are just empty. I tried reinstalling vdsm-client, which worked, but there are other zeroed files:
Transaction test succeeded.
Running transaction
  Preparing        :                                                  1/1
  Reinstalling     : vdsm-client-4.40.22-1.el8ev.noarch               1/2
  Cleanup          : vdsm-client-4.40.22-1.el8ev.noarch               2/2
  Running scriptlet: vdsm-client-4.40.22-1.el8ev.noarch               2/2
/sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked.
  Verifying        : vdsm-client-4.40.22-1.el8ev.noarch               1/2
  Verifying        : vdsm-client-4.40.22-1.el8ev.noarch               2/2
Installed products updated.

Reinstalled:
  vdsm-client-4.40.22-1.el8ev.noarch
I've never seen anything like this.
I've already reinstalled the host from scratch and the same thing happens.
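For the record, this is roughly how the extent of the damage can be checked before yet another reinstall (a sketch only; adjust the paths as needed):

[root@power ~]# rpm -Va > /root/rpm-verify.txt   # 'S' and '5' flags mean size/digest differ from the package payload
[root@power ~]# find /usr /etc /lib64 -xdev -type f -size 0 | xargs -r rpm -qf | sort -u   # which packages own the 0-byte files

The host-deploy logs on the engine machine, under /var/log/ovirt-engine/host-deploy/, may also show whether the deployment step had already reported problems.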
On 26 Aug 2020, at 14:28, Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hello Arik, This is probably the issue. Output totally empty:
[root@power ~]# vdsm-client Host getCapabilities [root@power ~]#
Here are the packages installed on the machine: (grepped ovirt and vdsm on rpm -qa) ovirt-imageio-daemon-2.0.8-1.el8ev.ppc64le ovirt-imageio-client-2.0.8-1.el8ev.ppc64le ovirt-host-4.4.1-4.el8ev.ppc64le ovirt-vmconsole-host-1.0.8-1.el8ev.noarch ovirt-host-dependencies-4.4.1-4.el8ev.ppc64le ovirt-imageio-common-2.0.8-1.el8ev.ppc64le ovirt-vmconsole-1.0.8-1.el8ev.noarch vdsm-hook-vmfex-dev-4.40.22-1.el8ev.noarch vdsm-hook-fcoe-4.40.22-1.el8ev.noarch vdsm-hook-ethtool-options-4.40.22-1.el8ev.noarch vdsm-hook-openstacknet-4.40.22-1.el8ev.noarch vdsm-common-4.40.22-1.el8ev.noarch vdsm-python-4.40.22-1.el8ev.noarch vdsm-jsonrpc-4.40.22-1.el8ev.noarch vdsm-api-4.40.22-1.el8ev.noarch vdsm-yajsonrpc-4.40.22-1.el8ev.noarch vdsm-4.40.22-1.el8ev.ppc64le vdsm-network-4.40.22-1.el8ev.ppc64le vdsm-http-4.40.22-1.el8ev.noarch vdsm-client-4.40.22-1.el8ev.noarch vdsm-hook-vhostmd-4.40.22-1.el8ev.noarch
Any ideias to try?
Thanks.

Etc/GMT</TimeZone><default_boot_sequence>9</default_boot_sequence><Generation>8</Generation><ClusterCompatibilityVersion>4.3</ClusterCompatibilityVersion><V mType>1</VmType><ResumeBehavior>AUTO_RESUME</ResumeBehavior><MinAllocatedMem>2730</MinAllocatedMem><IsStateless>false</IsStateless><IsRunAndPause>false</IsRu nAndPause><AutoStartup>false</AutoStartup><Priority>1</Priority><CreatedByUserId>6ea16f22-45d7-11ea-bd83-00163e518b7c</CreatedByUserId><MigrationSupport>0</M igrationSupport><IsBootMenuEnabled>false</IsBootMenuEnabled><IsSpiceFileTransferEnabled>true</IsSpiceFileTransferEnabled><IsSpiceCopyPasteEnabled>true</IsSpi ceCopyPasteEnabled><AllowConsoleReconnect>true</AllowConsoleReconnect><ConsoleDisconnectAction>LOCK_SCREEN</ConsoleDisconnectAction><CustomEmulatedMachine></ CustomEmulatedMachine><BiosType>0</BiosType><CustomCpuName></CustomCpuName><PredefinedProperties></PredefinedProperties><UserDefinedProperties></UserDefinedP roperties><MaxMemorySizeMb>16384</MaxMemorySizeMb><MultiQueuesEnabled>true</MultiQueuesEnabled><UseHostCpu>false</UseHostCpu><ClusterName>Blastoise</ClusterN ame><TemplateId>00000000-0000-0000-0000-000000000000</TemplateId><TemplateName>Blank</TemplateName><IsInitilized>true</IsInitilized><Origin>0</Origin><quota_ id>32644894-755e-4588-b967-8fb9dc327795</quota_id><DefaultDisplayType>2</DefaultDisplayType><TrustedService>false</TrustedService><OriginalTemplateId>0000000 0-0000-0000-0000-000000000000</OriginalTemplateId><OriginalTemplateName>Blank</OriginalTemplateName><CpuPinning></CpuPinning><UseLatestVersion>false</UseLate stVersion><StopTime>2020/08/20 17:52:35</StopTime><Section ovf:id="46ad1d80-2649-48f5-92e6-e5489d11d30c" ovf:required="false" xsi:type="ovf:OperatingSystemSe ction_Type"><Info>Guest Operating System</Info><Description>other_linux_ppc64</Description></Section><Section xsi:type="ovf:VirtualHardwareSection_Type"><Inf o>2 CPU, 4096 Memory</Info><System><vssd:VirtualSystemType>ENGINE 4.1.0.0</vssd:VirtualSystemType></System><Item><rasd:Caption>2 virtual cpu</rasd:Caption><r asd:Description>Number of virtual CPU</rasd:Description><rasd:InstanceId>1</rasd:InstanceId><rasd:ResourceType>3</rasd:ResourceType><rasd:num_of_sockets>2</r asd:num_of_sockets><rasd:cpu_per_socket>1</rasd:cpu_per_socket><rasd:threads_per_cpu>1</rasd:threads_per_cpu><rasd:max_num_of_vcpus>16</rasd:max_num_of_vcpus <rasd:VirtualQuantity>2</rasd:VirtualQuantity></Item><Item><rasd:Caption>4096 MB of memory</rasd:Caption><rasd:Description>Memory Size</rasd:Description><ra sd:InstanceId>2</rasd:InstanceId><rasd:ResourceType>4</rasd:ResourceType><rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits><rasd:VirtualQuantity>4096</ra sd:VirtualQuantity></Item><Item><rasd:Caption>energy.versatushpc.com.br_Disk1</rasd:Caption><rasd:InstanceId>b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:Insta nceId><rasd:ResourceType>17</rasd:ResourceType><rasd:HostResource>775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:HostResourc e><rasd:Parent>00000000-0000-0000-0000-000000000000</rasd:Parent><rasd:Template>00000000-0000-0000-0000-000000000000</rasd:Template><rasd:ApplicationList></r asd:ApplicationList><rasd:StorageId>d19456e4-0051-456e-b33c-57348a78c2e0</rasd:StorageId><rasd:StoragePoolId>6c54f91e-89bf-45b4-bc48-56e74c4efd5e</rasd:Stora gePoolId><rasd:CreationDate>2020/08/19 20:13:05</rasd:CreationDate><rasd:LastModified>1970/01/01 00:00:00</rasd:LastModified><rasd:last_modified_date>2020/08 /20 
18:37:41</rasd:last_modified_date><Type>disk</Type><Device>disk</Device><rasd:Address>{type=drive, bus=0, controller=1, target=0, unit=0}</rasd:Address>< BootOrder>1</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-775b24a9-6a32-431a-831f-4ac9b3b31152</Alias></Item><Item><rasd:Capt ion>Ethernet adapter on legacyservers</rasd:Caption><rasd:InstanceId>e6e37ae1-f263-4986-a039-e8e01e72d1f4</rasd:InstanceId><rasd:ResourceType>10</rasd:Resour ceType><rasd:OtherResourceType>legacyservers</rasd:OtherResourceType><rasd:ResourceSubType>3</rasd:ResourceSubType><rasd:Connection>legacyservers</rasd:Conne ction><rasd:Linked>true</rasd:Linked><rasd:Name>nic1</rasd:Name><rasd:ElementName>nic1</rasd:ElementName><rasd:MACAddress>56:6f:f0:b3:00:23</rasd:MACAddress> <rasd:speed>10000</rasd:speed><Type>interface</Type><Device>bridge</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><I sReadOnly>false</IsReadOnly><Alias>ua-e6e37ae1-f263-4986-a039-e8e01e72d1f4</Alias></Item><Item><rasd:Caption>USB Controller</rasd:Caption><rasd:InstanceId>3< /rasd:InstanceId><rasd:ResourceType>23</rasd:ResourceType><rasd:UsbPolicy>DISABLED</rasd:UsbPolicy></Item><Item><rasd:Caption>Graphical Controller</rasd:Capt ion><rasd:InstanceId>1440c749-728e-4a86-afc1-8237c6055fa5</rasd:InstanceId><rasd:ResourceType>20</rasd:ResourceType><rasd:VirtualQuantity>1</rasd:VirtualQuan tity><rasd:SinglePciQxl>false</rasd:SinglePciQxl><Type>video</Type><Device>vga</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</ IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-1440c749-728e-4a86-afc1-8237c6055fa5</Alias><SpecParams><vram>16384</vram></SpecParams></Item><Item><rasd:C aption>Graphical Framebuffer</rasd:Caption><rasd:InstanceId>603e7f0c-8d28-4c3e-bd90-c5685b752100</rasd:InstanceId><rasd:ResourceType>26</rasd:ResourceType><T ype>graphics</Type><Device>vnc</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>< /Alias></Item><Item><rasd:Caption>CDROM</rasd:Caption><rasd:InstanceId>3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</rasd:InstanceId><rasd:ResourceType>15</rasd:Reso urceType><Type>disk</Type><Device>cdrom</Device><rasd:Address>{type=drive, bus=0, controller=0, target=0, unit=2}</rasd:Address><BootOrder>2</BootOrder><IsPl ugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</Alias><SpecParams><path>CentOS-8.1.1911-x86_64-boot.iso</p ath></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>66f3a2b8-d2c5-4032-9f10-8742d65a0a3e</rasd:InstanceId><Type>controller </Type><Device>scsi</Device><rasd:Address>{type=spapr-vio}</rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Al ias></Alias><SpecParams><index>0</index></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>e065acb2-e7db-4f55-a1df-385f19299b d0</rasd:InstanceId><Type>rng</Type><Device>virtio</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false< /IsReadOnly><Alias>ua-e065acb2-e7db-4f55-a1df-385f19299bd0</Alias><SpecParams><source>urandom</source></SpecParams></Item><Item><rasd:ResourceType>0</rasd:Re
<BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-7b4c4ef6-2a9a-4120-b838-3127db0fd703</Alias><SpecParams><model>vir tio</model></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>5aade6c7-8f77-4fea-a5de-66350b214935</rasd:InstanceId><Type>con
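The OVF excerpt above (part of the unregistered_ovf_of_entities dump) is where the problem is visible: the guest OS description stored for the VM is other_linux_ppc64, which, per the discussion further down, the engine apparently confuses with plain other_linux and therefore registers the entry as x86_64. To see what is actually stored for all unregistered VMs before changing anything, a read-only query along these lines should do (a sketch: entity_name and architecture appear in this thread, while the storage_domain_id column name is my assumption):

engine=# select entity_name, architecture, storage_domain_id from unregistered_ovf_of_entities;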
On 27 Aug 2020, at 16:48, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 10:39 PM Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: On 27 Aug 2020, at 16:26, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 10:23 PM Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 10:13 PM Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: On 27 Aug 2020, at 16:03, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Hi Michal, On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skrivanek@redhat.com<mailto:michal.skrivanek@redhat.com>> wrote: On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Okay here we go Arik. With your insight I’ve done the following: # rpm -Va This showed what’s zeroed on the machine, since it was a lot of things, I’ve just gone crazy and done: you should still have host deploy logs on the engine machine. it’s weird it succeeded, unless it somehow happened afterwards? It only succeeded my yum reinstall rampage. yum list installed | cut -f 1 -d " " > file yum -y reinstall `cat file | xargs` Reinstalled everything. Everything worked as expected and I finally added the machine back to the cluster. It’s operational. eh, I wouldn’t trust it much. did you run redeploy at least? I’ve done reinstall on the web interface of the engine. I can reinstall the host, there’s nothing running on it… gonna try a third format. Now I’ve another issue, I have 3 VM’s that are ppc64le, when trying to import them, the Hosted Engine identifies them as x86_64: <PastedGraphic-2.png> So… This appears to be a bug. Any ideia on how to force it back to ppc64? I can’t manually force the import on the Hosted Engine since there’s no buttons to do this… how exactly did you import them? could be a bug indeed. we don’t support changing it as it doesn’t make sense, the guest can’t be converted Yeah. I done the normal procedure, added the storage domain to the engine and clicked on “Import VM”. Immediately it was detected as x86_64. Since I wasn’t able to upgrade my environment from 4.3.10 to 4.4.1 due to random errors when redeploying the engine with the backup from 4.3.10, I just reinstalled it, reconfigured everything and them imported the storage domains. I don’t know where the information about architecture is stored in the storage domain, I tried to search for some metadata files inside the domain but nothing come up. Is there a way to force this change? It must be a way. I even tried to import the machine as x86_64. So I can delete the VM and just reattach the disks in a new only, effectively not losing the data, but… <PastedGraphic-1.png> Yeah, so something is broken. The check during the import appears to be OK, but the interface does not me allow to import it to the ppc64le machine, since it’s read as x86_64. 
Could you please provide the output of the following query from the database: select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br<http://energy.versatushpc.com.br/>'; Sure, there you go: 46ad1d80-2649-48f5-92e6-e5489d11d30c | energy.versatushpc.com.br<http://energy.versatushpc.com.br/> | VM | 1 | | d19456e4-0051-456e-b33c-57348a78c2e0 | <?xml version="1.0" encoding="UTF-8"?><ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim -schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingDa..." xmlns:xsi="http://ww<http://ww/> w.w3.org/2001/XMLSchema-instance<http://w.w3.org/2001/XMLSchema-instance>" ovf:version="4.1.0.0"><References><File ovf:href="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af " ovf:id="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="512" ovf:description="Active VM" ovf:disk_storage_type="IMAGE" ovf:cinder_volume_type=""></File></R eferences><NetworkSection><Info>List of networks</Info><Network ovf:name="legacyservers"></Network></NetworkSection><Section xsi:type="ovf:DiskSection_Type"> <Info>List of Virtual Disks</Info><Disk ovf:diskId="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="40" ovf:actual_size="1" ovf:vm_snapshot_id="6de58683-c586 -4e97-b0e8-ee7ee3baf754" ovf:parentRef="" ovf:fileRef="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:format="http://www.vmwa<http://www.vmwa/> re.com/specifications/vmdk.html#sparse<http://re.com/specifications/vmdk.html#sparse>" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:disk-interface="VirtIO_SCSI" ovf:read-only="false" ovf:shareable ="false" ovf:boot="true" ovf:pass-discard="false" ovf:disk-alias="energy.versatushpc.com.br_Disk1" ovf:disk-description="" ovf:wipe-after-delete="false"></Di sk></Section><Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type"><Name>energy.versatushpc.com.br<http://energy.versatushpc.com.br/></Name><Description>Holds Kosen backend and frontend prod services (nginx + docker)</Description><Comment></Comment><CreationDate>2020/08/19 20:11:33</CreationDate><ExportDate>2020/08/20 18:37:41</ExportDate><Delet eProtected>false</DeleteProtected><SsoMethod>guest_agent</SsoMethod><IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>1</NumOfIoThreads><TimeZone sourceType><rasd:InstanceId>7b4c4ef6-2a9a-4120-b838-3127db0fd703</rasd:InstanceId><Type>balloon</Type><Device>memballoon</Device><rasd:Address></rasd:Address troller</Type><Device>virtio-scsi</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Ali as></Alias><SpecParams><ioThreadId></ioThreadId></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>4d4d7bfd-b1e8-45c3-a5e8-7e 0b7773bbf2</rasd:InstanceId><Type>controller</Type><Device>virtio-serial</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlu gged><IsReadOnly>false</IsReadOnly><Alias>58ca7b19-0071-00c0-01d6-000000000212</Alias></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>9 cea63da-7afd-41d4-925f-369f993b280f</rasd:InstanceId><Type>controller</Type><Device>usb</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugg ed>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index><model>nec-xhci</model></SpecParams></Item></Section><Section xs 
i:type="ovf:SnapshotsSection_Type"><Snapshot ovf:id="6de58683-c586-4e97-b0e8-ee7ee3baf754"><Type>ACTIVE</Type><Description>Active VM</Description><CreationDa te>2020/08/19 20:11:33</CreationDate></Snapshot></Section></Content></ovf:Envelope> | | 0 Thank you! thanks so yeah - we may have an issue with that operating system 'other_linux_ppc64' that has the same name as 'other_linux' in our os-info configuration as a possible workaround, assuming all those unregistered VMs you can try to override the architecture with: update unregistered_ovf_of_entities set architecture = 2; as a possible workaround, assuming all those unregistered VMs are from clusters with the same architecture, you can try to override the architecture with: * Wooha!!! engine=# update unregistered_ovf_of_entities set architecture = 2; UPDATE 8 <PastedGraphic-2.png> <PastedGraphic-3.png> Worked and the VMs are now imported. But… hahaha. I have another issues, any of the three VM’s starts now. Perhaps I’ll reinstall the host for the third time as recommended by Michal, anyway here are the logs that I was able to fetch during the failed power on process: ON THE ENGINE: ==> /var/log/ovirt-engine/engine.log <== 2020-08-27 16:35:59,437-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5e701801 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5e701801 2020-08-27 16:35:59,500-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Running command: RunVmCommand internal: false. 
Entities affected : ID: ccccd416-c6b4-4c95-8372-417480be5365 Type: VMAction group RUN_VM with role type USER 2020-08-27 16:35:59,506-03 INFO [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Emulated machine 'pseries-rhel8.2.0' which is different than that of the cluster is set for 'jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>'(ccccd416-c6b4-4c95-8372-417480be5365) 2020-08-27 16:35:59,528-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@14322872'}), log id: 7709ba81 2020-08-27 16:35:59,530-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 7709ba81 2020-08-27 16:35:59,533-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>]'}), log id: 4a0db679 2020-08-27 16:35:59,534-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateBrokerVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>, CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>]'}), log id: 25bc7e6e 2020-08-27 16:35:59,548-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0"> <name>jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/></name> <uuid>ccccd416-c6b4-4c95-8372-417480be5365</uuid> <memory>536870912</memory> <currentMemory>536870912</currentMemory> <vcpu current="128">384</vcpu> <clock offset="variable" adjustment="0"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> </clock> <cpu mode="host-model"> <model>power9</model> <topology cores="16" threads="4" sockets="6"/> <numa> <cell id="0" cpus="0-383" memory="536870912"/> </numa> </cpu> <cputune/> <qemu:capabilities> <qemu:add capability="blockdev"/> <qemu:add capability="incremental-backup"/> </qemu:capabilities> <devices> <input type="tablet" bus="usb"/> <channel type="unix"> <target type="virtio" name="ovirt-guest-agent.0"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target type="virtio" name="org.qemu.guest_agent.0"/> <source mode="bind" 
path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.org.qemu.guest_agent.0"/> </channel> <emulator text="/usr/bin/qemu-system-ppc64"/> <controller type="scsi" model="ibmvscsi" index="0"/> <rng model="virtio"> <backend model="random">/dev/urandom</backend> <alias name="ua-1e18aea0-076a-40d0-9b85-21ac6049a94d"/> </rng> <controller type="usb" model="nec-xhci" index="0"> <alias name="ua-47e67d9f-a191-4dc0-9c09-b2db9f1d373e"/> </controller> <controller type="virtio-serial" index="0" ports="16"> <alias name="ua-4d92fb2f-aaf6-465c-8571-e49e1d12191d"/> </controller> <watchdog model="i6300esb" action="none"> <alias name="ua-7b756cc3-c9ec-4b79-84ef-d6ad15021f1a"/> </watchdog> <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"> <listen type="network" network="vdsm-ovirtmgmt"/> </graphics> <controller type="scsi" model="virtio-scsi" index="1"> <alias name="ua-8e146e76-e038-4f8a-a526-e7e1c626f54e"/> </controller> <memballoon model="virtio"> <stats period="5"/> <alias name="ua-d8d37c06-de66-4912-bf8d-fc1017c85c68"/> </memballoon> <video> <model type="vga" vram="16384" heads="1"/> <alias name="ua-e96e6050-b1aa-4664-a856-8df923e3dc66"/> </video> <controller type="scsi" index="0"> <address type="spapr-vio"/> </controller> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="servers"/> <driver queues="4" name="vhost"/> <alias name="ua-152c3f8a-69d2-420f-8b6a-c1fb4a11594f"/> <mac address="56:6f:1a:f4:00:03"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="nfs"/> <driver queues="4" name="vhost"/> <alias name="ua-1369da6c-4f9b-4fe3-9f45-7b37ecb34ac2"/> <mac address="56:6f:1a:f4:00:04"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <disk type="file" device="cdrom" snapshot="no"> <driver name="qemu" type="raw" error_policy="report"/> <source file="" startupPolicy="optional"> <seclabel model="dac" type="none" relabel="no"/> </source> <target dev="sdc" bus="scsi"/> <readonly/> <alias name="ua-2d6db7ca-2fe1-4af4-9741-7b5332805d94"/> <address bus="0" controller="0" unit="2" type="drive" target="0"/> </disk> <disk snapshot="no" type="file" device="disk"> <target dev="sda" bus="scsi"/> <source file="/rhev/data-center/804e857c-461d-4642-86c4-7ff4a5e7da47/d19456e4-0051-456e-b33c-57348a78c2e0/images/8100a756-92a7-4160-9a31-5a843810cb61/0183b177-71b5-4c0e-b7d3-becc5da152ce"> <seclabel model="dac" type="none" relabel="no"/> </source> <driver name="qemu" io="threads" type="raw" error_policy="stop" cache="none"/> <alias name="ua-8100a756-92a7-4160-9a31-5a843810cb61"/> <address bus="0" controller="1" unit="0" type="drive" target="0"/> <boot order="1"/> <serial>8100a756-92a7-4160-9a31-5a843810cb61</serial> </disk> <lease> <key>ccccd416-c6b4-4c95-8372-417480be5365</key> <lockspace>d19456e4-0051-456e-b33c-57348a78c2e0</lockspace> <target offset="24117248" path="/rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm/d19456e4-0051-456e-b33c-57348a78c2e0/dom_md/xleases"/> </lease> </devices> <os> <type arch="ppc64" machine="pseries-rhel8.2.0">hvm</type> </os> <metadata> <ovirt-tune:qos/> <ovirt-vm:vm> <ovirt-vm:minGuaranteedMemoryMb type="int">524288</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="56:6f:1a:f4:00:04"> <ovirt-vm:custom/> </ovirt-vm:device> 
<ovirt-vm:device mac_address="56:6f:1a:f4:00:03"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sda"> <ovirt-vm:poolID>804e857c-461d-4642-86c4-7ff4a5e7da47</ovirt-vm:poolID> <ovirt-vm:volumeID>0183b177-71b5-4c0e-b7d3-becc5da152ce</ovirt-vm:volumeID> <ovirt-vm:imageID>8100a756-92a7-4160-9a31-5a843810cb61</ovirt-vm:imageID> <ovirt-vm:domainID>d19456e4-0051-456e-b33c-57348a78c2e0</ovirt-vm:domainID> </ovirt-vm:device> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior> </ovirt-vm:vm> </metadata> </domain> 2020-08-27 16:35:59,566-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateBrokerVDSCommand, return: , log id: 25bc7e6e 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: 4a0db679 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,576-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] EVENT_ID: USER_STARTED_VM(153), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> was started by admin@internal-authz (Host: rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>). 2020-08-27 16:36:01,803-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365' was reported as Down on VDS '394e0e68-60f5-42b3-aec4-5d8368efedd1'(rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>) 2020-08-27 16:36:01,804-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] START, DestroyVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>, DestroyVmVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] FINISH, DestroyVDSCommand, return: , log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>) moved from 'WaitForLaunch' --> 'Down' 2020-08-27 16:36:02,024-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-13) [] EVENT_ID: VM_DOWN_ERROR(119), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> is down with error. 
Exit message: Hook Error: (b'Traceback (most recent call last):\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 124, in <module>\n main(VhostmdConf())\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 47, in __init__\n dom = minidom.parse(path)\n File "/usr/lib64/python3.6/xml/dom/minidom.py", line 1958, in parse\n return expatbuilder.parse(file)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 911, in parse\n result = builder.parseFile(fp)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 211, in parseFile\n parser.Parse("", True)\nxml.parsers.expat.ExpatError: no element found: line 1, column 0\n',). yeah, I never encountered this issue before - could be a consequence of an improper deployment of that host Starting reinstall right now. But I’ve a question, is this documentation right? For Red Hat Enterprise Linux 8 hosts, little endian, on IBM POWER9 hardware: # subscription-manager repos \ --disable='*' \ --enable=rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms \ --enable=advanced-virt-for-rhel-8-ppc64le-rpms \ --enable=ansible-2.9-for-rhel-8-ppc64le-rpms I think it’s missing: --enable=rhel-8-for-ppc64le-baseos-rpms \ --enable=rhel-8-for-ppc64le-appstream-rpms This can be found here: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/htm... I assumed that in fact information is missing on this documentation. 2020-08-27 16:36:02,025-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] add VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>) to rerun treatment 2020-08-27 16:36:02,029-03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-13) [] Rerun VM 'ccccd416-c6b4-4c95-8372-417480be5365'. Called from VDS 'rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>' 2020-08-27 16:36:02,041-03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> on Host rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>. 2020-08-27 16:36:02,066-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5480ad0b 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5480ad0b 2020-08-27 16:36:02,093-03 WARN [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Validation of action 'RunVm' failed for user admin@internal-authz. 
Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS 2020-08-27 16:36:02,093-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,101-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> (User: admin@internal-authz). 2020-08-27 16:36:02,105-03 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (EE-ManagedThreadFactory-engine-Thread-145180) [71c52499] Running command: ProcessDownVmCommand internal: true. ON THE HOST: /var/log/messages Aug 27 16:36:01 rhvpower python3[73682]: detected unhandled Python exception in '/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd' Aug 27 16:36:01 rhvpower abrt-server[73684]: Deleting problem directory Python3-2020-08-27-16:36:01-73682 (dup of Python3-2020-08-27-16:33:11-73428) Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Activating service name='org.freedesktop.problems' requested by ':1.183' (uid=0 pid=73691 comm="/usr/libexec/platform-python /usr/bin/abrt-action-" label="system_u:system_r:abrt_t:s0-s0:c0.c1023") (using servicehelper) Aug 27 16:36:01 rhvpower dbus-daemon[73694]: [system] Failed to reset fd limit before activating service: org.freedesktop.DBus.Error.AccessDenied: Failed to restore old fd limit: Operation not permitted Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Successfully activated service 'org.freedesktop.problems' Aug 27 16:36:02 rhvpower abrt-server[73684]: /bin/sh: reporter-systemd-journal: command not found Regarding the import problem. Is that really a bug right? I can describe it on Red Hat Bugzilla if I need to. It’s the minimal that I can do for the help. Is it ok? yes, please do There you go: https://bugzilla.redhat.com/show_bug.cgi?id=1873322 Thank you guys, I will report back after the reinstallation of the host. Thanks, Thanks, michal Ideias? On 26 Aug 2020, at 15:04, Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: What a strange thing is happening here: [root@power ~]# file /usr/bin/vdsm-client /usr/bin/vdsm-client: empty [root@power ~]# ls -l /usr/bin/vdsm-client -rwxr-xr-x. 1 root root 0 Jul 3 06:23 /usr/bin/vdsm-client A lot of files are just empty, I’ve tried reinstalling vdsm-client, it worked, but there’s other zeroed files: Transaction test succeeded. Running transaction Preparing : 1/1 Reinstalling : vdsm-client-4.40.22-1.el8ev.noarch 1/2 Cleanup : vdsm-client-4.40.22-1.el8ev.noarch 2/2 Running scriptlet: vdsm-client-4.40.22-1.el8ev.noarch 2/2 /sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked. 

On 27 Aug 2020, at 17:50, Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: On 27 Aug 2020, at 16:48, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 10:39 PM Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: On 27 Aug 2020, at 16:26, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 10:23 PM Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 10:13 PM Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: On 27 Aug 2020, at 16:03, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Hi Michal, On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skrivanek@redhat.com<mailto:michal.skrivanek@redhat.com>> wrote: On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Okay here we go Arik. With your insight I’ve done the following: # rpm -Va This showed what’s zeroed on the machine, since it was a lot of things, I’ve just gone crazy and done: you should still have host deploy logs on the engine machine. it’s weird it succeeded, unless it somehow happened afterwards? It only succeeded my yum reinstall rampage. yum list installed | cut -f 1 -d " " > file yum -y reinstall `cat file | xargs` Reinstalled everything. Everything worked as expected and I finally added the machine back to the cluster. It’s operational. eh, I wouldn’t trust it much. did you run redeploy at least? I’ve done reinstall on the web interface of the engine. I can reinstall the host, there’s nothing running on it… gonna try a third format. Now I’ve another issue, I have 3 VM’s that are ppc64le, when trying to import them, the Hosted Engine identifies them as x86_64: <PastedGraphic-2.png> So… This appears to be a bug. Any ideia on how to force it back to ppc64? I can’t manually force the import on the Hosted Engine since there’s no buttons to do this… how exactly did you import them? could be a bug indeed. we don’t support changing it as it doesn’t make sense, the guest can’t be converted Yeah. I done the normal procedure, added the storage domain to the engine and clicked on “Import VM”. Immediately it was detected as x86_64. Since I wasn’t able to upgrade my environment from 4.3.10 to 4.4.1 due to random errors when redeploying the engine with the backup from 4.3.10, I just reinstalled it, reconfigured everything and them imported the storage domains. I don’t know where the information about architecture is stored in the storage domain, I tried to search for some metadata files inside the domain but nothing come up. Is there a way to force this change? It must be a way. I even tried to import the machine as x86_64. So I can delete the VM and just reattach the disks in a new only, effectively not losing the data, but… <PastedGraphic-1.png> Yeah, so something is broken. The check during the import appears to be OK, but the interface does not me allow to import it to the ppc64le machine, since it’s read as x86_64. 
Could you please provide the output of the following query from the database: select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br<http://energy.versatushpc.com.br/>'; Sure, there you go: 46ad1d80-2649-48f5-92e6-e5489d11d30c | energy.versatushpc.com.br<http://energy.versatushpc.com.br/> | VM | 1 | | d19456e4-0051-456e-b33c-57348a78c2e0 | <?xml version="1.0" encoding="UTF-8"?><ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim -schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingDa..." xmlns:xsi="http://ww<http://ww/> w.w3.org/2001/XMLSchema-instance<http://w.w3.org/2001/XMLSchema-instance>" ovf:version="4.1.0.0"><References><File ovf:href="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af " ovf:id="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="512" ovf:description="Active VM" ovf:disk_storage_type="IMAGE" ovf:cinder_volume_type=""></File></R eferences><NetworkSection><Info>List of networks</Info><Network ovf:name="legacyservers"></Network></NetworkSection><Section xsi:type="ovf:DiskSection_Type"> <Info>List of Virtual Disks</Info><Disk ovf:diskId="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="40" ovf:actual_size="1" ovf:vm_snapshot_id="6de58683-c586 -4e97-b0e8-ee7ee3baf754" ovf:parentRef="" ovf:fileRef="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:format="http://www.vmwa<http://www.vmwa/> re.com/specifications/vmdk.html#sparse<http://re.com/specifications/vmdk.html#sparse>" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:disk-interface="VirtIO_SCSI" ovf:read-only="false" ovf:shareable ="false" ovf:boot="true" ovf:pass-discard="false" ovf:disk-alias="energy.versatushpc.com.br_Disk1" ovf:disk-description="" ovf:wipe-after-delete="false"></Di sk></Section><Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type"><Name>energy.versatushpc.com.br<http://energy.versatushpc.com.br/></Name><Description>Holds Kosen backend and frontend prod services (nginx + docker)</Description><Comment></Comment><CreationDate>2020/08/19 20:11:33</CreationDate><ExportDate>2020/08/20 18:37:41</ExportDate><Delet eProtected>false</DeleteProtected><SsoMethod>guest_agent</SsoMethod><IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>1</NumOfIoThreads><TimeZone sourceType><rasd:InstanceId>7b4c4ef6-2a9a-4120-b838-3127db0fd703</rasd:InstanceId><Type>balloon</Type><Device>memballoon</Device><rasd:Address></rasd:Address troller</Type><Device>virtio-scsi</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Ali as></Alias><SpecParams><ioThreadId></ioThreadId></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>4d4d7bfd-b1e8-45c3-a5e8-7e 0b7773bbf2</rasd:InstanceId><Type>controller</Type><Device>virtio-serial</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlu gged><IsReadOnly>false</IsReadOnly><Alias>58ca7b19-0071-00c0-01d6-000000000212</Alias></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>9 cea63da-7afd-41d4-925f-369f993b280f</rasd:InstanceId><Type>controller</Type><Device>usb</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugg ed>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index><model>nec-xhci</model></SpecParams></Item></Section><Section xs 
i:type="ovf:SnapshotsSection_Type"><Snapshot ovf:id="6de58683-c586-4e97-b0e8-ee7ee3baf754"><Type>ACTIVE</Type><Description>Active VM</Description><CreationDa te>2020/08/19 20:11:33</CreationDate></Snapshot></Section></Content></ovf:Envelope> | | 0 Thank you! thanks so yeah - we may have an issue with that operating system 'other_linux_ppc64' that has the same name as 'other_linux' in our os-info configuration as a possible workaround, assuming all those unregistered VMs you can try to override the architecture with: update unregistered_ovf_of_entities set architecture = 2; as a possible workaround, assuming all those unregistered VMs are from clusters with the same architecture, you can try to override the architecture with: * Wooha!!! engine=# update unregistered_ovf_of_entities set architecture = 2; UPDATE 8 <PastedGraphic-2.png> <PastedGraphic-3.png> Worked and the VMs are now imported. But… hahaha. I have another issues, any of the three VM’s starts now. Perhaps I’ll reinstall the host for the third time as recommended by Michal, anyway here are the logs that I was able to fetch during the failed power on process: ON THE ENGINE: ==> /var/log/ovirt-engine/engine.log <== 2020-08-27 16:35:59,437-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5e701801 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5e701801 2020-08-27 16:35:59,500-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Running command: RunVmCommand internal: false. 
Entities affected : ID: ccccd416-c6b4-4c95-8372-417480be5365 Type: VMAction group RUN_VM with role type USER 2020-08-27 16:35:59,506-03 INFO [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Emulated machine 'pseries-rhel8.2.0' which is different than that of the cluster is set for 'jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>'(ccccd416-c6b4-4c95-8372-417480be5365) 2020-08-27 16:35:59,528-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@14322872'}), log id: 7709ba81 2020-08-27 16:35:59,530-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 7709ba81 2020-08-27 16:35:59,533-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>]'}), log id: 4a0db679 2020-08-27 16:35:59,534-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateBrokerVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>, CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>]'}), log id: 25bc7e6e 2020-08-27 16:35:59,548-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0"> <name>jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/></name> <uuid>ccccd416-c6b4-4c95-8372-417480be5365</uuid> <memory>536870912</memory> <currentMemory>536870912</currentMemory> <vcpu current="128">384</vcpu> <clock offset="variable" adjustment="0"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> </clock> <cpu mode="host-model"> <model>power9</model> <topology cores="16" threads="4" sockets="6"/> <numa> <cell id="0" cpus="0-383" memory="536870912"/> </numa> </cpu> <cputune/> <qemu:capabilities> <qemu:add capability="blockdev"/> <qemu:add capability="incremental-backup"/> </qemu:capabilities> <devices> <input type="tablet" bus="usb"/> <channel type="unix"> <target type="virtio" name="ovirt-guest-agent.0"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target type="virtio" name="org.qemu.guest_agent.0"/> <source mode="bind" 
path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.org.qemu.guest_agent.0"/> </channel> <emulator text="/usr/bin/qemu-system-ppc64"/> <controller type="scsi" model="ibmvscsi" index="0"/> <rng model="virtio"> <backend model="random">/dev/urandom</backend> <alias name="ua-1e18aea0-076a-40d0-9b85-21ac6049a94d"/> </rng> <controller type="usb" model="nec-xhci" index="0"> <alias name="ua-47e67d9f-a191-4dc0-9c09-b2db9f1d373e"/> </controller> <controller type="virtio-serial" index="0" ports="16"> <alias name="ua-4d92fb2f-aaf6-465c-8571-e49e1d12191d"/> </controller> <watchdog model="i6300esb" action="none"> <alias name="ua-7b756cc3-c9ec-4b79-84ef-d6ad15021f1a"/> </watchdog> <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"> <listen type="network" network="vdsm-ovirtmgmt"/> </graphics> <controller type="scsi" model="virtio-scsi" index="1"> <alias name="ua-8e146e76-e038-4f8a-a526-e7e1c626f54e"/> </controller> <memballoon model="virtio"> <stats period="5"/> <alias name="ua-d8d37c06-de66-4912-bf8d-fc1017c85c68"/> </memballoon> <video> <model type="vga" vram="16384" heads="1"/> <alias name="ua-e96e6050-b1aa-4664-a856-8df923e3dc66"/> </video> <controller type="scsi" index="0"> <address type="spapr-vio"/> </controller> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="servers"/> <driver queues="4" name="vhost"/> <alias name="ua-152c3f8a-69d2-420f-8b6a-c1fb4a11594f"/> <mac address="56:6f:1a:f4:00:03"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="nfs"/> <driver queues="4" name="vhost"/> <alias name="ua-1369da6c-4f9b-4fe3-9f45-7b37ecb34ac2"/> <mac address="56:6f:1a:f4:00:04"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <disk type="file" device="cdrom" snapshot="no"> <driver name="qemu" type="raw" error_policy="report"/> <source file="" startupPolicy="optional"> <seclabel model="dac" type="none" relabel="no"/> </source> <target dev="sdc" bus="scsi"/> <readonly/> <alias name="ua-2d6db7ca-2fe1-4af4-9741-7b5332805d94"/> <address bus="0" controller="0" unit="2" type="drive" target="0"/> </disk> <disk snapshot="no" type="file" device="disk"> <target dev="sda" bus="scsi"/> <source file="/rhev/data-center/804e857c-461d-4642-86c4-7ff4a5e7da47/d19456e4-0051-456e-b33c-57348a78c2e0/images/8100a756-92a7-4160-9a31-5a843810cb61/0183b177-71b5-4c0e-b7d3-becc5da152ce"> <seclabel model="dac" type="none" relabel="no"/> </source> <driver name="qemu" io="threads" type="raw" error_policy="stop" cache="none"/> <alias name="ua-8100a756-92a7-4160-9a31-5a843810cb61"/> <address bus="0" controller="1" unit="0" type="drive" target="0"/> <boot order="1"/> <serial>8100a756-92a7-4160-9a31-5a843810cb61</serial> </disk> <lease> <key>ccccd416-c6b4-4c95-8372-417480be5365</key> <lockspace>d19456e4-0051-456e-b33c-57348a78c2e0</lockspace> <target offset="24117248" path="/rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm/d19456e4-0051-456e-b33c-57348a78c2e0/dom_md/xleases"/> </lease> </devices> <os> <type arch="ppc64" machine="pseries-rhel8.2.0">hvm</type> </os> <metadata> <ovirt-tune:qos/> <ovirt-vm:vm> <ovirt-vm:minGuaranteedMemoryMb type="int">524288</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="56:6f:1a:f4:00:04"> <ovirt-vm:custom/> </ovirt-vm:device> 
<ovirt-vm:device mac_address="56:6f:1a:f4:00:03"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sda"> <ovirt-vm:poolID>804e857c-461d-4642-86c4-7ff4a5e7da47</ovirt-vm:poolID> <ovirt-vm:volumeID>0183b177-71b5-4c0e-b7d3-becc5da152ce</ovirt-vm:volumeID> <ovirt-vm:imageID>8100a756-92a7-4160-9a31-5a843810cb61</ovirt-vm:imageID> <ovirt-vm:domainID>d19456e4-0051-456e-b33c-57348a78c2e0</ovirt-vm:domainID> </ovirt-vm:device> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior> </ovirt-vm:vm> </metadata> </domain> 2020-08-27 16:35:59,566-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateBrokerVDSCommand, return: , log id: 25bc7e6e 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: 4a0db679 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,576-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] EVENT_ID: USER_STARTED_VM(153), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> was started by admin@internal-authz (Host: rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>). 2020-08-27 16:36:01,803-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365' was reported as Down on VDS '394e0e68-60f5-42b3-aec4-5d8368efedd1'(rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>) 2020-08-27 16:36:01,804-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] START, DestroyVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>, DestroyVmVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] FINISH, DestroyVDSCommand, return: , log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>) moved from 'WaitForLaunch' --> 'Down' 2020-08-27 16:36:02,024-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-13) [] EVENT_ID: VM_DOWN_ERROR(119), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> is down with error. 
Exit message: Hook Error: (b'Traceback (most recent call last):\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 124, in <module>\n main(VhostmdConf())\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 47, in __init__\n dom = minidom.parse(path)\n File "/usr/lib64/python3.6/xml/dom/minidom.py", line 1958, in parse\n return expatbuilder.parse(file)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 911, in parse\n result = builder.parseFile(fp)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 211, in parseFile\n parser.Parse("", True)\nxml.parsers.expat.ExpatError: no element found: line 1, column 0\n',). yeah, I never encountered this issue before - could be a consequence of an improper deployment of that host Starting reinstall right now. But I’ve a question, is this documentation right? For Red Hat Enterprise Linux 8 hosts, little endian, on IBM POWER9 hardware: # subscription-manager repos \ --disable='*' \ --enable=rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms \ --enable=advanced-virt-for-rhel-8-ppc64le-rpms \ --enable=ansible-2.9-for-rhel-8-ppc64le-rpms I think it’s missing: --enable=rhel-8-for-ppc64le-baseos-rpms \ --enable=rhel-8-for-ppc64le-appstream-rpms This can be found here: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/htm... I assumed that in fact information is missing on this documentation. Double check, it’s missing. It’s impossible to reinstall the machine only with this repositories. I’ll open another bug. 2020-08-27 16:36:02,025-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] add VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>) to rerun treatment 2020-08-27 16:36:02,029-03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-13) [] Rerun VM 'ccccd416-c6b4-4c95-8372-417480be5365'. Called from VDS 'rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>' 2020-08-27 16:36:02,041-03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> on Host rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>. 2020-08-27 16:36:02,066-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5480ad0b 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5480ad0b 2020-08-27 16:36:02,093-03 WARN [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Validation of action 'RunVm' failed for user admin@internal-authz. 
Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS 2020-08-27 16:36:02,093-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,101-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> (User: admin@internal-authz). 2020-08-27 16:36:02,105-03 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (EE-ManagedThreadFactory-engine-Thread-145180) [71c52499] Running command: ProcessDownVmCommand internal: true. ON THE HOST: /var/log/messages Aug 27 16:36:01 rhvpower python3[73682]: detected unhandled Python exception in '/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd' Aug 27 16:36:01 rhvpower abrt-server[73684]: Deleting problem directory Python3-2020-08-27-16:36:01-73682 (dup of Python3-2020-08-27-16:33:11-73428) Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Activating service name='org.freedesktop.problems' requested by ':1.183' (uid=0 pid=73691 comm="/usr/libexec/platform-python /usr/bin/abrt-action-" label="system_u:system_r:abrt_t:s0-s0:c0.c1023") (using servicehelper) Aug 27 16:36:01 rhvpower dbus-daemon[73694]: [system] Failed to reset fd limit before activating service: org.freedesktop.DBus.Error.AccessDenied: Failed to restore old fd limit: Operation not permitted Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Successfully activated service 'org.freedesktop.problems' Aug 27 16:36:02 rhvpower abrt-server[73684]: /bin/sh: reporter-systemd-journal: command not found Regarding the import problem. Is that really a bug right? I can describe it on Red Hat Bugzilla if I need to. It’s the minimal that I can do for the help. Is it ok? yes, please do There you go: https://bugzilla.redhat.com/show_bug.cgi?id=1873322 Thank you guys, I will report back after the reinstallation of the host. Reinstall now went fine. Now I found that’s something extremely bad when trying to run the VMs. The metadata appears to be corrupted. First it complained about the CPUs, I changed it not the interface just to refresh the metadada: VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br> is down with error. Exit message: internal error: process exited while connecting to monitor: 2020-08-27T22:48:59.533367Z qemu-kvm: warning: Number of hotpluggable cpus requested (384) exceeds the recommended cpus supported by KVM (128) 2020-08-27T22:48:59.537530Z qemu-kvm: -numa node,nodeid=0,cpus=0-383,mem=524288: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead 2020-08-27T22:48:59.833812Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off. After this the error changed to: VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br> is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T22:49:48.876424Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off. 
Then I tried reducing the RAM, but it gave me a warning (not an error): VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br> was configured with 524288MiB of memory while the recommended value range is 256MiB - 65536MiB I’ve lowered it to 65536MiB, and now it complains about multiple SCSI devices: VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br> is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'. So I’ve changed the disk type from VirtIO-SCSI to VirtIO and changed back to VirtIO to VirtIO-SCSI, and some part of the first error came back: VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br> is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T23:09:51.753960Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off. Now I changed the machine to custom cluster emulation to pseries-7.6.0 and the SCSI error is back. And now I’m stuck with it… VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br> is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0’. Thinking in dumping the VM entirely and reimporting the disks… but I created another one, just with plain settings, to see if something boots on this host, and the result was bad: VM ppc64le is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T23:15:44.669298Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=8192: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead 2020-08-27T23:15:44.691077Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off. So, any ideias? In the past I had some issues with SXXM in pseries-7.6.0; I’m not sure if it’s the same issue all over again. Thanks, Thanks, Thanks, michal Ideias? On 26 Aug 2020, at 15:04, Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: What a strange thing is happening here: [root@power ~]# file /usr/bin/vdsm-client /usr/bin/vdsm-client: empty [root@power ~]# ls -l /usr/bin/vdsm-client -rwxr-xr-x. 1 root root 0 Jul 3 06:23 /usr/bin/vdsm-client A lot of files are just empty, I’ve tried reinstalling vdsm-client, it worked, but there’s other zeroed files: Transaction test succeeded. Running transaction Preparing : 1/1 Reinstalling : vdsm-client-4.40.22-1.el8ev.noarch 1/2 Cleanup : vdsm-client-4.40.22-1.el8ev.noarch 2/2 Running scriptlet: vdsm-client-4.40.22-1.el8ev.noarch 2/2 /sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked. 
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked. Verifying : vdsm-client-4.40.22-1.el8ev.noarch 1/2 Verifying : vdsm-client-4.40.22-1.el8ev.noarch 2/2 Installed products updated. Reinstalled: vdsm-client-4.40.22-1.el8ev.noarch I’ve never seen something like this. I’ve already reinstalled the host from the ground and the same thing happens. On 26 Aug 2020, at 14:28, Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Hello Arik, This is probably the issue. 
Output totally empty: [root@power ~]# vdsm-client Host getCapabilities [root@power ~]# Here are the packages installed on the machine: (grepped ovirt and vdsm on rpm -qa) ovirt-imageio-daemon-2.0.8-1.el8ev.ppc64le ovirt-imageio-client-2.0.8-1.el8ev.ppc64le ovirt-host-4.4.1-4.el8ev.ppc64le ovirt-vmconsole-host-1.0.8-1.el8ev.noarch ovirt-host-dependencies-4.4.1-4.el8ev.ppc64le ovirt-imageio-common-2.0.8-1.el8ev.ppc64le ovirt-vmconsole-1.0.8-1.el8ev.noarch vdsm-hook-vmfex-dev-4.40.22-1.el8ev.noarch vdsm-hook-fcoe-4.40.22-1.el8ev.noarch vdsm-hook-ethtool-options-4.40.22-1.el8ev.noarch vdsm-hook-openstacknet-4.40.22-1.el8ev.noarch vdsm-common-4.40.22-1.el8ev.noarch vdsm-python-4.40.22-1.el8ev.noarch vdsm-jsonrpc-4.40.22-1.el8ev.noarch vdsm-api-4.40.22-1.el8ev.noarch vdsm-yajsonrpc-4.40.22-1.el8ev.noarch vdsm-4.40.22-1.el8ev.ppc64le vdsm-network-4.40.22-1.el8ev.ppc64le vdsm-http-4.40.22-1.el8ev.noarch vdsm-client-4.40.22-1.el8ev.noarch vdsm-hook-vhostmd-4.40.22-1.el8ev.noarch Any ideias to try? Thanks. On 26 Aug 2020, at 05:09, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Mon, Aug 24, 2020 at 1:30 AM Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Hello, I was using oVirt 4.3.10 with IBM AC922 (POWER9 / ppc64le) without any issues. Since I’ve moved to 4.4.1 I can’t add the AC922 machine to the engine anymore, it complains with the following error: The host CPU does not match the Cluster CPU type and is running in degraded mode. It is missing the following CPU flags: model_POWER9, powernv. Any ideia of what’s may be happening? The engine runs on x86_64, and I was using this way on 4.3.10. Machine info: timebase : 512000000 platform : PowerNV model : 8335-GTH machine : PowerNV 8335-GTH firmware : OPAL MMU : Radix Can you please provide the output of 'vdsm-client Host getCapabilities' on that host? Thanks, _______________________________________________ Users mailing list -- users@ovirt.org<mailto:users@ovirt.org> To unsubscribe send an email to users-leave@ovirt.org<mailto:users-leave@ovirt.org> Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/RV6FHRGKGPPZHV... _______________________________________________ Users mailing list -- users@ovirt.org<mailto:users@ovirt.org> To unsubscribe send an email to users-leave@ovirt.org<mailto:users-leave@ovirt.org> Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3DFMIR7764V6P4... _______________________________________________ Users mailing list -- users@ovirt.org<mailto:users@ovirt.org> To unsubscribe send an email to users-leave@ovirt.org<mailto:users-leave@ovirt.org> Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/MLSRBXRNNBPHFV... 

Hi Vinicius,
On 28 Aug 2020, at 01:17, Vinícius Ferrão via Users <users@ovirt.org> wrote:
On 27 Aug 2020, at 17:50, Vinícius Ferrão via Users <users@ovirt.org> wrote:
On 27 Aug 2020, at 16:48, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:39 PM Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
On 27 Aug 2020, at 16:26, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:23 PM Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:13 PM Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
On 27 Aug 2020, at 16:03, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <users@ovirt.org> wrote: Hi Michal,
On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
> On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org> wrote: > > Okay here we go Arik. > > With your insight I’ve done the following: > > # rpm -Va > > This showed what’s zeroed on the machine; since it was a lot of things, I’ve just gone crazy and done:
you should still have host deploy logs on the engine machine. it’s weird it succeeded, unless it somehow happened afterwards?
It only succeeded after my yum reinstall rampage.
> yum list installed | cut -f 1 -d " " > file > yum -y reinstall `cat file | xargs` > > Reinstalled everything. > > Everything worked as expected and I finally added the machine back to the cluster. It’s operational.
eh, I wouldn’t trust it much. did you run redeploy at least?
I’ve done a reinstall from the web interface of the engine. I can reinstall the host, there’s nothing running on it… gonna try a third format.
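For the record, a more surgical variant of that reinstall rampage is possible: instead of reinstalling every installed package, reinstall only the packages whose files rpm -Va flags as corrupted. This is just a rough sketch — it assumes the damage shows up as digest mismatches in the rpm -Va output, and the resulting package list should be reviewed before running the reinstall:

# collect the packages owning files whose digest no longer verifies
rpm -Va 2>/dev/null | awk '$1 ~ /5/ {print $NF}' \
  | xargs -r rpm -qf --qf '%{NAME}\n' | sort -u > /tmp/broken-pkgs

# reinstall only those packages
yum -y reinstall $(cat /tmp/broken-pkgs)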
> > Now I have another issue: I have 3 VMs that are ppc64le, and when trying to import them, the Hosted Engine identifies them as x86_64: > > <PastedGraphic-2.png> > > So… > > This appears to be a bug. Any idea on how to force it back to ppc64? I can’t manually force the import on the Hosted Engine since there are no buttons to do this…
how exactly did you import them? could be a bug indeed. we don’t support changing it as it doesn’t make sense, the guest can’t be converted
Yeah. I did the normal procedure: added the storage domain to the engine and clicked on “Import VM”. Immediately it was detected as x86_64.
Since I wasn’t able to upgrade my environment from 4.3.10 to 4.4.1 due to random errors when redeploying the engine with the backup from 4.3.10, I just reinstalled it, reconfigured everything and then imported the storage domains.
I don’t know where the architecture information is stored in the storage domain; I tried to search for some metadata files inside the domain but nothing came up. Is there a way to force this change? There must be a way.
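For what it’s worth, the VM definitions on a storage domain live inside its OVF_STORE disks; on a file-based (e.g. NFS) domain the payload of those volumes is a tar archive of per-VM .ovf files, and the architecture apparently gets derived from the OS description in there (other_linux_ppc64 in this case). A rough sketch of how one could peek at it, assuming the domain is mounted under /rhev/data-center/mnt (paths and layout may differ):

# find volumes whose content is a tar of OVF files (the OVF_STORE disks) and
# print the Description elements embedded in the OVFs they carry
for img in $(find /rhev/data-center/mnt -path '*/images/*' -type f 2>/dev/null); do
    tar -tf "$img" 2>/dev/null | grep -q '\.ovf$' || continue
    echo "== OVF_STORE candidate: $img"
    tar -xOf "$img" 2>/dev/null | grep -oE '<Description>[^<]+</Description>'
done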
I even tried to import the machine as x86_64, so I could delete the VM and just reattach the disks to a new one, effectively not losing the data, but…
<PastedGraphic-1.png>
Yeah, so something is broken. The check during the import appears to be OK, but the interface does not allow me to import it to the ppc64le machine, since it’s read as x86_64.
Could you please provide the output of the following query from the database: select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br';
Sure, there you go:
Etc/GMT</TimeZone><default_boot_sequence>9</default_boot_sequence><Generation>8</Generation><ClusterCompatibilityVersion>4.3</ClusterCompatibilityVersion><V mType>1</VmType><ResumeBehavior>AUTO_RESUME</ResumeBehavior><MinAllocatedMem>2730</MinAllocatedMem><IsStateless>false</IsStateless><IsRunAndPause>false</IsRu nAndPause><AutoStartup>false</AutoStartup><Priority>1</Priority><CreatedByUserId>6ea16f22-45d7-11ea-bd83-00163e518b7c</CreatedByUserId><MigrationSupport>0</M igrationSupport><IsBootMenuEnabled>false</IsBootMenuEnabled><IsSpiceFileTransferEnabled>true</IsSpiceFileTransferEnabled><IsSpiceCopyPasteEnabled>true</IsSpi ceCopyPasteEnabled><AllowConsoleReconnect>true</AllowConsoleReconnect><ConsoleDisconnectAction>LOCK_SCREEN</ConsoleDisconnectAction><CustomEmulatedMachine></ CustomEmulatedMachine><BiosType>0</BiosType><CustomCpuName></CustomCpuName><PredefinedProperties></PredefinedProperties><UserDefinedProperties></UserDefinedP roperties><MaxMemorySizeMb>16384</MaxMemorySizeMb><MultiQueuesEnabled>true</MultiQueuesEnabled><UseHostCpu>false</UseHostCpu><ClusterName>Blastoise</ClusterN ame><TemplateId>00000000-0000-0000-0000-000000000000</TemplateId><TemplateName>Blank</TemplateName><IsInitilized>true</IsInitilized><Origin>0</Origin><quota_ id>32644894-755e-4588-b967-8fb9dc327795</quota_id><DefaultDisplayType>2</DefaultDisplayType><TrustedService>false</TrustedService><OriginalTemplateId>0000000 0-0000-0000-0000-000000000000</OriginalTemplateId><OriginalTemplateName>Blank</OriginalTemplateName><CpuPinning></CpuPinning><UseLatestVersion>false</UseLate stVersion><StopTime>2020/08/20 17:52:35</StopTime><Section ovf:id="46ad1d80-2649-48f5-92e6-e5489d11d30c" ovf:required="false" xsi:type="ovf:OperatingSystemSe ction_Type"><Info>Guest Operating System</Info><Description>other_linux_ppc64</Description></Section><Section xsi:type="ovf:VirtualHardwareSection_Type"><Inf o>2 CPU, 4096 Memory</Info><System><vssd:VirtualSystemType>ENGINE 4.1.0.0</vssd:VirtualSystemType></System><Item><rasd:Caption>2 virtual cpu</rasd:Caption><r asd:Description>Number of virtual CPU</rasd:Description><rasd:InstanceId>1</rasd:InstanceId><rasd:ResourceType>3</rasd:ResourceType><rasd:num_of_sockets>2</r asd:num_of_sockets><rasd:cpu_per_socket>1</rasd:cpu_per_socket><rasd:threads_per_cpu>1</rasd:threads_per_cpu><rasd:max_num_of_vcpus>16</rasd:max_num_of_vcpus <rasd:VirtualQuantity>2</rasd:VirtualQuantity></Item><Item><rasd:Caption>4096 MB of memory</rasd:Caption><rasd:Description>Memory Size</rasd:Description><ra sd:InstanceId>2</rasd:InstanceId><rasd:ResourceType>4</rasd:ResourceType><rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits><rasd:VirtualQuantity>4096</ra sd:VirtualQuantity></Item><Item><rasd:Caption>energy.versatushpc.com.br_Disk1</rasd:Caption><rasd:InstanceId>b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:Insta nceId><rasd:ResourceType>17</rasd:ResourceType><rasd:HostResource>775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:HostResourc e><rasd:Parent>00000000-0000-0000-0000-000000000000</rasd:Parent><rasd:Template>00000000-0000-0000-0000-000000000000</rasd:Template><rasd:ApplicationList></r asd:ApplicationList><rasd:StorageId>d19456e4-0051-456e-b33c-57348a78c2e0</rasd:StorageId><rasd:StoragePoolId>6c54f91e-89bf-45b4-bc48-56e74c4efd5e</rasd:Stora gePoolId><rasd:CreationDate>2020/08/19 20:13:05</rasd:CreationDate><rasd:LastModified>1970/01/01 00:00:00</rasd:LastModified><rasd:last_modified_date>2020/08 /20 
18:37:41</rasd:last_modified_date><Type>disk</Type><Device>disk</Device><rasd:Address>{type=drive, bus=0, controller=1, target=0, unit=0}</rasd:Address>< BootOrder>1</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-775b24a9-6a32-431a-831f-4ac9b3b31152</Alias></Item><Item><rasd:Capt ion>Ethernet adapter on legacyservers</rasd:Caption><rasd:InstanceId>e6e37ae1-f263-4986-a039-e8e01e72d1f4</rasd:InstanceId><rasd:ResourceType>10</rasd:Resour ceType><rasd:OtherResourceType>legacyservers</rasd:OtherResourceType><rasd:ResourceSubType>3</rasd:ResourceSubType><rasd:Connection>legacyservers</rasd:Conne ction><rasd:Linked>true</rasd:Linked><rasd:Name>nic1</rasd:Name><rasd:ElementName>nic1</rasd:ElementName><rasd:MACAddress>56:6f:f0:b3:00:23</rasd:MACAddress> <rasd:speed>10000</rasd:speed><Type>interface</Type><Device>bridge</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><I sReadOnly>false</IsReadOnly><Alias>ua-e6e37ae1-f263-4986-a039-e8e01e72d1f4</Alias></Item><Item><rasd:Caption>USB Controller</rasd:Caption><rasd:InstanceId>3< /rasd:InstanceId><rasd:ResourceType>23</rasd:ResourceType><rasd:UsbPolicy>DISABLED</rasd:UsbPolicy></Item><Item><rasd:Caption>Graphical Controller</rasd:Capt ion><rasd:InstanceId>1440c749-728e-4a86-afc1-8237c6055fa5</rasd:InstanceId><rasd:ResourceType>20</rasd:ResourceType><rasd:VirtualQuantity>1</rasd:VirtualQuan tity><rasd:SinglePciQxl>false</rasd:SinglePciQxl><Type>video</Type><Device>vga</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</ IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-1440c749-728e-4a86-afc1-8237c6055fa5</Alias><SpecParams><vram>16384</vram></SpecParams></Item><Item><rasd:C aption>Graphical Framebuffer</rasd:Caption><rasd:InstanceId>603e7f0c-8d28-4c3e-bd90-c5685b752100</rasd:InstanceId><rasd:ResourceType>26</rasd:ResourceType><T ype>graphics</Type><Device>vnc</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>< /Alias></Item><Item><rasd:Caption>CDROM</rasd:Caption><rasd:InstanceId>3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</rasd:InstanceId><rasd:ResourceType>15</rasd:Reso urceType><Type>disk</Type><Device>cdrom</Device><rasd:Address>{type=drive, bus=0, controller=0, target=0, unit=2}</rasd:Address><BootOrder>2</BootOrder><IsPl ugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</Alias><SpecParams><path>CentOS-8.1.1911-x86_64-boot.iso</p ath></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>66f3a2b8-d2c5-4032-9f10-8742d65a0a3e</rasd:InstanceId><Type>controller </Type><Device>scsi</Device><rasd:Address>{type=spapr-vio}</rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Al ias></Alias><SpecParams><index>0</index></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>e065acb2-e7db-4f55-a1df-385f19299b d0</rasd:InstanceId><Type>rng</Type><Device>virtio</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false< /IsReadOnly><Alias>ua-e065acb2-e7db-4f55-a1df-385f19299bd0</Alias><SpecParams><source>urandom</source></SpecParams></Item><Item><rasd:ResourceType>0</rasd:Re
<BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-7b4c4ef6-2a9a-4120-b838-3127db0fd703</Alias><SpecParams><model>vir tio</model></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>5aade6c7-8f77-4fea-a5de-66350b214935</rasd:InstanceId><Type>con
46ad1d80-2649-48f5-92e6-e5489d11d30c | energy.versatushpc.com.br <http://energy.versatushpc.com.br/> | VM | 1 | | d19456e4-0051-456e-b33c-57348a78c2e0 | <?xml version="1.0" encoding="UTF-8"?><ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/ <http://schemas.dmtf.org/ovf/envelope/1/>" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim <http://schemas.dmtf.org/wbem/wscim/1/cim> -schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingDa... <http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData>" xmlns:xsi="http://ww <http://ww/> w.w3.org/2001/XMLSchema-instance <http://w.w3.org/2001/XMLSchema-instance>" ovf:version="4.1.0.0"><References><File ovf:href="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af " ovf:id="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="512" ovf:description="Active VM" ovf:disk_storage_type="IMAGE" ovf:cinder_volume_type=""></File></R eferences><NetworkSection><Info>List of networks</Info><Network ovf:name="legacyservers"></Network></NetworkSection><Section xsi:type="ovf:DiskSection_Type"> <Info>List of Virtual Disks</Info><Disk ovf:diskId="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="40" ovf:actual_size="1" ovf:vm_snapshot_id="6de58683-c586 -4e97-b0e8-ee7ee3baf754" ovf:parentRef="" ovf:fileRef="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:format="http://www.vmwa <http://www.vmwa/> re.com/specifications/vmdk.html#sparse <http://re.com/specifications/vmdk.html#sparse>" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:disk-interface="VirtIO_SCSI" ovf:read-only="false" ovf:shareable ="false" ovf:boot="true" ovf:pass-discard="false" ovf:disk-alias="energy.versatushpc.com.br_Disk1" ovf:disk-description="" ovf:wipe-after-delete="false"></Di sk></Section><Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type"><Name>energy.versatushpc.com.br <http://energy.versatushpc.com.br/></Name><Description>Holds Kosen backend and frontend prod services (nginx + docker)</Description><Comment></Comment><CreationDate>2020/08/19 20:11:33</CreationDate><ExportDate>2020/08/20 18:37:41</ExportDate><Delet eProtected>false</DeleteProtected><SsoMethod>guest_agent</SsoMethod><IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>1</NumOfIoThreads><TimeZone sourceType><rasd:InstanceId>7b4c4ef6-2a9a-4120-b838-3127db0fd703</rasd:InstanceId><Type>balloon</Type><Device>memballoon</Device><rasd:Address></rasd:Address troller</Type><Device>virtio-scsi</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Ali as></Alias><SpecParams><ioThreadId></ioThreadId></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>4d4d7bfd-b1e8-45c3-a5e8-7e 0b7773bbf2</rasd:InstanceId><Type>controller</Type><Device>virtio-serial</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlu gged><IsReadOnly>false</IsReadOnly><Alias>58ca7b19-0071-00c0-01d6-000000000212</Alias></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>9 cea63da-7afd-41d4-925f-369f993b280f</rasd:InstanceId><Type>controller</Type><Device>usb</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugg ed>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index><model>nec-xhci</model></SpecParams></Item></Section><Section xs i:type="ovf:SnapshotsSection_Type"><Snapshot 
ovf:id="6de58683-c586-4e97-b0e8-ee7ee3baf754"><Type>ACTIVE</Type><Description>Active VM</Description><CreationDa te>2020/08/19 20:11:33</CreationDate></Snapshot></Section></Content></ovf:Envelope> | | 0
Thank you!
thanks

so yeah - we may have an issue with that operating system 'other_linux_ppc64' having the same name as 'other_linux' in our os-info configuration

as a possible workaround, assuming all those unregistered VMs are from clusters with the same architecture, you can try to override the architecture with:

update unregistered_ovf_of_entities set architecture = 2;

(a scoped variant of that update is sketched right below)
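A slightly safer variant, in case not every unregistered VM should become ppc64, is to scope the update by VM name. This is a sketch only — the 'engine' database name and local postgres access are assumptions, and architecture = 2 is the value this thread shows working for ppc64:

# run on the engine machine; adjust the name list to the affected VMs
sudo -u postgres psql engine -c "
  update unregistered_ovf_of_entities
     set architecture = 2
   where entity_name in ('energy.versatushpc.com.br');"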
Wooha!!!
engine=# update unregistered_ovf_of_entities set architecture = 2; UPDATE 8
<PastedGraphic-2.png>
<PastedGraphic-3.png>
Worked and the VMs are now imported.
But… hahaha.
I have another issue: none of the three VMs starts now. Perhaps I’ll reinstall the host for the third time as recommended by Michal; anyway, here are the logs that I was able to fetch during the failed power-on process:
ON THE ENGINE:
==> /var/log/ovirt-engine/engine.log <== 2020-08-27 16:35:59,437-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5e701801 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5e701801 2020-08-27 16:35:59,500-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Running command: RunVmCommand internal: false. Entities affected : ID: ccccd416-c6b4-4c95-8372-417480be5365 Type: VMAction group RUN_VM with role type USER 2020-08-27 16:35:59,506-03 INFO [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Emulated machine 'pseries-rhel8.2.0' which is different than that of the cluster is set for 'jupyter.nix.versatushpc.com.br <http://jupyter.nix.versatushpc.com.br/>'(ccccd416-c6b4-4c95-8372-417480be5365) 2020-08-27 16:35:59,528-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@14322872'}), log id: 7709ba81 2020-08-27 16:35:59,530-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 7709ba81 2020-08-27 16:35:59,533-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br <http://jupyter.nix.versatushpc.com.br/>]'}), log id: 4a0db679 2020-08-27 16:35:59,534-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateBrokerVDSCommand(HostName = rhvpower.local.versatushpc.com.br <http://rhvpower.local.versatushpc.com.br/>, CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br <http://jupyter.nix.versatushpc.com.br/>]'}), log id: 25bc7e6e 2020-08-27 16:35:59,548-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0 <http://ovirt.org/vm/tune/1.0>" xmlns:ovirt-vm="http://ovirt.org/vm/1.0 <http://ovirt.org/vm/1.0>" 
xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0 <http://libvirt.org/schemas/domain/qemu/1.0>"> <name>jupyter.nix.versatushpc.com.br <http://jupyter.nix.versatushpc.com.br/></name> <uuid>ccccd416-c6b4-4c95-8372-417480be5365</uuid> <memory>536870912</memory> <currentMemory>536870912</currentMemory> <vcpu current="128">384</vcpu> <clock offset="variable" adjustment="0"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> </clock> <cpu mode="host-model"> <model>power9</model> <topology cores="16" threads="4" sockets="6"/> <numa> <cell id="0" cpus="0-383" memory="536870912"/> </numa> </cpu> <cputune/> <qemu:capabilities> <qemu:add capability="blockdev"/> <qemu:add capability="incremental-backup"/> </qemu:capabilities> <devices> <input type="tablet" bus="usb"/> <channel type="unix"> <target type="virtio" name="ovirt-guest-agent.0"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target type="virtio" name="org.qemu.guest_agent.0"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.org.qemu.guest_agent.0"/> </channel> <emulator text="/usr/bin/qemu-system-ppc64"/> <controller type="scsi" model="ibmvscsi" index="0"/> <rng model="virtio"> <backend model="random">/dev/urandom</backend> <alias name="ua-1e18aea0-076a-40d0-9b85-21ac6049a94d"/> </rng> <controller type="usb" model="nec-xhci" index="0"> <alias name="ua-47e67d9f-a191-4dc0-9c09-b2db9f1d373e"/> </controller> <controller type="virtio-serial" index="0" ports="16"> <alias name="ua-4d92fb2f-aaf6-465c-8571-e49e1d12191d"/> </controller> <watchdog model="i6300esb" action="none"> <alias name="ua-7b756cc3-c9ec-4b79-84ef-d6ad15021f1a"/> </watchdog> <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"> <listen type="network" network="vdsm-ovirtmgmt"/> </graphics> <controller type="scsi" model="virtio-scsi" index="1"> <alias name="ua-8e146e76-e038-4f8a-a526-e7e1c626f54e"/> </controller> <memballoon model="virtio"> <stats period="5"/> <alias name="ua-d8d37c06-de66-4912-bf8d-fc1017c85c68"/> </memballoon> <video> <model type="vga" vram="16384" heads="1"/> <alias name="ua-e96e6050-b1aa-4664-a856-8df923e3dc66"/> </video> <controller type="scsi" index="0"> <address type="spapr-vio"/> </controller> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="servers"/> <driver queues="4" name="vhost"/> <alias name="ua-152c3f8a-69d2-420f-8b6a-c1fb4a11594f"/> <mac address="56:6f:1a:f4:00:03"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="nfs"/> <driver queues="4" name="vhost"/> <alias name="ua-1369da6c-4f9b-4fe3-9f45-7b37ecb34ac2"/> <mac address="56:6f:1a:f4:00:04"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <disk type="file" device="cdrom" snapshot="no"> <driver name="qemu" type="raw" error_policy="report"/> <source file="" startupPolicy="optional"> <seclabel model="dac" type="none" relabel="no"/> </source> <target dev="sdc" bus="scsi"/> <readonly/> <alias name="ua-2d6db7ca-2fe1-4af4-9741-7b5332805d94"/> <address bus="0" controller="0" unit="2" type="drive" target="0"/> </disk> <disk snapshot="no" type="file" device="disk"> <target dev="sda" bus="scsi"/> <source 
file="/rhev/data-center/804e857c-461d-4642-86c4-7ff4a5e7da47/d19456e4-0051-456e-b33c-57348a78c2e0/images/8100a756-92a7-4160-9a31-5a843810cb61/0183b177-71b5-4c0e-b7d3-becc5da152ce"> <seclabel model="dac" type="none" relabel="no"/> </source> <driver name="qemu" io="threads" type="raw" error_policy="stop" cache="none"/> <alias name="ua-8100a756-92a7-4160-9a31-5a843810cb61"/> <address bus="0" controller="1" unit="0" type="drive" target="0"/> <boot order="1"/> <serial>8100a756-92a7-4160-9a31-5a843810cb61</serial> </disk> <lease> <key>ccccd416-c6b4-4c95-8372-417480be5365</key> <lockspace>d19456e4-0051-456e-b33c-57348a78c2e0</lockspace> <target offset="24117248" path="/rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm/d19456e4-0051-456e-b33c-57348a78c2e0/dom_md/xleases"/> </lease> </devices> <os> <type arch="ppc64" machine="pseries-rhel8.2.0">hvm</type> </os> <metadata> <ovirt-tune:qos/> <ovirt-vm:vm> <ovirt-vm:minGuaranteedMemoryMb type="int">524288</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="56:6f:1a:f4:00:04"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device mac_address="56:6f:1a:f4:00:03"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sda"> <ovirt-vm:poolID>804e857c-461d-4642-86c4-7ff4a5e7da47</ovirt-vm:poolID> <ovirt-vm:volumeID>0183b177-71b5-4c0e-b7d3-becc5da152ce</ovirt-vm:volumeID> <ovirt-vm:imageID>8100a756-92a7-4160-9a31-5a843810cb61</ovirt-vm:imageID> <ovirt-vm:domainID>d19456e4-0051-456e-b33c-57348a78c2e0</ovirt-vm:domainID> </ovirt-vm:device> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior> </ovirt-vm:vm> </metadata> </domain>
2020-08-27 16:35:59,566-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateBrokerVDSCommand, return: , log id: 25bc7e6e 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: 4a0db679 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,576-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] EVENT_ID: USER_STARTED_VM(153), VM jupyter.nix.versatushpc.com.br <http://jupyter.nix.versatushpc.com.br/> was started by admin@internal-authz (Host: rhvpower.local.versatushpc.com.br <http://rhvpower.local.versatushpc.com.br/>). 2020-08-27 16:36:01,803-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365' was reported as Down on VDS '394e0e68-60f5-42b3-aec4-5d8368efedd1'(rhvpower.local.versatushpc.com.br <http://rhvpower.local.versatushpc.com.br/>) 2020-08-27 16:36:01,804-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] START, DestroyVDSCommand(HostName = rhvpower.local.versatushpc.com.br <http://rhvpower.local.versatushpc.com.br/>, DestroyVmVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] FINISH, DestroyVDSCommand, return: , log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br <http://jupyter.nix.versatushpc.com.br/>) moved from 'WaitForLaunch' --> 'Down' 2020-08-27 16:36:02,024-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-13) [] EVENT_ID: VM_DOWN_ERROR(119), VM jupyter.nix.versatushpc.com.br <http://jupyter.nix.versatushpc.com.br/> is down with error. Exit message: Hook Error: (b'Traceback (most recent call last):\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 124, in <module>\n main(VhostmdConf())\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 47, in __init__\n dom = minidom.parse(path)\n File "/usr/lib64/python3.6/xml/dom/minidom.py", line 1958, in parse\n return expatbuilder.parse(file)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 911, in parse\n result = builder.parseFile(fp)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 211, in parseFile\n parser.Parse("", True)\nxml.parsers.expat.ExpatError: no element found: line 1, column 0\n',).
yeah, I never encountered this issue before - could be a consequence of an improper deployment of that host
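In case it helps narrow that down: the traceback above is the before_vm_start/50_vhostmd hook choking on an empty file it tries to parse with minidom. A minimal sketch of what could be checked on the host, assuming the hook reads its configuration from /etc/vhostmd/vhostmd.conf (that path is an assumption here) and given that vdsm-hook-vhostmd is in the installed package list:

# ls -l /etc/vhostmd/vhostmd.conf      # assumed config path; a zero-byte file here would match "no element found: line 1, column 0"
# rpm -V vdsm-hook-vhostmd             # verify the hook's own files aren't among the zeroed ones
# yum remove vdsm-hook-vhostmd         # only if vhostmd metrics are not actually needed on this host

Removing the hook is only a workaround, of course; if the zeroed-files problem is at the filesystem level, a redeploy is still the safer path.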
Starting reinstall right now.
But I have a question: is this documentation right?
For Red Hat Enterprise Linux 8 hosts, little endian, on IBM POWER9 hardware:
# subscription-manager repos \
    --disable='*' \
    --enable=rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms \
    --enable=advanced-virt-for-rhel-8-ppc64le-rpms \
    --enable=ansible-2.9-for-rhel-8-ppc64le-rpms
I think it’s missing:
    --enable=rhel-8-for-ppc64le-baseos-rpms \
    --enable=rhel-8-for-ppc64le-appstream-rpms
This can be found here: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_cockpit_web_interface/installing_the_self-hosted_engine_deployment_host_she_cockpit_deploy#Installing_Red_Hat_Enterprise_Linux_Hosts_SHE_deployment_host
I assume that information is in fact missing from this documentation.
Double-checked: it's missing. It's impossible to reinstall the machine with only these repositories. I'll open another bug.
Yes, please. When it says to disable '*' and explicitly lists what to enable, it does need to include the base channels, for sure.
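For reference, a combined command with the two missing channels added would look roughly like this (same repo IDs as quoted above; confirm the exact IDs with 'subscription-manager repos --list' before relying on it):

# subscription-manager repos \
    --disable='*' \
    --enable=rhel-8-for-ppc64le-baseos-rpms \
    --enable=rhel-8-for-ppc64le-appstream-rpms \
    --enable=rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms \
    --enable=advanced-virt-for-rhel-8-ppc64le-rpms \
    --enable=ansible-2.9-for-rhel-8-ppc64le-rpms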
2020-08-27 16:36:02,025-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] add VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br <http://jupyter.nix.versatushpc.com.br/>) to rerun treatment 2020-08-27 16:36:02,029-03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-13) [] Rerun VM 'ccccd416-c6b4-4c95-8372-417480be5365'. Called from VDS 'rhvpower.local.versatushpc.com.br <http://rhvpower.local.versatushpc.com.br/>' 2020-08-27 16:36:02,041-03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM jupyter.nix.versatushpc.com.br <http://jupyter.nix.versatushpc.com.br/> on Host rhvpower.local.versatushpc.com.br <http://rhvpower.local.versatushpc.com.br/>. 2020-08-27 16:36:02,066-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5480ad0b 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5480ad0b 2020-08-27 16:36:02,093-03 WARN [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Validation of action 'RunVm' failed for user admin@internal-authz. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS 2020-08-27 16:36:02,093-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,101-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM jupyter.nix.versatushpc.com.br <http://jupyter.nix.versatushpc.com.br/> (User: admin@internal-authz). 2020-08-27 16:36:02,105-03 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (EE-ManagedThreadFactory-engine-Thread-145180) [71c52499] Running command: ProcessDownVmCommand internal: true.
ON THE HOST:
/var/log/messages Aug 27 16:36:01 rhvpower python3[73682]: detected unhandled Python exception in '/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd' Aug 27 16:36:01 rhvpower abrt-server[73684]: Deleting problem directory Python3-2020-08-27-16:36:01-73682 (dup of Python3-2020-08-27-16:33:11-73428) Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Activating service name='org.freedesktop.problems' requested by ':1.183' (uid=0 pid=73691 comm="/usr/libexec/platform-python /usr/bin/abrt-action-" label="system_u:system_r:abrt_t:s0-s0:c0.c1023") (using servicehelper) Aug 27 16:36:01 rhvpower dbus-daemon[73694]: [system] Failed to reset fd limit before activating service: org.freedesktop.DBus.Error.AccessDenied: Failed to restore old fd limit: Operation not permitted Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Successfully activated service 'org.freedesktop.problems' Aug 27 16:36:02 rhvpower abrt-server[73684]: /bin/sh: reporter-systemd-journal: command not found
Regarding the import problem: that really is a bug, right? I can describe it on Red Hat Bugzilla if needed; it's the least I can do in return for the help. Is that OK?
yes, please do
There you go: https://bugzilla.redhat.com/show_bug.cgi?id=1873322
Thank you guys, I will report back after the reinstallation of the host.
Reinstall now went fine.
Now I've found something extremely bad when trying to run the VMs: the metadata appears to be corrupted.
First it complained about the CPUs; I changed it in the interface just to refresh the metadata: VM jupyter.nix.versatushpc.com.br is down with error. Exit message: internal error: process exited while connecting to monitor: 2020-08-27T22:48:59.533367Z qemu-kvm: warning: Number of hotpluggable cpus requested (384) exceeds the recommended cpus supported by KVM (128)
Interesting. It shouldn't cause any harm, but it's worth a follow-up.
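For what it's worth, the 384 seems to come straight from the generated domain XML quoted earlier in this thread: <vcpu current="128">384</vcpu> together with <topology cores="16" threads="4" sockets="6"/>, i.e. 16 cores × 4 threads × 6 sockets = 384 maximum (hot-pluggable) vCPUs, with only 128 actually online; so the warning is about the hotplug ceiling rather than the running vCPU count.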
2020-08-27T22:48:59.537530Z qemu-kvm: -numa node,nodeid=0,cpus=0-383,mem=524288: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead 2020-08-27T22:48:59.833812Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off.
What is the cluster level right now? Is it 4.3? There was a breaking change in machine types between 4.3 and 4.4, and an incompatible P9 firmware change. It should work at the 4.4 cluster level; just also make sure you have the latest P9 firmware.
After this the error changed to: VM jupyter.nix.versatushpc.com.br is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T22:49:48.876424Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off.
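If it helps to confirm whether this is a KVM/firmware limitation independent of oVirt, it can be reproduced with a bare qemu-kvm run on the host. This is only a rough sketch (machine type taken from the log above; the memory size and remaining flags are arbitrary, and no disks are attached), just to see whether the pseries-rhel8.2.0 machine initializes once cap-ccf-assist is off, as the error message suggests:

# /usr/libexec/qemu-kvm -machine help | grep pseries
# /usr/libexec/qemu-kvm -machine pseries-rhel8.2.0,accel=kvm -m 1024 -S -display none -monitor stdio
# /usr/libexec/qemu-kvm -machine pseries-rhel8.2.0,accel=kvm,cap-ccf-assist=off -m 1024 -S -display none -monitor stdio

If the second command fails the same way and the third one reaches the monitor prompt, the problem is in the host's KVM/firmware capability rather than in the VM definition.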
Then I tried reducing the RAM, but it gave me a warning (not an error): VM jupyter.nix.versatushpc.com.br was configured with 524288MiB of memory while the recommended value range is 256MiB - 65536MiB
That maximum doesn't make sense. Maybe still a problem with the wrong OS type?
I've lowered it to 65536MiB, and now it complains about multiple SCSI controllers: VM jupyter.nix.versatushpc.com.br is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'.
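That error is consistent with the domain XML quoted earlier in the thread, which ends up with two SCSI controllers at index 0 (the model="ibmvscsi" one and the one with the spapr-vio address) plus a virtio-scsi controller at index 1. A quick way to see what the engine actually sent, assuming the default vdsm log location on the host:

# grep -o '<controller type="scsi"[^>]*>' /var/log/vdsm/vdsm.log | sort | uniq -c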
So I changed the disk type from VirtIO-SCSI to VirtIO and then back from VirtIO to VirtIO-SCSI, and part of the first error came back: VM jupyter.nix.versatushpc.com.br is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T23:09:51.753960Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off.
yeah, you won’t get over it with (I assume) the 4.3 machine type
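One way to see which machine type the VM is actually carrying after the import (the exported OVF in this thread does have CustomEmulatedMachine/CustomCpuName fields) is to query it over the REST API. A sketch only: the engine URL and credentials are placeholders, and custom_emulated_machine / custom_cpu_model are the element names one would expect in the 4.4 API, so adjust if they differ:

# curl -s -k -u admin@internal:PASSWORD \
    'https://engine.example.com/ovirt-engine/api/vms?search=name%3Djupyter*' \
    | grep -E 'custom_emulated_machine|custom_cpu_model'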
Now I've set the machine's custom cluster emulation to pseries-7.6.0 and the SCSI error is back. And now I'm stuck with it…
but indeed, removing the now-broken mitigations and using plain 7.6.0 is fine…
VM jupyter.nix.versatushpc.com.br is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'.
… but this one is not :) So this is after import, right? Can you confirm the same problem with a newly created VM?
I'm thinking of dumping the VM entirely and reimporting the disks… but I created another one, with just plain settings, to see if anything boots on this host, and the result was bad: VM ppc64le is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T23:15:44.669298Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=8192: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead 2020-08-27T23:15:44.691077Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off.
So, any ideas? In the past I had some issues with SXXM in pseries-7.6.0; I'm not sure if it's the same issue all over again.
Likely yes. Again, once you move to the 4.4 cluster level, the new el8 machine type uses different Spectre mitigations and should work… Thanks, michal
Thanks,
Thanks,
Thanks, michal

Etc/GMT</TimeZone><default_boot_sequence>9</default_boot_sequence><Generation>8</Generation><ClusterCompatibilityVersion>4.3</ClusterCompatibilityVersion><V mType>1</VmType><ResumeBehavior>AUTO_RESUME</ResumeBehavior><MinAllocatedMem>2730</MinAllocatedMem><IsStateless>false</IsStateless><IsRunAndPause>false</IsRu nAndPause><AutoStartup>false</AutoStartup><Priority>1</Priority><CreatedByUserId>6ea16f22-45d7-11ea-bd83-00163e518b7c</CreatedByUserId><MigrationSupport>0</M igrationSupport><IsBootMenuEnabled>false</IsBootMenuEnabled><IsSpiceFileTransferEnabled>true</IsSpiceFileTransferEnabled><IsSpiceCopyPasteEnabled>true</IsSpi ceCopyPasteEnabled><AllowConsoleReconnect>true</AllowConsoleReconnect><ConsoleDisconnectAction>LOCK_SCREEN</ConsoleDisconnectAction><CustomEmulatedMachine></ CustomEmulatedMachine><BiosType>0</BiosType><CustomCpuName></CustomCpuName><PredefinedProperties></PredefinedProperties><UserDefinedProperties></UserDefinedP roperties><MaxMemorySizeMb>16384</MaxMemorySizeMb><MultiQueuesEnabled>true</MultiQueuesEnabled><UseHostCpu>false</UseHostCpu><ClusterName>Blastoise</ClusterN ame><TemplateId>00000000-0000-0000-0000-000000000000</TemplateId><TemplateName>Blank</TemplateName><IsInitilized>true</IsInitilized><Origin>0</Origin><quota_ id>32644894-755e-4588-b967-8fb9dc327795</quota_id><DefaultDisplayType>2</DefaultDisplayType><TrustedService>false</TrustedService><OriginalTemplateId>0000000 0-0000-0000-0000-000000000000</OriginalTemplateId><OriginalTemplateName>Blank</OriginalTemplateName><CpuPinning></CpuPinning><UseLatestVersion>false</UseLate stVersion><StopTime>2020/08/20 17:52:35</StopTime><Section ovf:id="46ad1d80-2649-48f5-92e6-e5489d11d30c" ovf:required="false" xsi:type="ovf:OperatingSystemSe ction_Type"><Info>Guest Operating System</Info><Description>other_linux_ppc64</Description></Section><Section xsi:type="ovf:VirtualHardwareSection_Type"><Inf o>2 CPU, 4096 Memory</Info><System><vssd:VirtualSystemType>ENGINE 4.1.0.0</vssd:VirtualSystemType></System><Item><rasd:Caption>2 virtual cpu</rasd:Caption><r asd:Description>Number of virtual CPU</rasd:Description><rasd:InstanceId>1</rasd:InstanceId><rasd:ResourceType>3</rasd:ResourceType><rasd:num_of_sockets>2</r asd:num_of_sockets><rasd:cpu_per_socket>1</rasd:cpu_per_socket><rasd:threads_per_cpu>1</rasd:threads_per_cpu><rasd:max_num_of_vcpus>16</rasd:max_num_of_vcpus <rasd:VirtualQuantity>2</rasd:VirtualQuantity></Item><Item><rasd:Caption>4096 MB of memory</rasd:Caption><rasd:Description>Memory Size</rasd:Description><ra sd:InstanceId>2</rasd:InstanceId><rasd:ResourceType>4</rasd:ResourceType><rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits><rasd:VirtualQuantity>4096</ra sd:VirtualQuantity></Item><Item><rasd:Caption>energy.versatushpc.com.br_Disk1</rasd:Caption><rasd:InstanceId>b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:Insta nceId><rasd:ResourceType>17</rasd:ResourceType><rasd:HostResource>775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:HostResourc e><rasd:Parent>00000000-0000-0000-0000-000000000000</rasd:Parent><rasd:Template>00000000-0000-0000-0000-000000000000</rasd:Template><rasd:ApplicationList></r asd:ApplicationList><rasd:StorageId>d19456e4-0051-456e-b33c-57348a78c2e0</rasd:StorageId><rasd:StoragePoolId>6c54f91e-89bf-45b4-bc48-56e74c4efd5e</rasd:Stora gePoolId><rasd:CreationDate>2020/08/19 20:13:05</rasd:CreationDate><rasd:LastModified>1970/01/01 00:00:00</rasd:LastModified><rasd:last_modified_date>2020/08 /20 
18:37:41</rasd:last_modified_date><Type>disk</Type><Device>disk</Device><rasd:Address>{type=drive, bus=0, controller=1, target=0, unit=0}</rasd:Address>< BootOrder>1</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-775b24a9-6a32-431a-831f-4ac9b3b31152</Alias></Item><Item><rasd:Capt ion>Ethernet adapter on legacyservers</rasd:Caption><rasd:InstanceId>e6e37ae1-f263-4986-a039-e8e01e72d1f4</rasd:InstanceId><rasd:ResourceType>10</rasd:Resour ceType><rasd:OtherResourceType>legacyservers</rasd:OtherResourceType><rasd:ResourceSubType>3</rasd:ResourceSubType><rasd:Connection>legacyservers</rasd:Conne ction><rasd:Linked>true</rasd:Linked><rasd:Name>nic1</rasd:Name><rasd:ElementName>nic1</rasd:ElementName><rasd:MACAddress>56:6f:f0:b3:00:23</rasd:MACAddress> <rasd:speed>10000</rasd:speed><Type>interface</Type><Device>bridge</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><I sReadOnly>false</IsReadOnly><Alias>ua-e6e37ae1-f263-4986-a039-e8e01e72d1f4</Alias></Item><Item><rasd:Caption>USB Controller</rasd:Caption><rasd:InstanceId>3< /rasd:InstanceId><rasd:ResourceType>23</rasd:ResourceType><rasd:UsbPolicy>DISABLED</rasd:UsbPolicy></Item><Item><rasd:Caption>Graphical Controller</rasd:Capt ion><rasd:InstanceId>1440c749-728e-4a86-afc1-8237c6055fa5</rasd:InstanceId><rasd:ResourceType>20</rasd:ResourceType><rasd:VirtualQuantity>1</rasd:VirtualQuan tity><rasd:SinglePciQxl>false</rasd:SinglePciQxl><Type>video</Type><Device>vga</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</ IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-1440c749-728e-4a86-afc1-8237c6055fa5</Alias><SpecParams><vram>16384</vram></SpecParams></Item><Item><rasd:C aption>Graphical Framebuffer</rasd:Caption><rasd:InstanceId>603e7f0c-8d28-4c3e-bd90-c5685b752100</rasd:InstanceId><rasd:ResourceType>26</rasd:ResourceType><T ype>graphics</Type><Device>vnc</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>< /Alias></Item><Item><rasd:Caption>CDROM</rasd:Caption><rasd:InstanceId>3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</rasd:InstanceId><rasd:ResourceType>15</rasd:Reso urceType><Type>disk</Type><Device>cdrom</Device><rasd:Address>{type=drive, bus=0, controller=0, target=0, unit=2}</rasd:Address><BootOrder>2</BootOrder><IsPl ugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</Alias><SpecParams><path>CentOS-8.1.1911-x86_64-boot.iso</p ath></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>66f3a2b8-d2c5-4032-9f10-8742d65a0a3e</rasd:InstanceId><Type>controller </Type><Device>scsi</Device><rasd:Address>{type=spapr-vio}</rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Al ias></Alias><SpecParams><index>0</index></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>e065acb2-e7db-4f55-a1df-385f19299b d0</rasd:InstanceId><Type>rng</Type><Device>virtio</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false< /IsReadOnly><Alias>ua-e065acb2-e7db-4f55-a1df-385f19299bd0</Alias><SpecParams><source>urandom</source></SpecParams></Item><Item><rasd:ResourceType>0</rasd:Re
<BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-7b4c4ef6-2a9a-4120-b838-3127db0fd703</Alias><SpecParams><model>vir tio</model></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>5aade6c7-8f77-4fea-a5de-66350b214935</rasd:InstanceId><Type>con
Hello! On 28 Aug 2020, at 14:39, Michal Skrivanek <michal.skrivanek@redhat.com<mailto:michal.skrivanek@redhat.com>> wrote: Hi Vinicius, On 28 Aug 2020, at 01:17, Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: On 27 Aug 2020, at 17:50, Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: On 27 Aug 2020, at 16:48, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 10:39 PM Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: On 27 Aug 2020, at 16:26, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 10:23 PM Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 10:13 PM Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: On 27 Aug 2020, at 16:03, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Hi Michal, On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skrivanek@redhat.com<mailto:michal.skrivanek@redhat.com>> wrote: On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Okay here we go Arik. With your insight I’ve done the following: # rpm -Va This showed what’s zeroed on the machine, since it was a lot of things, I’ve just gone crazy and done: you should still have host deploy logs on the engine machine. it’s weird it succeeded, unless it somehow happened afterwards? It only succeeded my yum reinstall rampage. yum list installed | cut -f 1 -d " " > file yum -y reinstall `cat file | xargs` Reinstalled everything. Everything worked as expected and I finally added the machine back to the cluster. It’s operational. eh, I wouldn’t trust it much. did you run redeploy at least? I’ve done reinstall on the web interface of the engine. I can reinstall the host, there’s nothing running on it… gonna try a third format. Now I’ve another issue, I have 3 VM’s that are ppc64le, when trying to import them, the Hosted Engine identifies them as x86_64: <PastedGraphic-2.png> So… This appears to be a bug. Any ideia on how to force it back to ppc64? I can’t manually force the import on the Hosted Engine since there’s no buttons to do this… how exactly did you import them? could be a bug indeed. we don’t support changing it as it doesn’t make sense, the guest can’t be converted Yeah. I done the normal procedure, added the storage domain to the engine and clicked on “Import VM”. Immediately it was detected as x86_64. Since I wasn’t able to upgrade my environment from 4.3.10 to 4.4.1 due to random errors when redeploying the engine with the backup from 4.3.10, I just reinstalled it, reconfigured everything and them imported the storage domains. I don’t know where the information about architecture is stored in the storage domain, I tried to search for some metadata files inside the domain but nothing come up. Is there a way to force this change? It must be a way. I even tried to import the machine as x86_64. So I can delete the VM and just reattach the disks in a new only, effectively not losing the data, but… <PastedGraphic-1.png> Yeah, so something is broken. The check during the import appears to be OK, but the interface does not me allow to import it to the ppc64le machine, since it’s read as x86_64. 
Could you please provide the output of the following query from the database: select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br<http://energy.versatushpc.com.br/>'; Sure, there you go: 46ad1d80-2649-48f5-92e6-e5489d11d30c | energy.versatushpc.com.br<http://energy.versatushpc.com.br/> | VM | 1 | | d19456e4-0051-456e-b33c-57348a78c2e0 | <?xml version="1.0" encoding="UTF-8"?><ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim -schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingDa..." xmlns:xsi="http://ww<http://ww/> w.w3.org/2001/XMLSchema-instance<http://w.w3.org/2001/XMLSchema-instance>" ovf:version="4.1.0.0"><References><File ovf:href="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af " ovf:id="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="512" ovf:description="Active VM" ovf:disk_storage_type="IMAGE" ovf:cinder_volume_type=""></File></R eferences><NetworkSection><Info>List of networks</Info><Network ovf:name="legacyservers"></Network></NetworkSection><Section xsi:type="ovf:DiskSection_Type"> <Info>List of Virtual Disks</Info><Disk ovf:diskId="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="40" ovf:actual_size="1" ovf:vm_snapshot_id="6de58683-c586 -4e97-b0e8-ee7ee3baf754" ovf:parentRef="" ovf:fileRef="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:format="http://www.vmwa<http://www.vmwa/> re.com/specifications/vmdk.html#sparse<http://re.com/specifications/vmdk.html#sparse>" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:disk-interface="VirtIO_SCSI" ovf:read-only="false" ovf:shareable ="false" ovf:boot="true" ovf:pass-discard="false" ovf:disk-alias="energy.versatushpc.com.br_Disk1" ovf:disk-description="" ovf:wipe-after-delete="false"></Di sk></Section><Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type"><Name>energy.versatushpc.com.br<http://energy.versatushpc.com.br/></Name><Description>Holds Kosen backend and frontend prod services (nginx + docker)</Description><Comment></Comment><CreationDate>2020/08/19 20:11:33</CreationDate><ExportDate>2020/08/20 18:37:41</ExportDate><Delet eProtected>false</DeleteProtected><SsoMethod>guest_agent</SsoMethod><IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>1</NumOfIoThreads><TimeZone sourceType><rasd:InstanceId>7b4c4ef6-2a9a-4120-b838-3127db0fd703</rasd:InstanceId><Type>balloon</Type><Device>memballoon</Device><rasd:Address></rasd:Address troller</Type><Device>virtio-scsi</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Ali as></Alias><SpecParams><ioThreadId></ioThreadId></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>4d4d7bfd-b1e8-45c3-a5e8-7e 0b7773bbf2</rasd:InstanceId><Type>controller</Type><Device>virtio-serial</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlu gged><IsReadOnly>false</IsReadOnly><Alias>58ca7b19-0071-00c0-01d6-000000000212</Alias></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>9 cea63da-7afd-41d4-925f-369f993b280f</rasd:InstanceId><Type>controller</Type><Device>usb</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugg ed>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index><model>nec-xhci</model></SpecParams></Item></Section><Section xs 
i:type="ovf:SnapshotsSection_Type"><Snapshot ovf:id="6de58683-c586-4e97-b0e8-ee7ee3baf754"><Type>ACTIVE</Type><Description>Active VM</Description><CreationDa te>2020/08/19 20:11:33</CreationDate></Snapshot></Section></Content></ovf:Envelope> | | 0 Thank you! thanks so yeah - we may have an issue with that operating system 'other_linux_ppc64' that has the same name as 'other_linux' in our os-info configuration as a possible workaround, assuming all those unregistered VMs you can try to override the architecture with: update unregistered_ovf_of_entities set architecture = 2; as a possible workaround, assuming all those unregistered VMs are from clusters with the same architecture, you can try to override the architecture with: * Wooha!!! engine=# update unregistered_ovf_of_entities set architecture = 2; UPDATE 8 <PastedGraphic-2.png> <PastedGraphic-3.png> Worked and the VMs are now imported. But… hahaha. I have another issues, any of the three VM’s starts now. Perhaps I’ll reinstall the host for the third time as recommended by Michal, anyway here are the logs that I was able to fetch during the failed power on process: ON THE ENGINE: ==> /var/log/ovirt-engine/engine.log <== 2020-08-27 16:35:59,437-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5e701801 2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5e701801 2020-08-27 16:35:59,500-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Running command: RunVmCommand internal: false. 
Entities affected : ID: ccccd416-c6b4-4c95-8372-417480be5365 Type: VMAction group RUN_VM with role type USER 2020-08-27 16:35:59,506-03 INFO [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Emulated machine 'pseries-rhel8.2.0' which is different than that of the cluster is set for 'jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>'(ccccd416-c6b4-4c95-8372-417480be5365) 2020-08-27 16:35:59,528-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@14322872'}), log id: 7709ba81 2020-08-27 16:35:59,530-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 7709ba81 2020-08-27 16:35:59,533-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>]'}), log id: 4a0db679 2020-08-27 16:35:59,534-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateBrokerVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>, CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>]'}), log id: 25bc7e6e 2020-08-27 16:35:59,548-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0"> <name>jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/></name> <uuid>ccccd416-c6b4-4c95-8372-417480be5365</uuid> <memory>536870912</memory> <currentMemory>536870912</currentMemory> <vcpu current="128">384</vcpu> <clock offset="variable" adjustment="0"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> </clock> <cpu mode="host-model"> <model>power9</model> <topology cores="16" threads="4" sockets="6"/> <numa> <cell id="0" cpus="0-383" memory="536870912"/> </numa> </cpu> <cputune/> <qemu:capabilities> <qemu:add capability="blockdev"/> <qemu:add capability="incremental-backup"/> </qemu:capabilities> <devices> <input type="tablet" bus="usb"/> <channel type="unix"> <target type="virtio" name="ovirt-guest-agent.0"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target type="virtio" name="org.qemu.guest_agent.0"/> <source mode="bind" 
path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.org.qemu.guest_agent.0"/> </channel> <emulator text="/usr/bin/qemu-system-ppc64"/> <controller type="scsi" model="ibmvscsi" index="0"/> <rng model="virtio"> <backend model="random">/dev/urandom</backend> <alias name="ua-1e18aea0-076a-40d0-9b85-21ac6049a94d"/> </rng> <controller type="usb" model="nec-xhci" index="0"> <alias name="ua-47e67d9f-a191-4dc0-9c09-b2db9f1d373e"/> </controller> <controller type="virtio-serial" index="0" ports="16"> <alias name="ua-4d92fb2f-aaf6-465c-8571-e49e1d12191d"/> </controller> <watchdog model="i6300esb" action="none"> <alias name="ua-7b756cc3-c9ec-4b79-84ef-d6ad15021f1a"/> </watchdog> <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"> <listen type="network" network="vdsm-ovirtmgmt"/> </graphics> <controller type="scsi" model="virtio-scsi" index="1"> <alias name="ua-8e146e76-e038-4f8a-a526-e7e1c626f54e"/> </controller> <memballoon model="virtio"> <stats period="5"/> <alias name="ua-d8d37c06-de66-4912-bf8d-fc1017c85c68"/> </memballoon> <video> <model type="vga" vram="16384" heads="1"/> <alias name="ua-e96e6050-b1aa-4664-a856-8df923e3dc66"/> </video> <controller type="scsi" index="0"> <address type="spapr-vio"/> </controller> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="servers"/> <driver queues="4" name="vhost"/> <alias name="ua-152c3f8a-69d2-420f-8b6a-c1fb4a11594f"/> <mac address="56:6f:1a:f4:00:03"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="nfs"/> <driver queues="4" name="vhost"/> <alias name="ua-1369da6c-4f9b-4fe3-9f45-7b37ecb34ac2"/> <mac address="56:6f:1a:f4:00:04"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <disk type="file" device="cdrom" snapshot="no"> <driver name="qemu" type="raw" error_policy="report"/> <source file="" startupPolicy="optional"> <seclabel model="dac" type="none" relabel="no"/> </source> <target dev="sdc" bus="scsi"/> <readonly/> <alias name="ua-2d6db7ca-2fe1-4af4-9741-7b5332805d94"/> <address bus="0" controller="0" unit="2" type="drive" target="0"/> </disk> <disk snapshot="no" type="file" device="disk"> <target dev="sda" bus="scsi"/> <source file="/rhev/data-center/804e857c-461d-4642-86c4-7ff4a5e7da47/d19456e4-0051-456e-b33c-57348a78c2e0/images/8100a756-92a7-4160-9a31-5a843810cb61/0183b177-71b5-4c0e-b7d3-becc5da152ce"> <seclabel model="dac" type="none" relabel="no"/> </source> <driver name="qemu" io="threads" type="raw" error_policy="stop" cache="none"/> <alias name="ua-8100a756-92a7-4160-9a31-5a843810cb61"/> <address bus="0" controller="1" unit="0" type="drive" target="0"/> <boot order="1"/> <serial>8100a756-92a7-4160-9a31-5a843810cb61</serial> </disk> <lease> <key>ccccd416-c6b4-4c95-8372-417480be5365</key> <lockspace>d19456e4-0051-456e-b33c-57348a78c2e0</lockspace> <target offset="24117248" path="/rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm/d19456e4-0051-456e-b33c-57348a78c2e0/dom_md/xleases"/> </lease> </devices> <os> <type arch="ppc64" machine="pseries-rhel8.2.0">hvm</type> </os> <metadata> <ovirt-tune:qos/> <ovirt-vm:vm> <ovirt-vm:minGuaranteedMemoryMb type="int">524288</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="56:6f:1a:f4:00:04"> <ovirt-vm:custom/> </ovirt-vm:device> 
<ovirt-vm:device mac_address="56:6f:1a:f4:00:03"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sda"> <ovirt-vm:poolID>804e857c-461d-4642-86c4-7ff4a5e7da47</ovirt-vm:poolID> <ovirt-vm:volumeID>0183b177-71b5-4c0e-b7d3-becc5da152ce</ovirt-vm:volumeID> <ovirt-vm:imageID>8100a756-92a7-4160-9a31-5a843810cb61</ovirt-vm:imageID> <ovirt-vm:domainID>d19456e4-0051-456e-b33c-57348a78c2e0</ovirt-vm:domainID> </ovirt-vm:device> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior> </ovirt-vm:vm> </metadata> </domain> 2020-08-27 16:35:59,566-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateBrokerVDSCommand, return: , log id: 25bc7e6e 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: 4a0db679 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,576-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] EVENT_ID: USER_STARTED_VM(153), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> was started by admin@internal-authz (Host: rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>). 2020-08-27 16:36:01,803-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365' was reported as Down on VDS '394e0e68-60f5-42b3-aec4-5d8368efedd1'(rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>) 2020-08-27 16:36:01,804-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] START, DestroyVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>, DestroyVmVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] FINISH, DestroyVDSCommand, return: , log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>) moved from 'WaitForLaunch' --> 'Down' 2020-08-27 16:36:02,024-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-13) [] EVENT_ID: VM_DOWN_ERROR(119), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> is down with error. 
Exit message: Hook Error: (b'Traceback (most recent call last):\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 124, in <module>\n main(VhostmdConf())\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 47, in __init__\n dom = minidom.parse(path)\n File "/usr/lib64/python3.6/xml/dom/minidom.py", line 1958, in parse\n return expatbuilder.parse(file)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 911, in parse\n result = builder.parseFile(fp)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 211, in parseFile\n parser.Parse("", True)\nxml.parsers.expat.ExpatError: no element found: line 1, column 0\n',). yeah, I never encountered this issue before - could be a consequence of an improper deployment of that host Starting reinstall right now. But I’ve a question, is this documentation right? For Red Hat Enterprise Linux 8 hosts, little endian, on IBM POWER9 hardware: # subscription-manager repos \ --disable='*' \ --enable=rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms \ --enable=advanced-virt-for-rhel-8-ppc64le-rpms \ --enable=ansible-2.9-for-rhel-8-ppc64le-rpms I think it’s missing: --enable=rhel-8-for-ppc64le-baseos-rpms \ --enable=rhel-8-for-ppc64le-appstream-rpms This can be found here: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/htm... I assumed that in fact information is missing on this documentation. Double check, it’s missing. It’s impossible to reinstall the machine only with this repositories. I’ll open another bug. Yes please. When it says disable * and explicitly lists what to enable it does need to include the base channels for sure Done. https://bugzilla.redhat.com/show_bug.cgi?id=1873360 2020-08-27 16:36:02,025-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] add VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>) to rerun treatment 2020-08-27 16:36:02,029-03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-13) [] Rerun VM 'ccccd416-c6b4-4c95-8372-417480be5365'. Called from VDS 'rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>' 2020-08-27 16:36:02,041-03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> on Host rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>. 2020-08-27 16:36:02,066-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5480ad0b 2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5480ad0b 2020-08-27 16:36:02,093-03 WARN [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Validation of action 'RunVm' failed for user admin@internal-authz. 
Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS 2020-08-27 16:36:02,093-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:36:02,101-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> (User: admin@internal-authz). 2020-08-27 16:36:02,105-03 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (EE-ManagedThreadFactory-engine-Thread-145180) [71c52499] Running command: ProcessDownVmCommand internal: true. ON THE HOST: /var/log/messages Aug 27 16:36:01 rhvpower python3[73682]: detected unhandled Python exception in '/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd' Aug 27 16:36:01 rhvpower abrt-server[73684]: Deleting problem directory Python3-2020-08-27-16:36:01-73682 (dup of Python3-2020-08-27-16:33:11-73428) Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Activating service name='org.freedesktop.problems' requested by ':1.183' (uid=0 pid=73691 comm="/usr/libexec/platform-python /usr/bin/abrt-action-" label="system_u:system_r:abrt_t:s0-s0:c0.c1023") (using servicehelper) Aug 27 16:36:01 rhvpower dbus-daemon[73694]: [system] Failed to reset fd limit before activating service: org.freedesktop.DBus.Error.AccessDenied: Failed to restore old fd limit: Operation not permitted Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Successfully activated service 'org.freedesktop.problems' Aug 27 16:36:02 rhvpower abrt-server[73684]: /bin/sh: reporter-systemd-journal: command not found Regarding the import problem. Is that really a bug right? I can describe it on Red Hat Bugzilla if I need to. It’s the minimal that I can do for the help. Is it ok? yes, please do There you go: https://bugzilla.redhat.com/show_bug.cgi?id=1873322 Thank you guys, I will report back after the reinstallation of the host. Reinstall now went fine. Now I found that’s something extremely bad when trying to run the VMs. The metadata appears to be corrupted. First it complained about the CPUs, I changed it not the interface just to refresh the metadada: VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> is down with error. Exit message: internal error: process exited while connecting to monitor: 2020-08-27T22:48:59.533367Z qemu-kvm: warning: Number of hotpluggable cpus requested (384) exceeds the recommended cpus supported by KVM (128) interesting. shouldn’t cause any harm, but it’s worth a follow up If that was a warning I would be ok, but what a hell… and from where the number 384 came? My machine have 128 threads only... 2020-08-27T22:48:59.537530Z qemu-kvm: -numa node,nodeid=0,cpus=0-383,mem=524288: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead 2020-08-27T22:48:59.833812Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off. what is the cluster level right now? is it 4.3? there was a breaking change in machine types between 4.3 and 4.4 and a incompatible P9 firmware changes. It should work in 4.4 cluster level, just also make sure you have latest greatest P9 firmware. 
The bad news is that the cluster has always been on 4.4 level… take a look at those screenshots: [cid:5F59E3D4-24B0-4C2D-B502-8E98D56D6D05] [cid:0B1320EB-3486-46A1-A827-930F69ECC687]

After this the error changed to:

VM jupyter.nix.versatushpc.com.br is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T22:49:48.876424Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off.

Then I tried reducing the RAM, but it gave me a warning (not an error):

VM jupyter.nix.versatushpc.com.br was configured with 524288MiB of memory while the recommended value range is 256MiB - 65536MiB

that maximum doesn’t make sense. Maybe still a problem with the wrong OS type?

Wrong OS? I don’t know. :( My settings are pretty basic to be honest, here is a pic of the relevant part of the configuration on the offending VM: [cid:007A3579-6D28-421F-AB6E-834275EF57AA]

As you can see, it only came to life with pseries-rhel7.6.0 without SXXM. I can’t boot with pseries-rhel8.2.0, and that is what I was expecting to work after the upgrade from 4.3.10 to 4.4.1.

I’ve lowered it to 65536MiB, and now it complains about multiple SCSI devices:

VM jupyter.nix.versatushpc.com.br is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'.

So I changed the disk type from VirtIO-SCSI to VirtIO and then back from VirtIO to VirtIO-SCSI, and part of the first error came back:

VM jupyter.nix.versatushpc.com.br is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T23:09:51.753960Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off.

yeah, you won’t get over it with (I assume) the 4.3 machine type on 4.4 :(

Now I changed the machine to the custom cluster emulation pseries-7.6.0 and the SCSI error is back. And now I’m stuck with it… but indeed, removing the now broken mitigations and using plain 7.6.0, it’s fine….

VM jupyter.nix.versatushpc.com.br is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'.

… but this one is not :)

So this is after import, right? Can you confirm the same problem for a newly created VM?

It happens :( But take a look at the next answer. I’m thinking of dumping the VM entirely and reimporting the disks… but I created another one, just with plain settings, to see if anything boots on this host, and the result was bad:

VM ppc64le is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T23:15:44.669298Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=8192: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead 2020-08-27T23:15:44.691077Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off.

So, any ideas? In the past I had some issues with SXXM in pseries-7.6.0; I’m not sure if it’s the same issue all over again.

likely yes. Again, once you move to 4.4 cluster level the new el8 machine type is using different spectre mitigations and should work…

Yeah, that’s the bad news. I’m already running everything on 4.4 level, at least that is what the interface tells me.
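Two host-side checks that may help narrow down the cap-ccf-assist error (a sketch, not something that was run in the thread; the emulator path is taken from the domain XML quoted later and may differ on other installs):

# What the ppc64le kernel reports for Spectre v2; with the count cache flush
# assist available in firmware this typically mentions "hardware accelerated":
cat /sys/devices/system/cpu/vulnerabilities/spectre_v2

# Which pseries machine types the installed QEMU actually provides:
/usr/bin/qemu-system-ppc64 -machine help | grep pseries

If pseries-rhel8.2.0 is missing from that list, or the mitigation line does not show the hardware assist, that would point at the host stack rather than at the engine configuration.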
I’ve done the simplest test possible, created a simple VM without even touching the advanced settings, like in this photo: [cid:3B09A342-8831-48AE-81A6-0AE2B381243D] And it won’t boot: VM ppc64le-dc44 is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-28T18:22:15.390930Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=4096: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead 2020-08-28T18:22:15.412920Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off. 8/28/203:22:17 PM So, pseries-rhel8.2.0 does not appears to be working, if I change it to pseries-7.6.0 (without SXXM) this happens: VM ppc64le-dc44 is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0’. But I’ve noted if I wait something like 30 to 60 seconds, the VM will eventually boot, and yeah it booted when I was writing this message: VM ppc64le-dc44 started on Host rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br> Regarding my hardware, it’s an AC922 from IBM. It has the latest firmware upgrades in place, for the whiterspoon part and for OpenBMC. There’s nothing else, that I’m aware of, that can be updated… I’ve never able to run pseries-7.6.0-sxxm in the 4.3.10. So… any ideias? Thank you again, the help that you guys are providing is amazing. Thanks, michal Thanks, Thanks, Thanks, michal Ideias? On 26 Aug 2020, at 15:04, Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: What a strange thing is happening here: [root@power ~]# file /usr/bin/vdsm-client /usr/bin/vdsm-client: empty [root@power ~]# ls -l /usr/bin/vdsm-client -rwxr-xr-x. 1 root root 0 Jul 3 06:23 /usr/bin/vdsm-client A lot of files are just empty, I’ve tried reinstalling vdsm-client, it worked, but there’s other zeroed files: Transaction test succeeded. Running transaction Preparing : 1/1 Reinstalling : vdsm-client-4.40.22-1.el8ev.noarch 1/2 Cleanup : vdsm-client-4.40.22-1.el8ev.noarch 2/2 Running scriptlet: vdsm-client-4.40.22-1.el8ev.noarch 2/2 /sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked. 
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked.
/sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked.
/sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked.
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked.
Verifying : vdsm-client-4.40.22-1.el8ev.noarch 1/2
Verifying : vdsm-client-4.40.22-1.el8ev.noarch 2/2
Installed products updated.
Reinstalled: vdsm-client-4.40.22-1.el8ev.noarch

I’ve never seen something like this. I’ve already reinstalled the host from the ground and the same thing happens.

Etc/GMT</TimeZone><default_boot_sequence>9</default_boot_sequence><Generation>8</Generation><ClusterCompatibilityVersion>4.3</ClusterCompatibilityVersion><V mType>1</VmType><ResumeBehavior>AUTO_RESUME</ResumeBehavior><MinAllocatedMem>2730</MinAllocatedMem><IsStateless>false</IsStateless><IsRunAndPause>false</IsRu nAndPause><AutoStartup>false</AutoStartup><Priority>1</Priority><CreatedByUserId>6ea16f22-45d7-11ea-bd83-00163e518b7c</CreatedByUserId><MigrationSupport>0</M igrationSupport><IsBootMenuEnabled>false</IsBootMenuEnabled><IsSpiceFileTransferEnabled>true</IsSpiceFileTransferEnabled><IsSpiceCopyPasteEnabled>true</IsSpi ceCopyPasteEnabled><AllowConsoleReconnect>true</AllowConsoleReconnect><ConsoleDisconnectAction>LOCK_SCREEN</ConsoleDisconnectAction><CustomEmulatedMachine></ CustomEmulatedMachine><BiosType>0</BiosType><CustomCpuName></CustomCpuName><PredefinedProperties></PredefinedProperties><UserDefinedProperties></UserDefinedP roperties><MaxMemorySizeMb>16384</MaxMemorySizeMb><MultiQueuesEnabled>true</MultiQueuesEnabled><UseHostCpu>false</UseHostCpu><ClusterName>Blastoise</ClusterN ame><TemplateId>00000000-0000-0000-0000-000000000000</TemplateId><TemplateName>Blank</TemplateName><IsInitilized>true</IsInitilized><Origin>0</Origin><quota_ id>32644894-755e-4588-b967-8fb9dc327795</quota_id><DefaultDisplayType>2</DefaultDisplayType><TrustedService>false</TrustedService><OriginalTemplateId>0000000 0-0000-0000-0000-000000000000</OriginalTemplateId><OriginalTemplateName>Blank</OriginalTemplateName><CpuPinning></CpuPinning><UseLatestVersion>false</UseLate stVersion><StopTime>2020/08/20 17:52:35</StopTime><Section ovf:id="46ad1d80-2649-48f5-92e6-e5489d11d30c" ovf:required="false" xsi:type="ovf:OperatingSystemSe ction_Type"><Info>Guest Operating System</Info><Description>other_linux_ppc64</Description></Section><Section xsi:type="ovf:VirtualHardwareSection_Type"><Inf o>2 CPU, 4096 Memory</Info><System><vssd:VirtualSystemType>ENGINE 4.1.0.0</vssd:VirtualSystemType></System><Item><rasd:Caption>2 virtual cpu</rasd:Caption><r asd:Description>Number of virtual CPU</rasd:Description><rasd:InstanceId>1</rasd:InstanceId><rasd:ResourceType>3</rasd:ResourceType><rasd:num_of_sockets>2</r asd:num_of_sockets><rasd:cpu_per_socket>1</rasd:cpu_per_socket><rasd:threads_per_cpu>1</rasd:threads_per_cpu><rasd:max_num_of_vcpus>16</rasd:max_num_of_vcpus <rasd:VirtualQuantity>2</rasd:VirtualQuantity></Item><Item><rasd:Caption>4096 MB of memory</rasd:Caption><rasd:Description>Memory Size</rasd:Description><ra sd:InstanceId>2</rasd:InstanceId><rasd:ResourceType>4</rasd:ResourceType><rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits><rasd:VirtualQuantity>4096</ra sd:VirtualQuantity></Item><Item><rasd:Caption>energy.versatushpc.com.br_Disk1</rasd:Caption><rasd:InstanceId>b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:Insta nceId><rasd:ResourceType>17</rasd:ResourceType><rasd:HostResource>775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:HostResourc e><rasd:Parent>00000000-0000-0000-0000-000000000000</rasd:Parent><rasd:Template>00000000-0000-0000-0000-000000000000</rasd:Template><rasd:ApplicationList></r asd:ApplicationList><rasd:StorageId>d19456e4-0051-456e-b33c-57348a78c2e0</rasd:StorageId><rasd:StoragePoolId>6c54f91e-89bf-45b4-bc48-56e74c4efd5e</rasd:Stora gePoolId><rasd:CreationDate>2020/08/19 20:13:05</rasd:CreationDate><rasd:LastModified>1970/01/01 00:00:00</rasd:LastModified><rasd:last_modified_date>2020/08 /20 
18:37:41</rasd:last_modified_date><Type>disk</Type><Device>disk</Device><rasd:Address>{type=drive, bus=0, controller=1, target=0, unit=0}</rasd:Address>< BootOrder>1</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-775b24a9-6a32-431a-831f-4ac9b3b31152</Alias></Item><Item><rasd:Capt ion>Ethernet adapter on legacyservers</rasd:Caption><rasd:InstanceId>e6e37ae1-f263-4986-a039-e8e01e72d1f4</rasd:InstanceId><rasd:ResourceType>10</rasd:Resour ceType><rasd:OtherResourceType>legacyservers</rasd:OtherResourceType><rasd:ResourceSubType>3</rasd:ResourceSubType><rasd:Connection>legacyservers</rasd:Conne ction><rasd:Linked>true</rasd:Linked><rasd:Name>nic1</rasd:Name><rasd:ElementName>nic1</rasd:ElementName><rasd:MACAddress>56:6f:f0:b3:00:23</rasd:MACAddress> <rasd:speed>10000</rasd:speed><Type>interface</Type><Device>bridge</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><I sReadOnly>false</IsReadOnly><Alias>ua-e6e37ae1-f263-4986-a039-e8e01e72d1f4</Alias></Item><Item><rasd:Caption>USB Controller</rasd:Caption><rasd:InstanceId>3< /rasd:InstanceId><rasd:ResourceType>23</rasd:ResourceType><rasd:UsbPolicy>DISABLED</rasd:UsbPolicy></Item><Item><rasd:Caption>Graphical Controller</rasd:Capt ion><rasd:InstanceId>1440c749-728e-4a86-afc1-8237c6055fa5</rasd:InstanceId><rasd:ResourceType>20</rasd:ResourceType><rasd:VirtualQuantity>1</rasd:VirtualQuan tity><rasd:SinglePciQxl>false</rasd:SinglePciQxl><Type>video</Type><Device>vga</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</ IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-1440c749-728e-4a86-afc1-8237c6055fa5</Alias><SpecParams><vram>16384</vram></SpecParams></Item><Item><rasd:C aption>Graphical Framebuffer</rasd:Caption><rasd:InstanceId>603e7f0c-8d28-4c3e-bd90-c5685b752100</rasd:InstanceId><rasd:ResourceType>26</rasd:ResourceType><T ype>graphics</Type><Device>vnc</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>< /Alias></Item><Item><rasd:Caption>CDROM</rasd:Caption><rasd:InstanceId>3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</rasd:InstanceId><rasd:ResourceType>15</rasd:Reso urceType><Type>disk</Type><Device>cdrom</Device><rasd:Address>{type=drive, bus=0, controller=0, target=0, unit=2}</rasd:Address><BootOrder>2</BootOrder><IsPl ugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</Alias><SpecParams><path>CentOS-8.1.1911-x86_64-boot.iso</p ath></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>66f3a2b8-d2c5-4032-9f10-8742d65a0a3e</rasd:InstanceId><Type>controller </Type><Device>scsi</Device><rasd:Address>{type=spapr-vio}</rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Al ias></Alias><SpecParams><index>0</index></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>e065acb2-e7db-4f55-a1df-385f19299b d0</rasd:InstanceId><Type>rng</Type><Device>virtio</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false< /IsReadOnly><Alias>ua-e065acb2-e7db-4f55-a1df-385f19299bd0</Alias><SpecParams><source>urandom</source></SpecParams></Item><Item><rasd:ResourceType>0</rasd:Re
<BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-7b4c4ef6-2a9a-4120-b838-3127db0fd703</Alias><SpecParams><model>vir tio</model></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>5aade6c7-8f77-4fea-a5de-66350b214935</rasd:InstanceId><Type>con
Bumping this thread once again.

@Michal and @Arik, I’ve contacted IBM once again, and they confirmed that my hardware (AC922 8335-GTH) has the current OpenBMC and firmware updates:

FIRMWARE VERSION IBM-witherspoon-ibm-OP9-v2.4-4.49-prod
FIRMWARE VERSION op940.10-5-0-g22edca685
HARDWARE REVISION
cpu : POWER9, altivec supported
clock : 3683.000000MHz
revision : 2.2 (pvr 004e 1202)
timebase : 512000000
platform : PowerNV
model : 8335-GTH
machine : PowerNV 8335-GTH
firmware : OPAL
MMU : Radix

The thing is, my machine does not seem to be recognised as fully patched for Spectre and Meltdown on the oVirt/RHV side, as far as I was able to conclude. I still can’t run any VM with SXXM nor with pseries-rhel8.2.0; the best I can get is pseries-rhel7.6.0. I know that there’s a revision 2.3 of the CPU, but it is not that common, 2.2 being the GA revision of the processor. Is that relevant? So I’m not sure it’s an IBM issue right now.

The whole point of moving from 4.3.10 to 4.4.1 was that I was expecting better support for those mitigations. If I can help to fix this up, in case this turns out to be a RHV/oVirt issue, please let me know. There are still the open questions regarding the cluster level, which I can feed with more information if needed. But for me the problem is solved, because I’ve managed to boot the machine with the pseries-rhel7.6.0 model. Perhaps this was always the issue in the first place.

Thanks guys.

On 28 Aug 2020, at 15:31, Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
Hello!
On 28 Aug 2020, at 14:39, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
Hi Vinicius,
On 28 Aug 2020, at 01:17, Vinícius Ferrão via Users <users@ovirt.org> wrote:
On 27 Aug 2020, at 17:50, Vinícius Ferrão via Users <users@ovirt.org> wrote:
On 27 Aug 2020, at 16:48, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:39 PM Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
On 27 Aug 2020, at 16:26, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:23 PM Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:13 PM Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
On 27 Aug 2020, at 16:03, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hi Michal,
On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org> wrote:

Okay here we go Arik. With your insight I’ve done the following:

# rpm -Va

This showed what’s zeroed on the machine; since it was a lot of things, I’ve just gone crazy and done:

you should still have host deploy logs on the engine machine. it’s weird it succeeded, unless it somehow happened afterwards?

It only succeeded after my yum reinstall rampage.

yum list installed | cut -f 1 -d " " > file
yum -y reinstall `cat file | xargs`

Reinstalled everything. Everything worked as expected and I finally added the machine back to the cluster. It’s operational.

eh, I wouldn’t trust it much. did you run redeploy at least?

I’ve done a reinstall in the web interface of the engine. I can reinstall the host, there’s nothing running on it… gonna try a third format.
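A narrower alternative to the blanket reinstall described above (a sketch; it assumes the corruption always shows up as zero-length files, which is what both the ldconfig warnings and the empty vdsm-client binary suggest):

# List zero-length files that belong to installed RPMs
# (assumes no whitespace in packaged file paths):
rpm -qal | sort -u | while read -r f; do
    [ -f "$f" ] && [ ! -s "$f" ] && echo "$f"
done > /tmp/empty-files

# Reinstall only the packages that own those files, instead of everything:
xargs -r rpm -qf --qf '%{NAME}\n' < /tmp/empty-files | sort -u | xargs -r yum -y reinstall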
Now I’ve another issue, I have 3 VM’s that are ppc64le, when trying to import them, the Hosted Engine identifies them as x86_64: <PastedGraphic-2.png> So… This appears to be a bug. Any ideia on how to force it back to ppc64? I can’t manually force the import on the Hosted Engine since there’s no buttons to do this… how exactly did you import them? could be a bug indeed. we don’t support changing it as it doesn’t make sense, the guest can’t be converted Yeah. I done the normal procedure, added the storage domain to the engine and clicked on “Import VM”. Immediately it was detected as x86_64. Since I wasn’t able to upgrade my environment from 4.3.10 to 4.4.1 due to random errors when redeploying the engine with the backup from 4.3.10, I just reinstalled it, reconfigured everything and them imported the storage domains. I don’t know where the information about architecture is stored in the storage domain, I tried to search for some metadata files inside the domain but nothing come up. Is there a way to force this change? It must be a way. I even tried to import the machine as x86_64. So I can delete the VM and just reattach the disks in a new only, effectively not losing the data, but… <PastedGraphic-1.png> Yeah, so something is broken. The check during the import appears to be OK, but the interface does not me allow to import it to the ppc64le machine, since it’s read as x86_64. Could you please provide the output of the following query from the database: select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br<http://energy.versatushpc.com.br/>'; Sure, there you go: 46ad1d80-2649-48f5-92e6-e5489d11d30c | energy.versatushpc.com.br<http://energy.versatushpc.com.br/> | VM | 1 | | d19456e4-0051-456e-b33c-57348a78c2e0 | <?xml version="1.0" encoding="UTF-8"?><ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim -schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingDa..." 
xmlns:xsi="http://ww<http://ww/> w.w3.org/2001/XMLSchema-instance<http://w.w3.org/2001/XMLSchema-instance>" ovf:version="4.1.0.0"><References><File ovf:href="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af " ovf:id="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="512" ovf:description="Active VM" ovf:disk_storage_type="IMAGE" ovf:cinder_volume_type=""></File></R eferences><NetworkSection><Info>List of networks</Info><Network ovf:name="legacyservers"></Network></NetworkSection><Section xsi:type="ovf:DiskSection_Type"> <Info>List of Virtual Disks</Info><Disk ovf:diskId="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="40" ovf:actual_size="1" ovf:vm_snapshot_id="6de58683-c586 -4e97-b0e8-ee7ee3baf754" ovf:parentRef="" ovf:fileRef="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:format="http://www.vmwa<http://www.vmwa/> re.com/specifications/vmdk.html#sparse<http://re.com/specifications/vmdk.html#sparse>" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:disk-interface="VirtIO_SCSI" ovf:read-only="false" ovf:shareable ="false" ovf:boot="true" ovf:pass-discard="false" ovf:disk-alias="energy.versatushpc.com.br_Disk1" ovf:disk-description="" ovf:wipe-after-delete="false"></Di sk></Section><Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type"><Name>energy.versatushpc.com.br<http://energy.versatushpc.com.br/></Name><Description>Holds Kosen backend and frontend prod services (nginx + docker)</Description><Comment></Comment><CreationDate>2020/08/19 20:11:33</CreationDate><ExportDate>2020/08/20 18:37:41</ExportDate><Delet eProtected>false</DeleteProtected><SsoMethod>guest_agent</SsoMethod><IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>1</NumOfIoThreads><TimeZone sourceType><rasd:InstanceId>7b4c4ef6-2a9a-4120-b838-3127db0fd703</rasd:InstanceId><Type>balloon</Type><Device>memballoon</Device><rasd:Address></rasd:Address troller</Type><Device>virtio-scsi</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Ali as></Alias><SpecParams><ioThreadId></ioThreadId></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>4d4d7bfd-b1e8-45c3-a5e8-7e 0b7773bbf2</rasd:InstanceId><Type>controller</Type><Device>virtio-serial</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlu gged><IsReadOnly>false</IsReadOnly><Alias>58ca7b19-0071-00c0-01d6-000000000212</Alias></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>9 cea63da-7afd-41d4-925f-369f993b280f</rasd:InstanceId><Type>controller</Type><Device>usb</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugg ed>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index><model>nec-xhci</model></SpecParams></Item></Section><Section xs i:type="ovf:SnapshotsSection_Type"><Snapshot ovf:id="6de58683-c586-4e97-b0e8-ee7ee3baf754"><Type>ACTIVE</Type><Description>Active VM</Description><CreationDa te>2020/08/19 20:11:33</CreationDate></Snapshot></Section></Content></ovf:Envelope> | | 0 Thank you! 
thanks so yeah - we may have an issue with that operating system 'other_linux_ppc64' that has the same name as 'other_linux' in our os-info configuration

as a possible workaround, assuming all those unregistered VMs are from clusters with the same architecture, you can try to override the architecture with:

update unregistered_ovf_of_entities set architecture = 2;

Wooha!!!

engine=# update unregistered_ovf_of_entities set architecture = 2;
UPDATE 8

<PastedGraphic-2.png> <PastedGraphic-3.png>

Worked and the VMs are now imported. But… hahaha. I have another issue: none of the three VMs starts now. Perhaps I’ll reinstall the host for the third time as recommended by Michal; anyway, here are the logs that I was able to fetch during the failed power-on process:

ON THE ENGINE: ==> /var/log/ovirt-engine/engine.log <==

2020-08-27 16:35:59,437-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}'
2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5e701801
2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5e701801
2020-08-27 16:35:59,500-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Running command: RunVmCommand internal: false.
Entities affected : ID: ccccd416-c6b4-4c95-8372-417480be5365 Type: VMAction group RUN_VM with role type USER 2020-08-27 16:35:59,506-03 INFO [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Emulated machine 'pseries-rhel8.2.0' which is different than that of the cluster is set for 'jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>'(ccccd416-c6b4-4c95-8372-417480be5365) 2020-08-27 16:35:59,528-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@14322872'}), log id: 7709ba81 2020-08-27 16:35:59,530-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 7709ba81 2020-08-27 16:35:59,533-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>]'}), log id: 4a0db679 2020-08-27 16:35:59,534-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateBrokerVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>, CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>]'}), log id: 25bc7e6e 2020-08-27 16:35:59,548-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0"> <name>jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/></name> <uuid>ccccd416-c6b4-4c95-8372-417480be5365</uuid> <memory>536870912</memory> <currentMemory>536870912</currentMemory> <vcpu current="128">384</vcpu> <clock offset="variable" adjustment="0"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> </clock> <cpu mode="host-model"> <model>power9</model> <topology cores="16" threads="4" sockets="6"/> <numa> <cell id="0" cpus="0-383" memory="536870912"/> </numa> </cpu> <cputune/> <qemu:capabilities> <qemu:add capability="blockdev"/> <qemu:add capability="incremental-backup"/> </qemu:capabilities> <devices> <input type="tablet" bus="usb"/> <channel type="unix"> <target type="virtio" name="ovirt-guest-agent.0"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target type="virtio" name="org.qemu.guest_agent.0"/> <source mode="bind" 
path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.org.qemu.guest_agent.0"/> </channel> <emulator text="/usr/bin/qemu-system-ppc64"/> <controller type="scsi" model="ibmvscsi" index="0"/> <rng model="virtio"> <backend model="random">/dev/urandom</backend> <alias name="ua-1e18aea0-076a-40d0-9b85-21ac6049a94d"/> </rng> <controller type="usb" model="nec-xhci" index="0"> <alias name="ua-47e67d9f-a191-4dc0-9c09-b2db9f1d373e"/> </controller> <controller type="virtio-serial" index="0" ports="16"> <alias name="ua-4d92fb2f-aaf6-465c-8571-e49e1d12191d"/> </controller> <watchdog model="i6300esb" action="none"> <alias name="ua-7b756cc3-c9ec-4b79-84ef-d6ad15021f1a"/> </watchdog> <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"> <listen type="network" network="vdsm-ovirtmgmt"/> </graphics> <controller type="scsi" model="virtio-scsi" index="1"> <alias name="ua-8e146e76-e038-4f8a-a526-e7e1c626f54e"/> </controller> <memballoon model="virtio"> <stats period="5"/> <alias name="ua-d8d37c06-de66-4912-bf8d-fc1017c85c68"/> </memballoon> <video> <model type="vga" vram="16384" heads="1"/> <alias name="ua-e96e6050-b1aa-4664-a856-8df923e3dc66"/> </video> <controller type="scsi" index="0"> <address type="spapr-vio"/> </controller> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="servers"/> <driver queues="4" name="vhost"/> <alias name="ua-152c3f8a-69d2-420f-8b6a-c1fb4a11594f"/> <mac address="56:6f:1a:f4:00:03"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="nfs"/> <driver queues="4" name="vhost"/> <alias name="ua-1369da6c-4f9b-4fe3-9f45-7b37ecb34ac2"/> <mac address="56:6f:1a:f4:00:04"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <disk type="file" device="cdrom" snapshot="no"> <driver name="qemu" type="raw" error_policy="report"/> <source file="" startupPolicy="optional"> <seclabel model="dac" type="none" relabel="no"/> </source> <target dev="sdc" bus="scsi"/> <readonly/> <alias name="ua-2d6db7ca-2fe1-4af4-9741-7b5332805d94"/> <address bus="0" controller="0" unit="2" type="drive" target="0"/> </disk> <disk snapshot="no" type="file" device="disk"> <target dev="sda" bus="scsi"/> <source file="/rhev/data-center/804e857c-461d-4642-86c4-7ff4a5e7da47/d19456e4-0051-456e-b33c-57348a78c2e0/images/8100a756-92a7-4160-9a31-5a843810cb61/0183b177-71b5-4c0e-b7d3-becc5da152ce"> <seclabel model="dac" type="none" relabel="no"/> </source> <driver name="qemu" io="threads" type="raw" error_policy="stop" cache="none"/> <alias name="ua-8100a756-92a7-4160-9a31-5a843810cb61"/> <address bus="0" controller="1" unit="0" type="drive" target="0"/> <boot order="1"/> <serial>8100a756-92a7-4160-9a31-5a843810cb61</serial> </disk> <lease> <key>ccccd416-c6b4-4c95-8372-417480be5365</key> <lockspace>d19456e4-0051-456e-b33c-57348a78c2e0</lockspace> <target offset="24117248" path="/rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm/d19456e4-0051-456e-b33c-57348a78c2e0/dom_md/xleases"/> </lease> </devices> <os> <type arch="ppc64" machine="pseries-rhel8.2.0">hvm</type> </os> <metadata> <ovirt-tune:qos/> <ovirt-vm:vm> <ovirt-vm:minGuaranteedMemoryMb type="int">524288</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="56:6f:1a:f4:00:04"> <ovirt-vm:custom/> </ovirt-vm:device> 
<ovirt-vm:device mac_address="56:6f:1a:f4:00:03"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sda"> <ovirt-vm:poolID>804e857c-461d-4642-86c4-7ff4a5e7da47</ovirt-vm:poolID> <ovirt-vm:volumeID>0183b177-71b5-4c0e-b7d3-becc5da152ce</ovirt-vm:volumeID> <ovirt-vm:imageID>8100a756-92a7-4160-9a31-5a843810cb61</ovirt-vm:imageID> <ovirt-vm:domainID>d19456e4-0051-456e-b33c-57348a78c2e0</ovirt-vm:domainID> </ovirt-vm:device> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior> </ovirt-vm:vm> </metadata> </domain> 2020-08-27 16:35:59,566-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateBrokerVDSCommand, return: , log id: 25bc7e6e 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: 4a0db679 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,576-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] EVENT_ID: USER_STARTED_VM(153), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> was started by admin@internal-authz (Host: rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>). 2020-08-27 16:36:01,803-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365' was reported as Down on VDS '394e0e68-60f5-42b3-aec4-5d8368efedd1'(rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>) 2020-08-27 16:36:01,804-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] START, DestroyVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>, DestroyVmVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] FINISH, DestroyVDSCommand, return: , log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>) moved from 'WaitForLaunch' --> 'Down' 2020-08-27 16:36:02,024-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-13) [] EVENT_ID: VM_DOWN_ERROR(119), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> is down with error. 
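One detail worth pulling out of the generated domain XML above, since the origin of the "384" in the hotpluggable-CPU warning was an open question earlier in the thread: the engine produced

<vcpu current="128">384</vcpu> together with <topology cores="16" threads="4" sockets="6"/>

which gives 6 sockets x 16 cores x 4 threads = 384 as the maximum (hotpluggable) vCPU count, while only 128 vCPUs, matching the host's 128 threads, are online. The QEMU warning about 384 hotpluggable CPUs refers to that ceiling, not to the number of CPUs the guest actually gets.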

@Michal, @Arik. A colleague from Red Hat figured it out: for whatever reason the protection level is not set to 0 on IBM AC922 machines by default. Mine came with level 2, so I had to create a custom file inside the BMC to be able to properly power on VMs with the new pseries-rhel8.2.0 machine type.

Bugzilla info is here:
https://bugzilla.redhat.com/show_bug.cgi?id=1880774
https://bugzilla.redhat.com/show_bug.cgi?id=1886803

Reporting back to the list, so the solution is available if anyone with the same issue reads this. Thanks all for the time and effort trying to help.

On 2 Sep 2020, at 16:30, Vinícius Ferrão via Users <users@ovirt.org> wrote:

Bumping this thread once again. @Michal and @Arik, I’ve contacted IBM once again, and they confirmed that my hardware (AC922 8335-GTH) has the current OpenBMC and firmware updates:

FIRMWARE VERSION IBM-witherspoon-ibm-OP9-v2.4-4.49-prod
FIRMWARE VERSION op940.10-5-0-g22edca685
HARDWARE REVISION cpu : POWER9, altivec supported clock : 3683.000000MHz revision : 2.2 (pvr 004e 1202) timebase : 512000000 platform : PowerNV model : 8335-GTH machine : PowerNV 8335-GTH firmware : OPAL MMU : Radix

The thing is, my machine does not appear to be recognised as fully patched for Spectre and Meltdown on the oVirt/RHV side, as far as I was able to conclude. I still can’t run any VM with SXXM nor pseries-rhel8.2.0; the best I can get is pseries-rhel7.6.0. I know there’s a revision 2.3 of the CPU, but that is not that common, 2.2 being the GA revision of the processor. Is it relevant? So I’m not sure it’s an IBM issue right now. The whole point of moving from 4.3.10 to 4.4.1 is that I was expecting better support for those mitigations. If I can help fix this up, in case this is an RHV/oVirt issue, please let me know.

There are still the questions regarding the cluster level, which I can feed with more information if needed. But for me the problem is solved because I’ve managed to boot the machine with the pseries-rhel7.6.0 model. Perhaps this was the issue all along. Thanks guys.

On 28 Aug 2020, at 15:31, Vinícius Ferrão <ferrao@versatushpc.com.br> wrote: Hello!
On 28 Aug 2020, at 14:39, Michal Skrivanek <michal.skrivanek@redhat.com> wrote: Hi Vinicius,
On 28 Aug 2020, at 01:17, Vinícius Ferrão via Users <users@ovirt.org> wrote:
On 27 Aug 2020, at 17:50, Vinícius Ferrão via Users <users@ovirt.org> wrote:
On 27 Aug 2020, at 16:48, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:39 PM Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
On 27 Aug 2020, at 16:26, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:23 PM Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:13 PM Vinícius Ferrão <ferrao@versatushpc.com.br> wrote:
On 27 Aug 2020, at 16:03, Arik Hadas <ahadas@redhat.com> wrote:
On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <users@ovirt.org> wrote: Hi Michal,
On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org> wrote: Okay here we go Arik.
With your insight I’ve done the following: # rpm -Va

This showed what’s zeroed on the machine. Since it was a lot of things, I’ve just gone crazy and done:

you should still have host deploy logs on the engine machine. it’s weird it succeeded, unless it somehow happened afterwards?

It only succeeded after my yum reinstall rampage:

yum list installed | cut -f 1 -d " " > file
yum -y reinstall `cat file | xargs`

Reinstalled everything. Everything worked as expected and I finally added the machine back to the cluster. It’s operational.

eh, I wouldn’t trust it much. did you run redeploy at least?

I’ve done a reinstall through the web interface of the engine. I can reinstall the host, there’s nothing running on it… gonna try a third format.

Now I have another issue: I have 3 VMs that are ppc64le, and when trying to import them the Hosted Engine identifies them as x86_64: <PastedGraphic-2.png> So… this appears to be a bug. Any idea on how to force them back to ppc64? I can’t manually force the import on the Hosted Engine since there are no buttons to do this…

how exactly did you import them? could be a bug indeed. we don’t support changing it as it doesn’t make sense, the guest can’t be converted

Yeah. I did the normal procedure: added the storage domain to the engine and clicked on “Import VM”. Immediately it was detected as x86_64. Since I wasn’t able to upgrade my environment from 4.3.10 to 4.4.1 due to random errors when redeploying the engine with the backup from 4.3.10, I just reinstalled it, reconfigured everything and then imported the storage domains. I don’t know where the information about the architecture is stored in the storage domain; I tried to search for some metadata files inside the domain but nothing came up. Is there a way to force this change? There must be a way. I even tried to import the machine as x86_64, so I could delete the VM and just reattach the disks to a new one, effectively not losing the data, but… <PastedGraphic-1.png> Yeah, so something is broken. The check during the import appears to be OK, but the interface does not allow me to import it to the ppc64le machine, since it’s read as x86_64.

Could you please provide the output of the following query from the database: select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br';

Sure, there you go:
xmlns:xsi="http://ww<http://ww/> w.w3.org/2001/XMLSchema-instance<http://w.w3.org/2001/XMLSchema-instance>" ovf:version="4.1.0.0"><References><File ovf:href="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af " ovf:id="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="512" ovf:description="Active VM" ovf:disk_storage_type="IMAGE" ovf:cinder_volume_type=""></File></R eferences><NetworkSection><Info>List of networks</Info><Network ovf:name="legacyservers"></Network></NetworkSection><Section xsi:type="ovf:DiskSection_Type"> <Info>List of Virtual Disks</Info><Disk ovf:diskId="b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:size="40" ovf:actual_size="1" ovf:vm_snapshot_id="6de58683-c586 -4e97-b0e8-ee7ee3baf754" ovf:parentRef="" ovf:fileRef="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af" ovf:format="http://www.vmwa<http://www.vmwa/> re.com/specifications/vmdk.html#sparse<http://re.com/specifications/vmdk.html#sparse>" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:disk-interface="VirtIO_SCSI" ovf:read-only="false" ovf:shareable ="false" ovf:boot="true" ovf:pass-discard="false" ovf:disk-alias="energy.versatushpc.com.br_Disk1" ovf:disk-description="" ovf:wipe-after-delete="false"></Di sk></Section><Content ovf:id="out" xsi:type="ovf:VirtualSystem_Type"><Name>energy.versatushpc.com.br<http://energy.versatushpc.com.br/></Name><Description>Holds Kosen backend and frontend prod services (nginx + docker)</Description><Comment></Comment><CreationDate>2020/08/19 20:11:33</CreationDate><ExportDate>2020/08/20 18:37:41</ExportDate><Delet eProtected>false</DeleteProtected><SsoMethod>guest_agent</SsoMethod><IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>1</NumOfIoThreads><TimeZone sourceType><rasd:InstanceId>7b4c4ef6-2a9a-4120-b838-3127db0fd703</rasd:InstanceId><Type>balloon</Type><Device>memballoon</Device><rasd:Address></rasd:Address troller</Type><Device>virtio-scsi</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Ali as></Alias><SpecParams><ioThreadId></ioThreadId></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>4d4d7bfd-b1e8-45c3-a5e8-7e 0b7773bbf2</rasd:InstanceId><Type>controller</Type><Device>virtio-serial</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlu gged><IsReadOnly>false</IsReadOnly><Alias>58ca7b19-0071-00c0-01d6-000000000212</Alias></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>9 cea63da-7afd-41d4-925f-369f993b280f</rasd:InstanceId><Type>controller</Type><Device>usb</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugg ed>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index><model>nec-xhci</model></SpecParams></Item></Section><Section xs i:type="ovf:SnapshotsSection_Type"><Snapshot ovf:id="6de58683-c586-4e97-b0e8-ee7ee3baf754"><Type>ACTIVE</Type><Description>Active VM</Description><CreationDa te>2020/08/19 20:11:33</CreationDate></Snapshot></Section></Content></ovf:Envelope> | | 0 Thank you! 
thanks

so yeah - we may have an issue with that operating system 'other_linux_ppc64' that has the same name as 'other_linux' in our os-info configuration

as a possible workaround, assuming all those unregistered VMs are from clusters with the same architecture, you can try to override the architecture with: update unregistered_ovf_of_entities set architecture = 2;

Wooha!!!

engine=# update unregistered_ovf_of_entities set architecture = 2;
UPDATE 8

<PastedGraphic-2.png> <PastedGraphic-3.png>

That worked and the VMs are now imported. But… hahaha. I have another issue: none of the three VMs starts now. Perhaps I’ll reinstall the host for the third time as recommended by Michal; anyway, here are the logs that I was able to fetch during the failed power-on:

ON THE ENGINE: ==> /var/log/ovirt-engine/engine.log <==
2020-08-27 16:35:59,437-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}'
2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5e701801
2020-08-27 16:35:59,446-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5e701801
2020-08-27 16:35:59,500-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Running command: RunVmCommand internal: false.
Entities affected : ID: ccccd416-c6b4-4c95-8372-417480be5365 Type: VMAction group RUN_VM with role type USER 2020-08-27 16:35:59,506-03 INFO [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Emulated machine 'pseries-rhel8.2.0' which is different than that of the cluster is set for 'jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>'(ccccd416-c6b4-4c95-8372-417480be5365) 2020-08-27 16:35:59,528-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@14322872'}), log id: 7709ba81 2020-08-27 16:35:59,530-03 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 7709ba81 2020-08-27 16:35:59,533-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>]'}), log id: 4a0db679 2020-08-27 16:35:59,534-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateBrokerVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>, CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>]'}), log id: 25bc7e6e 2020-08-27 16:35:59,548-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0"> <name>jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/></name> <uuid>ccccd416-c6b4-4c95-8372-417480be5365</uuid> <memory>536870912</memory> <currentMemory>536870912</currentMemory> <vcpu current="128">384</vcpu> <clock offset="variable" adjustment="0"> <timer name="rtc" tickpolicy="catchup"/> <timer name="pit" tickpolicy="delay"/> </clock> <cpu mode="host-model"> <model>power9</model> <topology cores="16" threads="4" sockets="6"/> <numa> <cell id="0" cpus="0-383" memory="536870912"/> </numa> </cpu> <cputune/> <qemu:capabilities> <qemu:add capability="blockdev"/> <qemu:add capability="incremental-backup"/> </qemu:capabilities> <devices> <input type="tablet" bus="usb"/> <channel type="unix"> <target type="virtio" name="ovirt-guest-agent.0"/> <source mode="bind" path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.ovirt-guest-agent.0"/> </channel> <channel type="unix"> <target type="virtio" name="org.qemu.guest_agent.0"/> <source mode="bind" 
path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.org.qemu.guest_agent.0"/> </channel> <emulator text="/usr/bin/qemu-system-ppc64"/> <controller type="scsi" model="ibmvscsi" index="0"/> <rng model="virtio"> <backend model="random">/dev/urandom</backend> <alias name="ua-1e18aea0-076a-40d0-9b85-21ac6049a94d"/> </rng> <controller type="usb" model="nec-xhci" index="0"> <alias name="ua-47e67d9f-a191-4dc0-9c09-b2db9f1d373e"/> </controller> <controller type="virtio-serial" index="0" ports="16"> <alias name="ua-4d92fb2f-aaf6-465c-8571-e49e1d12191d"/> </controller> <watchdog model="i6300esb" action="none"> <alias name="ua-7b756cc3-c9ec-4b79-84ef-d6ad15021f1a"/> </watchdog> <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"> <listen type="network" network="vdsm-ovirtmgmt"/> </graphics> <controller type="scsi" model="virtio-scsi" index="1"> <alias name="ua-8e146e76-e038-4f8a-a526-e7e1c626f54e"/> </controller> <memballoon model="virtio"> <stats period="5"/> <alias name="ua-d8d37c06-de66-4912-bf8d-fc1017c85c68"/> </memballoon> <video> <model type="vga" vram="16384" heads="1"/> <alias name="ua-e96e6050-b1aa-4664-a856-8df923e3dc66"/> </video> <controller type="scsi" index="0"> <address type="spapr-vio"/> </controller> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="servers"/> <driver queues="4" name="vhost"/> <alias name="ua-152c3f8a-69d2-420f-8b6a-c1fb4a11594f"/> <mac address="56:6f:1a:f4:00:03"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <interface type="bridge"> <model type="virtio"/> <link state="up"/> <source bridge="nfs"/> <driver queues="4" name="vhost"/> <alias name="ua-1369da6c-4f9b-4fe3-9f45-7b37ecb34ac2"/> <mac address="56:6f:1a:f4:00:04"/> <mtu size="1500"/> <filterref filter="vdsm-no-mac-spoofing"/> <bandwidth/> </interface> <disk type="file" device="cdrom" snapshot="no"> <driver name="qemu" type="raw" error_policy="report"/> <source file="" startupPolicy="optional"> <seclabel model="dac" type="none" relabel="no"/> </source> <target dev="sdc" bus="scsi"/> <readonly/> <alias name="ua-2d6db7ca-2fe1-4af4-9741-7b5332805d94"/> <address bus="0" controller="0" unit="2" type="drive" target="0"/> </disk> <disk snapshot="no" type="file" device="disk"> <target dev="sda" bus="scsi"/> <source file="/rhev/data-center/804e857c-461d-4642-86c4-7ff4a5e7da47/d19456e4-0051-456e-b33c-57348a78c2e0/images/8100a756-92a7-4160-9a31-5a843810cb61/0183b177-71b5-4c0e-b7d3-becc5da152ce"> <seclabel model="dac" type="none" relabel="no"/> </source> <driver name="qemu" io="threads" type="raw" error_policy="stop" cache="none"/> <alias name="ua-8100a756-92a7-4160-9a31-5a843810cb61"/> <address bus="0" controller="1" unit="0" type="drive" target="0"/> <boot order="1"/> <serial>8100a756-92a7-4160-9a31-5a843810cb61</serial> </disk> <lease> <key>ccccd416-c6b4-4c95-8372-417480be5365</key> <lockspace>d19456e4-0051-456e-b33c-57348a78c2e0</lockspace> <target offset="24117248" path="/rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm/d19456e4-0051-456e-b33c-57348a78c2e0/dom_md/xleases"/> </lease> </devices> <os> <type arch="ppc64" machine="pseries-rhel8.2.0">hvm</type> </os> <metadata> <ovirt-tune:qos/> <ovirt-vm:vm> <ovirt-vm:minGuaranteedMemoryMb type="int">524288</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion> <ovirt-vm:custom/> <ovirt-vm:device mac_address="56:6f:1a:f4:00:04"> <ovirt-vm:custom/> </ovirt-vm:device> 
<ovirt-vm:device mac_address="56:6f:1a:f4:00:03"> <ovirt-vm:custom/> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sda"> <ovirt-vm:poolID>804e857c-461d-4642-86c4-7ff4a5e7da47</ovirt-vm:poolID> <ovirt-vm:volumeID>0183b177-71b5-4c0e-b7d3-becc5da152ce</ovirt-vm:volumeID> <ovirt-vm:imageID>8100a756-92a7-4160-9a31-5a843810cb61</ovirt-vm:imageID> <ovirt-vm:domainID>d19456e4-0051-456e-b33c-57348a78c2e0</ovirt-vm:domainID> </ovirt-vm:device> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior> </ovirt-vm:vm> </metadata> </domain> 2020-08-27 16:35:59,566-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateBrokerVDSCommand, return: , log id: 25bc7e6e 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: 4a0db679 2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}' 2020-08-27 16:35:59,576-03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145178) [b5231d22-4a33-45a6-acf4-3af7669caf96] EVENT_ID: USER_STARTED_VM(153), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> was started by admin@internal-authz (Host: rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>). 2020-08-27 16:36:01,803-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365' was reported as Down on VDS '394e0e68-60f5-42b3-aec4-5d8368efedd1'(rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>) 2020-08-27 16:36:01,804-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] START, DestroyVDSCommand(HostName = rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/>, DestroyVmVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1', vmId='ccccd416-c6b4-4c95-8372-417480be5365', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-13) [] FINISH, DestroyVDSCommand, return: , log id: 39e346b9 2020-08-27 16:36:01,959-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/>) moved from 'WaitForLaunch' --> 'Down' 2020-08-27 16:36:02,024-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-13) [] EVENT_ID: VM_DOWN_ERROR(119), VM jupyter.nix.versatushpc.com.br<http://jupyter.nix.versatushpc.com.br/> is down with error. 
Exit message: Hook Error: (b'Traceback (most recent call last):\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 124, in <module>\n main(VhostmdConf())\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line 47, in __init__\n dom = minidom.parse(path)\n File "/usr/lib64/python3.6/xml/dom/minidom.py", line 1958, in parse\n return expatbuilder.parse(file)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 911, in parse\n result = builder.parseFile(fp)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py", line 211, in parseFile\n parser.Parse("", True)\nxml.parsers.expat.ExpatError: no element found: line 1, column 0\n',).

yeah, I never encountered this issue before - could be a consequence of an improper deployment of that host

Starting the reinstall right now. But I have a question: is this documentation right?

For Red Hat Enterprise Linux 8 hosts, little endian, on IBM POWER9 hardware:

# subscription-manager repos \
  --disable='*' \
  --enable=rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms \
  --enable=advanced-virt-for-rhel-8-ppc64le-rpms \
  --enable=ansible-2.9-for-rhel-8-ppc64le-rpms

I think it’s missing:

  --enable=rhel-8-for-ppc64le-baseos-rpms \
  --enable=rhel-8-for-ppc64le-appstream-rpms

This can be found here: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/htm...

I assumed that information is in fact missing from this documentation. I double-checked: it’s missing. It’s impossible to reinstall the machine with these repositories only. I’ll open another bug.

Yes please. When it says disable * and explicitly lists what to enable it does need to include the base channels for sure

Done. https://bugzilla.redhat.com/show_bug.cgi?id=1873360

2020-08-27 16:36:02,025-03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-13) [] add VM 'ccccd416-c6b4-4c95-8372-417480be5365'(jupyter.nix.versatushpc.com.br) to rerun treatment
2020-08-27 16:36:02,029-03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-13) [] Rerun VM 'ccccd416-c6b4-4c95-8372-417480be5365'. Called from VDS 'rhvpower.local.versatushpc.com.br'
2020-08-27 16:36:02,041-03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM jupyter.nix.versatushpc.com.br on Host rhvpower.local.versatushpc.com.br.
2020-08-27 16:36:02,066-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}'
2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}), log id: 5480ad0b
2020-08-27 16:36:02,077-03 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5480ad0b
2020-08-27 16:36:02,093-03 WARN [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Validation of action 'RunVm' failed for user admin@internal-authz.
Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
2020-08-27 16:36:02,093-03 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-145179) [] Lock freed to object 'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]', sharedLocks=''}'
2020-08-27 16:36:02,101-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM jupyter.nix.versatushpc.com.br (User: admin@internal-authz).
2020-08-27 16:36:02,105-03 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (EE-ManagedThreadFactory-engine-Thread-145180) [71c52499] Running command: ProcessDownVmCommand internal: true.

ON THE HOST: /var/log/messages
Aug 27 16:36:01 rhvpower python3[73682]: detected unhandled Python exception in '/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd'
Aug 27 16:36:01 rhvpower abrt-server[73684]: Deleting problem directory Python3-2020-08-27-16:36:01-73682 (dup of Python3-2020-08-27-16:33:11-73428)
Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Activating service name='org.freedesktop.problems' requested by ':1.183' (uid=0 pid=73691 comm="/usr/libexec/platform-python /usr/bin/abrt-action-" label="system_u:system_r:abrt_t:s0-s0:c0.c1023") (using servicehelper)
Aug 27 16:36:01 rhvpower dbus-daemon[73694]: [system] Failed to reset fd limit before activating service: org.freedesktop.DBus.Error.AccessDenied: Failed to restore old fd limit: Operation not permitted
Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Successfully activated service 'org.freedesktop.problems'
Aug 27 16:36:02 rhvpower abrt-server[73684]: /bin/sh: reporter-systemd-journal: command not found

Regarding the import problem: that's really a bug, right? I can describe it on Red Hat Bugzilla if I need to. It's the least I can do to help. Is it ok?

yes, please do

There you go: https://bugzilla.redhat.com/show_bug.cgi?id=1873322

Thank you guys, I will report back after the reinstallation of the host.

Reinstall now went fine. Now I found something extremely bad when trying to run the VMs. The metadata appears to be corrupted. First it complained about the CPUs; I changed it on the interface just to refresh the metadata:

VM jupyter.nix.versatushpc.com.br is down with error. Exit message: internal error: process exited while connecting to monitor: 2020-08-27T22:48:59.533367Z qemu-kvm: warning: Number of hotpluggable cpus requested (384) exceeds the recommended cpus supported by KVM (128)

interesting. shouldn't cause any harm, but it's worth a follow up

If that was only a warning I would be fine with it, but what the hell… where did the number 384 come from? My machine has 128 threads only...

2020-08-27T22:48:59.537530Z qemu-kvm: -numa node,nodeid=0,cpus=0-383,mem=524288: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead
2020-08-27T22:48:59.833812Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off.

what is the cluster level right now? is it 4.3? there was a breaking change in machine types between 4.3 and 4.4 and incompatible P9 firmware changes. It should work in 4.4 cluster level, just also make sure you have latest greatest P9 firmware.
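Since the qemu error above points at the count-cache-flush assist capability, a quick way to see what the kernel and firmware are actually exposing on the host is something like the following (paths as on a stock RHEL 8 ppc64le install; adjust if your layout differs):

# Spectre v2 mitigation state as the kernel sees it (on POWER9 this mentions count cache flush / link stack)
cat /sys/devices/system/cpu/vulnerabilities/spectre_v2
# firmware and CPU revision lines, same fields quoted earlier in this thread
grep -E 'firmware|revision|model' /proc/cpuinfo | sort -u
# any count-cache related messages from boot, if present
dmesg | grep -i 'count.cache'

If the first file does not report a count-cache flush mitigation, the pseries-rhel8.2.0 machine type will keep failing with that error unless cap-ccf-assist is turned off.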
The bad news is that the cluster has always been on 4.4 level… take a look at those screenshots: <PastedGraphic-10.png> <PastedGraphic-11.png>

After this the error changed to: VM jupyter.nix.versatushpc.com.br is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T22:49:48.876424Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off.

Then I tried reducing the RAM, but it gave me a warning (not an error): VM jupyter.nix.versatushpc.com.br was configured with 524288MiB of memory while the recommended value range is 256MiB - 65536MiB

that maximum doesn't make sense. Maybe still a problem with wrong os type?

Wrong OS? I don't know. :( My settings are pretty basic to be honest, here's a pic of the relevant part of the configuration on the offending VM: <PastedGraphic-8.png> As you can see it only came to life with pseries-rhel7.6.0 without SXXM. I can't boot with pseries-rhel8.2.0, and I was expecting this to work with the upgrade from 4.3.10 to 4.4.1.

I've lowered it to 65536MiB, and now it complains about multiple SCSI devices: VM jupyter.nix.versatushpc.com.br is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'.

So I've changed the disk type from VirtIO-SCSI to VirtIO and then back from VirtIO to VirtIO-SCSI, and some part of the first error came back: VM jupyter.nix.versatushpc.com.br is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T23:09:51.753960Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off.

yeah, you won't get over it with (I assume) the 4.3 machine type in 4.4 :(

Now I changed the VM's custom emulated machine to pseries-7.6.0 and the SCSI error is back. And now I'm stuck with it… but indeed, removing the now-broken mitigations and using plain 7.6.0 is fine…

VM jupyter.nix.versatushpc.com.br is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'.

… but this one is not :) So this is after import, right? Can you confirm the same problem for a newly created VM?

It happens :( But take a look at the next answer. I'm thinking of dumping the VM entirely and reimporting the disks… but I created another one, just with plain settings, to see if anything boots on this host, and the result was bad: VM ppc64le is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-27T23:15:44.669298Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=8192: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead 2020-08-27T23:15:44.691077Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off.

So, any ideas? In the past I had some issues with SXXM in pseries-7.6.0; I'm not sure if it's the same issue all over again.

likely yes. Again, once you move to 4.4 cluster level the new el8 machine type is using different spectre mitigations and should work…

Yeah, that's the bad news. Already running everything on 4.4 level, at least that's what the interface tells me.
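One way to double-check what the host itself is reporting, independently of what the cluster dialogs show, is to go back to vdsm-client on the ppc64le host (the key names filtered below are what recent VDSM versions put in the getCapabilities JSON; verify against your own output):

# run on the ppc64le host
vdsm-client Host getCapabilities | grep -o '"pseries[^"]*"' | sort -u    # machine types the host offers
vdsm-client Host getCapabilities | grep -o 'model_POWER9\|powernv'       # the CPU flags the engine complained about

If pseries-rhel8.2.0 is missing from the first list, or the two flags are missing from the second, the problem is on the host/VDSM side rather than in the cluster configuration.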
I’ve done the simplest test possible, created a simple VM without even touching the advanced settings, like in this photo: <PastedGraphic-9.png> And it won’t boot: VM ppc64le-dc44 is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-08-28T18:22:15.390930Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=4096: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead 2020-08-28T18:22:15.412920Z qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off. 8/28/203:22:17 PM So, pseries-rhel8.2.0 does not appears to be working, if I change it to pseries-7.6.0 (without SXXM) this happens: VM ppc64le-dc44 is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0’. But I’ve noted if I wait something like 30 to 60 seconds, the VM will eventually boot, and yeah it booted when I was writing this message: VM ppc64le-dc44 started on Host rhvpower.local.versatushpc.com.br<http://rhvpower.local.versatushpc.com.br/> Regarding my hardware, it’s an AC922 from IBM. It has the latest firmware upgrades in place, for the whiterspoon part and for OpenBMC. There’s nothing else, that I’m aware of, that can be updated… I’ve never able to run pseries-7.6.0-sxxm in the 4.3.10. So… any ideias? Thank you again, the help that you guys are providing is amazing. Thanks, michal Thanks, Thanks, Thanks, michal Ideias? On 26 Aug 2020, at 15:04, Vinícius Ferrão <ferrao@versatushpc.com.br<mailto:ferrao@versatushpc.com.br>> wrote: What a strange thing is happening here: [root@power ~]# file /usr/bin/vdsm-client /usr/bin/vdsm-client: empty [root@power ~]# ls -l /usr/bin/vdsm-client -rwxr-xr-x. 1 root root 0 Jul 3 06:23 /usr/bin/vdsm-client A lot of files are just empty, I’ve tried reinstalling vdsm-client, it worked, but there’s other zeroed files: Transaction test succeeded. Running transaction Preparing : 1/1 Reinstalling : vdsm-client-4.40.22-1.el8ev.noarch 1/2 Cleanup : vdsm-client-4.40.22-1.el8ev.noarch 2/2 Running scriptlet: vdsm-client-4.40.22-1.el8ev.noarch 2/2 /sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked. 
/sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked. /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked. /sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked. /sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked. /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not checked. Verifying : vdsm-client-4.40.22-1.el8ev.noarch 1/2 Verifying : vdsm-client-4.40.22-1.el8ev.noarch 2/2 Installed products updated. Reinstalled: vdsm-client-4.40.22-1.el8ev.noarch I’ve never seen something like this. I’ve already reinstalled the host from the ground and the same thing happens. On 26 Aug 2020, at 14:28, Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Hello Arik, This is probably the issue. Output totally empty: [root@power ~]# vdsm-client Host getCapabilities [root@power ~]# Here are the packages installed on the machine: (grepped ovirt and vdsm on rpm -qa) ovirt-imageio-daemon-2.0.8-1.el8ev.ppc64le ovirt-imageio-client-2.0.8-1.el8ev.ppc64le ovirt-host-4.4.1-4.el8ev.ppc64le ovirt-vmconsole-host-1.0.8-1.el8ev.noarch ovirt-host-dependencies-4.4.1-4.el8ev.ppc64le ovirt-imageio-common-2.0.8-1.el8ev.ppc64le ovirt-vmconsole-1.0.8-1.el8ev.noarch vdsm-hook-vmfex-dev-4.40.22-1.el8ev.noarch vdsm-hook-fcoe-4.40.22-1.el8ev.noarch vdsm-hook-ethtool-options-4.40.22-1.el8ev.noarch vdsm-hook-openstacknet-4.40.22-1.el8ev.noarch vdsm-common-4.40.22-1.el8ev.noarch vdsm-python-4.40.22-1.el8ev.noarch vdsm-jsonrpc-4.40.22-1.el8ev.noarch vdsm-api-4.40.22-1.el8ev.noarch vdsm-yajsonrpc-4.40.22-1.el8ev.noarch vdsm-4.40.22-1.el8ev.ppc64le vdsm-network-4.40.22-1.el8ev.ppc64le vdsm-http-4.40.22-1.el8ev.noarch vdsm-client-4.40.22-1.el8ev.noarch vdsm-hook-vhostmd-4.40.22-1.el8ev.noarch Any ideias to try? Thanks. On 26 Aug 2020, at 05:09, Arik Hadas <ahadas@redhat.com<mailto:ahadas@redhat.com>> wrote: On Mon, Aug 24, 2020 at 1:30 AM Vinícius Ferrão via Users <users@ovirt.org<mailto:users@ovirt.org>> wrote: Hello, I was using oVirt 4.3.10 with IBM AC922 (POWER9 / ppc64le) without any issues. 
Since I’ve moved to 4.4.1 I can’t add the AC922 machine to the engine anymore, it complains with the following error: The host CPU does not match the Cluster CPU type and is running in degraded mode. It is missing the following CPU flags: model_POWER9, powernv. Any ideia of what’s may be happening? The engine runs on x86_64, and I was using this way on 4.3.10. Machine info: timebase : 512000000 platform : PowerNV model : 8335-GTH machine : PowerNV 8335-GTH firmware : OPAL MMU : Radix Can you please provide the output of 'vdsm-client Host getCapabilities' on that host? Thanks,
_______________________________________________ Users mailing list -- users@ovirt.org<mailto:users@ovirt.org> To unsubscribe send an email to users-leave@ovirt.org<mailto:users-leave@ovirt.org> Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/FNNAUKOD2XFU2K...
participants (3):
- Arik Hadas
- Michal Skrivanek
- Vinícius Ferrão