
Hi,
We are deploying the hosted engine on oVirt-Node-4.2.3.1 using the command "hosted-engine --deploy". After the answers are provided, it runs the Ansible script and hits an error when creating the glusterfs storage domain. A screenshot of the error is attached. Please help.

On Tue, Jul 3, 2018 at 3:28 PM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Adding Sahina.
Please check/share relevant logs from the host. Thanks.
Best regards,
-- Didi

It looks like a problem accessing the engine gluster volume. Can you provide the logs from /var/log/glusterfs/rhev-data*engine.log as well as the vdsm.log from the host?
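
For reference, a minimal sketch of collecting the logs Sahina asks for (the exact mount-log file name depends on the volume path, so the glob below is an assumption):

    # gluster mount log for the engine volume (file name pattern assumed)
    ls /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*engine*.log
    # vdsm log on the host (standard location)
    less /var/log/vdsm/vdsm.log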

2018-07-03 14:28 GMT+02:00 Sakhi Hadebe <sakhi@sanren.ac.za>:
Hi, any reason for using the command line instead of the Cockpit web UI?
--
Sandro Bonazzola
Manager, Software Engineering, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com

On Wed, Jul 11, 2018 at 9:33 AM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi,
Below are the versions of the packages installed. Please find the logs attached.

QEMU:
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
libvirt-daemon-driver-qemu-3.9.0-14.el7_5.6.x86_64
qemu-img-ev-2.10.0-21.el7_5.4.1.x86_64
qemu-kvm-ev-2.10.0-21.el7_5.4.1.x86_64
qemu-kvm-common-ev-2.10.0-21.el7_5.4.1.x86_64

Libvirt installed packages:
libvirt-daemon-driver-storage-disk-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-config-nwfilter-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-storage-iscsi-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-network-3.9.0-14.el7_5.6.x86_64
libvirt-libs-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-secret-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-storage-core-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-storage-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-qemu-3.9.0-14.el7_5.6.x86_64
libvirt-3.9.0-14.el7_5.6.x86_64
libvirt-python-3.9.0-1.el7.x86_64
libvirt-daemon-driver-nodedev-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-storage-scsi-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-config-network-3.9.0-14.el7_5.6.x86_64
libvirt-client-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-kvm-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-storage-logical-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-interface-3.9.0-14.el7_5.6.x86_64
libvirt-lock-sanlock-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-storage-mpath-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-lxc-3.9.0-14.el7_5.6.x86_64
libvirt-daemon-driver-nwfilter-3.9.0-14.el7_5.6.x86_64

Virt-manager:
virt-manager-common-1.4.3-3.el7.noarch

oVirt:
[root@localhost network-scripts]# rpm -qa | grep ovirt
ovirt-setup-lib-1.1.4-1.el7.centos.noarch
cockpit-ovirt-dashboard-0.11.28-1.el7.noarch
ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
ovirt-host-dependencies-4.2.3-1.el7.x86_64
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
ovirt-host-4.2.3-1.el7.x86_64
python-ovirt-engine-sdk4-4.2.7-2.el7.x86_64
ovirt-host-deploy-1.7.4-1.el7.noarch
cockpit-machines-ovirt-169-1.el7.noarch
ovirt-hosted-engine-ha-2.2.14-1.el7.noarch
ovirt-vmconsole-1.0.5-4.el7.centos.noarch
ovirt-provider-ovn-driver-1.2.11-1.el7.noarch
ovirt-engine-appliance-4.2-20180626.1.el7.noarch
ovirt-release42-4.2.4-1.el7.noarch
ovirt-hosted-engine-setup-2.2.22.1-1.el7.noarch
On Wed, Jul 11, 2018 at 6:48 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Jul 10, 2018 at 11:32 PM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi,
I did not select any CPU architecture. It doesn't give me the option to select one. It only asks for the number of virtual CPUs and the memory for the engine VM.
Looking at the documentation for installing ovirt-release36.rpm, it does allow you to select the CPU, but not when installing ovirt-release42.rpm.
On Tuesday, July 10, 2018, Alastair Neil <ajneil.tech@gmail.com> wrote:
What did you select as your CPU architecture when you created the cluster? It looks like the VM is trying to use a CPU type of "Custom". How many nodes are in your cluster? I suggest you specify the lowest common denominator of the nodes' CPU architectures (e.g. Sandy Bridge) as the CPU architecture of the cluster.
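
As a hedged aside: one way to confirm what the host's hypervisor actually supports is virsh domcapabilities (assuming libvirt is up on the host); the error quoted below is exactly about the 'custom' CPU mode being unsupported:

    # report guest capabilities for a KVM x86_64 domain on this host
    virsh domcapabilities --virttype kvm --arch x86_64
    # in the output, check the <cpu> section for: <mode name='custom' supported='yes'>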
On Tue, 10 Jul 2018 at 12:01, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi,
I have just re-installed CentOS 7 on 3 servers and have configured gluster volumes following this documentation: https://www.ovirt.org/blog/2016/03/up-and-running-with-ovirt-3-6/, but I have installed the
http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
package. hosted-engine --deploy is failing with this error:
"rhel7", "--virt-type", "kvm", "--memory", "16384", "--vcpus", "4", "--network", "network=default,mac=00:16:3e:09:5e:5d,model=virtio", "--disk", "/var/tmp/localvm0nnJH9/images/eacac30d-0304-4c77-8753-6965e 4b8c2e7/d494577e-027a-4209-895b-6132e6fc6b9a", "--import", "--disk", "path=/var/tmp/localvm0nnJH9/seed.iso,device=cdrom", "--noautoconsole", "--rng", "/dev/random", "--graphics", "vnc", "--video", "vga", "--sound", "none", "--controller", "usb,model=none", "--memballoon", "none", "--boot", "hd,menu=off", "--clock", "kvmclock_present=yes"], "delta": "0:00:00.979003", "end": "2018-07-10 17:55:11.308555", "msg": "non-zero return code", "rc": 1, "start": "2018-07-10 17:55:10.329552", "stderr": "ERROR unsupported configuration: CPU mode 'custom' for x86_64 kvm domain on x86_64 host is not supported by hypervisor\nDomain installation does not appear to have been successful.\nIf it was, you can restart your domain by running:\n virsh --connect qemu:///system start HostedEngineLocal\notherwise, please restart your installation.", "stderr_lines": ["ERROR unsupported configuration: CPU mode 'custom' for x86_64 kvm domain on x86_64 host is not supported by hypervisor", "Domain installation does not appear to have been successful.", "If it was, you can restart your domain by running:", " virsh --connect qemu:///system start HostedEngineLocal", "otherwise, please restart your installation."], "stdout": "\nStarting install...", "stdout_lines": ["", "Starting install..."]}
This seems to be in the phase where we create a local VM for the engine. We do this with plain virt-install, nothing fancy. Searching the net for "unsupported configuration: CPU mode 'custom'" finds other relevant reports; you might want to check them. You can see the command in bootstrap_local_vm.yml.
Please check/share versions of relevant packages (libvirt*, qemu*, etc) and relevant logs (libvirt).
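
A sketch of gathering that information (standard paths and commands, adjust as needed):

    # installed libvirt/qemu package versions
    rpm -qa 'libvirt*' 'qemu*'
    # libvirt daemon log for the current boot
    journalctl -u libvirtd -b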
Also updating the subject line and adding Simone.
Best regards,
-- Didi
--
Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR
Tel: +27 12 841 2308  Fax: +27 12 841 4223  Cell: +27 71 331 9622  Email: sakhi@sanren.ac.za

Hi,
I have managed to fix the error by enabling DMA Virtualisation in the BIOS. I am now hit with a new error: it's failing to add a glusterfs storage domain:
[ INFO ] TASK [Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Problem while trying to mount target]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."}
Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
Attached are vdsm and engine log files.

Is glusterd running on the server goku.sanren.**? There's an error:
Failed to get volume info: Command execution failed
error: Connection failed. Please check if gluster daemon is operational
Please check the volume status using "gluster volume status engine", and if all looks OK, attach the mount logs from /var/log/glusterfs.
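
A minimal sketch of those checks on the gluster server, using the commands mentioned above (the volume name "engine" is taken from this thread):

    # is the gluster daemon running?
    systemctl status glusterd
    # is the "engine" volume started, with all bricks online?
    gluster volume status engine
    gluster volume info engine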

Hi Sahina,
Yes, the glusterd daemon was not running. I have started it and am now able to add a glusterfs storage domain. Thank you so much for your help.
Oops! I allocated 50GiB for this storage domain and it requires 60GiB.
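
Since glusterd turned out not to be running, a hedged follow-up so this doesn't recur after a reboot (plain systemd usage, nothing oVirt-specific):

    # start glusterd now and have it start at boot
    systemctl start glusterd
    systemctl enable glusterd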

Thank you all for your help. I have managed to deploy the engine successfully. It was quite a lesson.

Hi,
I am sorry to bother you again. I am trying to deploy an oVirt engine for oVirtNode-4.2.5.1, and I get the same error I encountered before:
[ INFO ] TASK [Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Problem while trying to mount target]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."}
Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
The glusterd daemon is running.
During the deployment of the engine it sets the engine entry in the /etc/hosts file with an IP address of 192.168.124.*, which it gets from the virbr0 bridge interface. I stopped the bridge and deleted it, but it still gives the same error. I am not sure what causes it to use that interface. Please help!
But I gave the engine an IP of 192.168.1.10, the same subnet as my gateway and my ovirtmgmt bridge. Attached is the ifconfig output of my node, plus engine.log and vdsm.log.
Your assistance is always appreciated.
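
For what it's worth, the 192.168.124.* address points at a libvirt NAT network: the bootstrap engine VM is started on "network=default" (see the virt-install command earlier in this thread), and virbr0 is the bridge behind that network. A sketch for inspecting it, assuming virsh is available on the host:

    # list libvirt networks and dump the default network's definition (bridge name, IP range)
    virsh net-list --all
    virsh net-dumpxml default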

On Wed, Aug 29, 2018 at 8:39 PM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
From the attached vdsm log:
mounting 172.16.4.18:/engine at /rhev/data-center/mnt/glusterSD/172.16.4.18:_engine (mount:204)
2018-08-29 16:47:28,846+0200 ERROR (jsonrpc/3) [storage.HSM] Could not connect to storageServer (hsm:2398)
Can you try to see if you are able to mount 172.16.4.18:/engine on the server you're deploying Hosted Engine on, using "mount -t glusterfs 172.16.4.18:/engine /mnt/test"?
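
A sketch of that manual mount test, with a cleanup step added (mount point as in the suggestion above):

    mkdir -p /mnt/test
    mount -t glusterfs 172.16.4.18:/engine /mnt/test
    # if the mount succeeds, inspect it and clean up
    df -h /mnt/test
    umount /mnt/test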
During the deployment of the engine it sets the engine entry in the /etc/hosts file with the IP Address of 192.168.124.* which it gets form the virbr0 bridge interface. I stopped the bridge and deleted it, but still giving the same error. Not sure what causes it to use that interface. Please help!
But I give the engine an IP of 192.168.1.10 same subnet as my gateway and my ovirtmgmt bridge. Attached is the ifconfig output of my Node, engine.log and vdsm.log.
Your assistance is always appreciated
On Wed, Jul 11, 2018 at 11:47 AM, Sahina Bose <sabose@redhat.com> wrote:
Is glusterd running on the server: goku.sanren.** There's an error Failed to get volume info: Command execution failed error: Connection failed. Please check if gluster daemon is operational
Please check the volume status using "gluster volume status engine"
and if all looks ok, attach the mount logs from /var/log/glusterfs
On Wed, Jul 11, 2018 at 1:57 PM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi,
I have managed to fix the error by enabling the DMA Virtualisation in BIOS. I am now hit with a new error: It's failing to add a glusterfs storage domain:
[ INFO ] TASK [Add glusterfs storage domain] [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Problem while trying to mount target]". HTTP response code is 400. [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."} Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
Attached are vdsm and engine log files.
On Wed, Jul 11, 2018 at 9:57 AM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
On Wed, Jul 11, 2018 at 9:33 AM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi,
Below are the versions of packages installed. Please find the logs attached. Qemu: ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch libvirt-daemon-driver-qemu-3.9.0-14.el7_5.6.x86_64 qemu-img-ev-2.10.0-21.el7_5.4.1.x86_64 qemu-kvm-ev-2.10.0-21.el7_5.4.1.x86_64 qemu-kvm-common-ev-2.10.0-21.el7_5.4.1.x86_64
Libvirt installed packages: libvirt-daemon-driver-storage-disk-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-config-nwfilter-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-storage-iscsi-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-network-3.9.0-14.el7_5.6.x86_64 libvirt-libs-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-secret-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-storage-core-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-storage-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-qemu-3.9.0-14.el7_5.6.x86_64 libvirt-3.9.0-14.el7_5.6.x86_64 libvirt-python-3.9.0-1.el7.x86_64 libvirt-daemon-driver-nodedev-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-storage-scsi-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-config-network-3.9.0-14.el7_5.6.x86_64 libvirt-client-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-kvm-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-storage-logical-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-interface-3.9.0-14.el7_5.6.x86_64 libvirt-lock-sanlock-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-storage-mpath-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-lxc-3.9.0-14.el7_5.6.x86_64 libvirt-daemon-driver-nwfilter-3.9.0-14.el7_5.6.x86_64
Virt-manager: virt-manager-common-1.4.3-3.el7.noarch
oVirt: [root@localhost network-scripts]# rpm -qa | grep ovirt ovirt-setup-lib-1.1.4-1.el7.centos.noarch cockpit-ovirt-dashboard-0.11.28-1.el7.noarch ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch ovirt-host-dependencies-4.2.3-1.el7.x86_64 ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch ovirt-host-4.2.3-1.el7.x86_64 python-ovirt-engine-sdk4-4.2.7-2.el7.x86_64 ovirt-host-deploy-1.7.4-1.el7.noarch cockpit-machines-ovirt-169-1.el7.noarch ovirt-hosted-engine-ha-2.2.14-1.el7.noarch ovirt-vmconsole-1.0.5-4.el7.centos.noarch ovirt-provider-ovn-driver-1.2.11-1.el7.noarch ovirt-engine-appliance-4.2-20180626.1.el7.noarch ovirt-release42-4.2.4-1.el7.noarch ovirt-hosted-engine-setup-2.2.22.1-1.el7.noarch
On Wed, Jul 11, 2018 at 6:48 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Jul 10, 2018 at 11:32 PM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi,

I did not select any CPU architecture. It doesn't give me the option to select one. It only states the number of virtual CPUs and the memory for the engine VM.

Looking at the documentation for installing ovirt-release36.rpm, it does allow you to select the CPU, but not when installing ovirt-release42.rpm.

On Tuesday, July 10, 2018, Alastair Neil <ajneil.tech@gmail.com> wrote:

What did you select as your CPU architecture when you created the cluster? It looks like the VM is trying to use a CPU type of "Custom". How many nodes are in your cluster? I suggest you specify the lowest common denominator of CPU architecture (e.g. Sandybridge) of the nodes as the CPU architecture of the cluster.

On Tue, 10 Jul 2018 at 12:01, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:

Hi,

I have just re-installed CentOS 7 on 3 servers and configured gluster volumes following this documentation: https://www.ovirt.org/blog/2016/03/up-and-running-with-ovirt-3-6/, but I have installed the http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm package. "hosted-engine --deploy" is failing with this error:

"rhel7", "--virt-type", "kvm", "--memory", "16384", "--vcpus", "4", "--network", "network=default,mac=00:16:3e:09:5e:5d,model=virtio", "--disk", "/var/tmp/localvm0nnJH9/images/eacac30d-0304-4c77-8753-6965e4b8c2e7/d494577e-027a-4209-895b-6132e6fc6b9a", "--import", "--disk", "path=/var/tmp/localvm0nnJH9/seed.iso,device=cdrom", "--noautoconsole", "--rng", "/dev/random", "--graphics", "vnc", "--video", "vga", "--sound", "none", "--controller", "usb,model=none", "--memballoon", "none", "--boot", "hd,menu=off", "--clock", "kvmclock_present=yes"], "delta": "0:00:00.979003", "end": "2018-07-10 17:55:11.308555", "msg": "non-zero return code", "rc": 1, "start": "2018-07-10 17:55:10.329552", "stderr": "ERROR unsupported configuration: CPU mode 'custom' for x86_64 kvm domain on x86_64 host is not supported by hypervisor\nDomain installation does not appear to have been successful.\nIf it was, you can restart your domain by running:\n virsh --connect qemu:///system start HostedEngineLocal\notherwise, please restart your installation.", "stderr_lines": ["ERROR unsupported configuration: CPU mode 'custom' for x86_64 kvm domain on x86_64 host is not supported by hypervisor", "Domain installation does not appear to have been successful.", "If it was, you can restart your domain by running:", " virsh --connect qemu:///system start HostedEngineLocal", "otherwise, please restart your installation."], "stdout": "\nStarting install...", "stdout_lines": ["", "Starting install..."]}

This seems to be in the phase where we create a local VM for the engine. We do this with plain virt-install, nothing fancy. Searching the net for "unsupported configuration: CPU mode 'custom'" finds other relevant reports; you might want to check them. You can see the command in bootstrap_local_vm.yml.
Please check/share versions of relevant packages (libvirt*, qemu*, etc) and relevant logs (libvirt).
Also updating the subject line and adding Simone.
Best regards, -- Didi
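As a quick sanity check for this class of failure, it can help to confirm that the host actually exposes hardware virtualization before retrying the deploy. A generic sketch, not specific to this thread's hosts:

# The CPU should advertise VT-x (vmx) or AMD-V (svm); expect a non-zero count
egrep -c '(vmx|svm)' /proc/cpuinfo
# The KVM kernel modules should be loaded
lsmod | grep kvm
# libvirt's built-in host validation (ships with libvirt)
virt-host-validate qemu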
--
Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR
Tel: +27 12 841 2308 | Fax: +27 12 841 4223 | Cell: +27 71 331 9622 | Email: sakhi@sanren.ac.za

Hi Sahina,

I am sorry, I can't reproduce the error or access the logs, since I did a fresh install on the nodes. However, now I can't even get that far, because the engine deployment fails to bring the host up:

[ INFO ] TASK [Wait for the host to be up]
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": [{"address": "goku.sanren.ac.za", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "sanren.ac.za", "subject": "O=sanren.ac.za,CN=goku.sanren.ac.za"}, "cluster": {"href": "/ovirt-engine/api/clusters/1ca368cc-b052-11e8-b7de-00163e008187", "id": "1ca368cc-b052-11e8-b7de-00163e008187"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/1c575995-70b1-43f7-b348-4a9788e070cd", "id": "1c575995-70b1-43f7-b348-4a9788e070cd", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "goku.sanren.ac.za", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:B3/PDH551EFid93fm6PoRryi6/cXuVE8yNgiiiROh84", "port": 22}, "statistics": [], "status": "install_failed", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false}

Please help.

On Mon, Sep 3, 2018 at 1:34 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, Aug 29, 2018 at 8:39 PM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi,
I am sorry to bother you again.
I am trying to deploy an oVirt engine for oVirtNode-4.2.5.1. I get the same error I encountered before:
[ INFO ] TASK [Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Problem while trying to mount target]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."}
Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
The glusterd daemon is running.
mounting 172.16.4.18:/engine at /rhev/data-center/mnt/glusterSD/172.16.4.18:_engine (mount:204)
2018-08-29 16:47:28,846+0200 ERROR (jsonrpc/3) [storage.HSM] Could not connect to storageServer (hsm:2398)
Can you try to see if you are able to mount 172.16.4.18:/engine on the server where you're deploying the Hosted Engine, using "mount -t glusterfs 172.16.4.18:/engine /mnt/test"?
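Spelled out, that manual mount test might look like the following; the /mnt/test mount point is arbitrary and has to exist first:

# Create a throwaway mount point and try a plain FUSE mount of the engine volume
mkdir -p /mnt/test
mount -t glusterfs 172.16.4.18:/engine /mnt/test
# If it mounts, verify read/write access, then clean up
touch /mnt/test/probe && rm /mnt/test/probe
umount /mnt/test
# If it fails, the FUSE client log (named after the mount point) usually says why
less /var/log/glusterfs/mnt-test.log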
During the deployment of the engine it sets the engine entry in the /etc/hosts file with an IP address of 192.168.124.*, which it gets from the virbr0 bridge interface. I stopped the bridge and deleted it, but it still gives the same error. I am not sure what causes it to use that interface. Please help!

But I give the engine an IP of 192.168.1.10, on the same subnet as my gateway and my ovirtmgmt bridge. Attached is the ifconfig output of my node, plus engine.log and vdsm.log.
Your assistance is always appreciated
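For context: during a node-zero deployment the bootstrap engine VM runs on a libvirt NAT network (normally the one named "default", backed by virbr0), and the installer adds a temporary engine entry to /etc/hosts; as far as I understand the flow, that entry should be replaced with the final address once the VM is moved to shared storage. A few commands to inspect (not necessarily remove) that network, as an assumption-laden sketch:

# List libvirt networks and their state
virsh --connect qemu:///system net-list --all
# Show the NAT subnet handed out on virbr0
virsh --connect qemu:///system net-dumpxml default
# See which engine entry the installer currently has in place
grep -i engine /etc/hosts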

On Tue, Sep 4, 2018 at 6:07 PM Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
"status": "install_failed"
You have to check the host-deploy logs to get a detailed error message.
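For anyone following along, the host-deploy logs live on the engine VM, and hosted-engine-setup also pulls a copy onto the host; the paths below are the usual defaults, so treat them as assumptions:

# On the engine VM: one log per host-deploy run
ls -lt /var/log/ovirt-engine/host-deploy/
# On the deploying host: logs collected by hosted-engine-setup
ls -lt /var/log/ovirt-hosted-engine-setup/
# Pull the first real failure out of the newest host-deploy log
grep -i error /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-*.log | head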

Hi All,

The host-deploy logs are showing the errors below:

[root@garlic engine-logs-2018-09-05T08:48:22Z]# cat /var/log/ovirt-hosted-engine-setup/engine-logs-2018-09-05T08\:34\:55Z/ovirt-engine/host-deploy/ovirt-host-deploy-20180905103605-garlic.sanren.ac.za-543b536b.log | grep -i error
2018-09-05 10:35:46,909+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False'
2018-09-05 10:35:47,116 [ERROR] __main__.py:8011:MainThread @identity.py:145 - Reload of consumer identity cert /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such file or directory: '/etc/pki/consumer/key.pem'
2018-09-05 10:35:47,383+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False'
2018-09-05 10:35:47,593 [ERROR] __main__.py:8011:MainThread @identity.py:145 - Reload of consumer identity cert /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such file or directory: '/etc/pki/consumer/key.pem'
2018-09-05 10:35:48,245+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False'
Job for ovirt-imageio-daemon.service failed because the control process exited with error code. See "systemctl status ovirt-imageio-daemon.service" and "journalctl -xe" for details.
RuntimeError: Failed to start service 'ovirt-imageio-daemon'
2018-09-05 10:36:05,098+0200 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed to start service 'ovirt-imageio-daemon'
2018-09-05 10:36:05,099+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True'
2018-09-05 10:36:05,099+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError("Failed to start service 'ovirt-imageio-daemon'",), <traceback object at 0x7f4de25ff320>)]'
2018-09-05 10:36:05,106+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True'
2018-09-05 10:36:05,106+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError("Failed to start service 'ovirt-imageio-daemon'",), <traceback object at 0x7f4de25ff320>)]'

I couldn't find anything helpful on the internet.
--
Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR
Tel: +27 12 841 2308 | Fax: +27 12 841 4223 | Cell: +27 71 331 9622 | Email: sakhi@sanren.ac.za

On Wed, Sep 5, 2018 at 11:10 AM Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Is there anything relevant in the output of "systemctl status ovirt-imageio-daemon.service" and "journalctl -xe -u ovirt-imageio-daemon.service"?
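Concretely, that post-mortem might look like this; the daemon's own log path is assumed to be the packaged default, but it is worth checking too:

# Unit state plus the last few log lines
systemctl status ovirt-imageio-daemon.service
# Full journal for the unit, with explanatory context
journalctl -xe -u ovirt-imageio-daemon.service
# The daemon's own log file
tail -n 50 /var/log/ovirt-imageio-daemon/daemon.log
# Once the cause is fixed, try starting it by hand
systemctl start ovirt-imageio-daemon.service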
On Tue, Sep 4, 2018 at 6:46 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Tue, Sep 4, 2018 at 6:07 PM Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi Sahina,
I am sorry I can't reproduce the error nor access the logs since I did a fresh installed pn nodes. However now I can't even react that far because the engine deployment fails to start the host up:
[ INFO ] TASK [Wait for the host to be up] [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": [{"address": "goku.sanren.ac.za", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": " sanren.ac.za", "subject": "O=sanren.ac.za,CN=goku.sanren.ac.za"}, "cluster": {"href": "/ovirt-engine/api/clusters/1ca368cc-b052-11e8-b7de-00163e008187", "id": "1ca368cc-b052-11e8-b7de-00163e008187"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/ 1c575995-70b1-43f7-b348-4a9788e070cd", "id": "1c575995-70b1-43f7-b348-4a9788e070cd", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "goku.sanren.ac.za", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:B3/PDH551EFid93fm6PoRryi6/cXuVE8yNgiiiROh84", "port": 22}, "statistics": [], "status": "install_failed", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false}
"status": "install_failed"
You have to check host-deploy logs to get a details error message.
Please help.
On Mon, Sep 3, 2018 at 1:34 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, Aug 29, 2018 at 8:39 PM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi,
I am sorry to bother you again.
I am trying to deploy an oVirt engine for oVirtNode-4.2.5.1. I get the same error I encountered before:
[ INFO ] TASK [Add glusterfs storage domain] [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Problem while trying to mount target]". HTTP response code is 400. [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."} Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
The glusterd daemon is running.
mounting 172.16.4.18:/engine at /rhev/data-center/mnt/glusterSD/172.16.4.18:_engine (mount:204) 2018-08-29 16:47:28,846+0200 ERROR (jsonrpc/3) [storage.HSM] Could not connect to storageServer (hsm:2398)
Can you try to see if you are able to mount 172.16.4.18:/engine on the server you're deploying Hosted Engine using "mount -t glusterfs 172.16.4.18:/engine /mnt/test"
During the deployment of the engine it sets the engine entry in the /etc/hosts file with the IP Address of 192.168.124.* which it gets form the virbr0 bridge interface. I stopped the bridge and deleted it, but still giving the same error. Not sure what causes it to use that interface. Please help!
But I give the engine an IP of 192.168.1.10 same subnet as my gateway and my ovirtmgmt bridge. Attached is the ifconfig output of my Node, engine.log and vdsm.log.
Your assistance is always appreciated
On Wed, Jul 11, 2018 at 11:47 AM, Sahina Bose <sabose@redhat.com> wrote:
Is glusterd running on the server: goku.sanren.** There's an error Failed to get volume info: Command execution failed error: Connection failed. Please check if gluster daemon is operational
Please check the volume status using "gluster volume status engine"
and if all looks ok, attach the mount logs from /var/log/glusterfs
On Wed, Jul 11, 2018 at 1:57 PM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
> Hi,
>
> I have managed to fix the error by enabling DMA virtualisation in the
> BIOS. I am now hit with a new error: it's failing to add a glusterfs
> storage domain:
>
> [ INFO ] TASK [Add glusterfs storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[Problem while trying to mount target]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while
> trying to mount target]\". HTTP response code is 400."}
>           Please specify the storage you would like to use
> (glusterfs, iscsi, fc, nfs)[nfs]:
>
> Attached are vdsm and engine log files.
>
> On Wed, Jul 11, 2018 at 9:33 AM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
>> Hi,
>>
>> Below are the versions of packages installed. Please find the logs
>> attached.
>>
>> Virt-manager:
>> virt-manager-common-1.4.3-3.el7.noarch
>>
>> oVirt:
>> [root@localhost network-scripts]# rpm -qa | grep ovirt
>> ovirt-setup-lib-1.1.4-1.el7.centos.noarch
>> cockpit-ovirt-dashboard-0.11.28-1.el7.noarch
>> ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
>> ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
>> ovirt-host-dependencies-4.2.3-1.el7.x86_64
>> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
>> ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
>> ovirt-host-4.2.3-1.el7.x86_64
>> python-ovirt-engine-sdk4-4.2.7-2.el7.x86_64
>> ovirt-host-deploy-1.7.4-1.el7.noarch
>> cockpit-machines-ovirt-169-1.el7.noarch
>> ovirt-hosted-engine-ha-2.2.14-1.el7.noarch
>> ovirt-vmconsole-1.0.5-4.el7.centos.noarch
>> ovirt-provider-ovn-driver-1.2.11-1.el7.noarch
>> ovirt-engine-appliance-4.2-20180626.1.el7.noarch
>> ovirt-release42-4.2.4-1.el7.noarch
>> ovirt-hosted-engine-setup-2.2.22.1-1.el7.noarch
>>
>> On Wed, Jul 11, 2018 at 6:48 AM, Yedidyah Bar David <didi@redhat.com> wrote:
>>> On Tue, Jul 10, 2018 at 11:32 PM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
>>>> Hi,
>>>>
>>>> I did not select any CPU architecture. It doesn't give me the option
>>>> to select one. It only asks for the number of virtual CPUs and the
>>>> memory for the engine VM.
>>>>
>>>> Looking at the documentation for installing ovirt-release36.rpm, it
>>>> does allow you to select the CPU, but not when installing
>>>> ovirt-release42.rpm.
>>>>
>>>> On Tuesday, July 10, 2018, Alastair Neil <ajneil.tech@gmail.com> wrote:
>>>>> What did you select as your CPU architecture when you created the
>>>>> cluster? It looks like the VM is trying to use a CPU type of
>>>>> "Custom". How many nodes are in your cluster? I suggest you specify
>>>>> the lowest common denominator of CPU architecture (e.g. Sandy
>>>>> Bridge) of the nodes as the CPU architecture of the cluster.
>>>>>
>>>>> On Tue, 10 Jul 2018 at 12:01, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I have just re-installed CentOS 7 on 3 servers and configured
>>>>>> gluster volumes following this documentation:
>>>>>> https://www.ovirt.org/blog/2016/03/up-and-running-with-ovirt-3-6/,
>>>>>> but I have installed the
>>>>>> http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm package.
>>>>>> "hosted-engine --deploy" is failing with this error:
>>>>>>
>>>>>> "rhel7", "--virt-type", "kvm", "--memory", "16384", "--vcpus", "4",
>>>>>> "--network", "network=default,mac=00:16:3e:09:5e:5d,model=virtio",
>>>>>> "--disk", "/var/tmp/localvm0nnJH9/images/eacac30d-0304-4c77-8753-6965e4b8c2e7/d494577e-027a-4209-895b-6132e6fc6b9a",
>>>>>> "--import", "--disk", "path=/var/tmp/localvm0nnJH9/seed.iso,device=cdrom",
>>>>>> "--noautoconsole", "--rng", "/dev/random", "--graphics", "vnc",
>>>>>> "--video", "vga", "--sound", "none", "--controller", "usb,model=none",
>>>>>> "--memballoon", "none", "--boot", "hd,menu=off", "--clock",
>>>>>> "kvmclock_present=yes"], "delta": "0:00:00.979003", "end":
>>>>>> "2018-07-10 17:55:11.308555", "msg": "non-zero return code", "rc": 1,
>>>>>> "start": "2018-07-10 17:55:10.329552", "stderr": "ERROR unsupported
>>>>>> configuration: CPU mode 'custom' for x86_64 kvm domain on x86_64
>>>>>> host is not supported by hypervisor\nDomain installation does not
>>>>>> appear to have been successful.\nIf it was, you can restart your
>>>>>> domain by running:\n  virsh --connect qemu:///system start
>>>>>> HostedEngineLocal\notherwise, please restart your installation.",
>>>>>> "stderr_lines": ["ERROR unsupported configuration: CPU mode 'custom'
>>>>>> for x86_64 kvm domain on x86_64 host is not supported by
>>>>>> hypervisor", "Domain installation does not appear to have been
>>>>>> successful.", "If it was, you can restart your domain by running:",
>>>>>> "  virsh --connect qemu:///system start HostedEngineLocal",
>>>>>> "otherwise, please restart your installation."], "stdout":
>>>>>> "\nStarting install...", "stdout_lines": ["", "Starting
>>>>>> install..."]}
>>>
>>> This seems to be in the phase where we create a local vm for the
>>> engine. We do this with plain virt-install, nothing fancy. Searching
>>> the net for "unsupported configuration: CPU mode 'custom'" finds other
>>> relevant reports, you might want to check them. You can see the
>>> command in bootstrap_local_vm.yml.
>>>
>>> Please check/share versions of relevant packages (libvirt*, qemu*,
>>> etc) and relevant logs (libvirt).
>>>
>>> Also updating the subject line and adding Simone.
>>>
>>> Best regards,
>>> --
>>> Didi
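For reference, a quick way to find that playbook and the exact virt-install invocation it runs (a sketch; the path below is the usual install location for ovirt-hosted-engine-setup, so verify it for your version):

  # locate the playbook shipped by the package
  rpm -ql ovirt-hosted-engine-setup | grep bootstrap_local_vm
  # inspect the virt-install command it builds
  grep -n -B2 -A12 "virt-install" /usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.yml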
--
Regards,
Sakhi Hadebe

Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR

Tel: +27 12 841 2308
Fax: +27 12 841 4223
Cell: +27 71 331 9622
Email: sakhi@sanren.ac.za

[root@glustermount ~]# systemctl status ovirt-imageio-daemon.service -l
● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; disabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16 SAST; 19h ago
Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 2min 9s ago
           ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was not met
  Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited, status=1/FAILURE)
 Main PID: 11345 (code=exited, status=1/FAILURE)

Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon.
Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service holdoff time over, scheduling restart.
Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too quickly for ovirt-imageio-daemon.service
Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon.
Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed.

Output of:

On Wed, Sep 5, 2018 at 11:35 AM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Sep 5, 2018 at 11:10 AM Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi All,
The host deploy logs are showing the below errors:
[root@garlic engine-logs-2018-09-05T08:48:22Z]# cat /var/log/ovirt-hosted-engine-setup/engine-logs-2018-09-05T08\:34\:55Z/ovirt-engine/host-deploy/ovirt-host-deploy-20180905103605-garlic.sanren.ac.za-543b536b.log | grep -i error
2018-09-05 10:35:46,909+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False'
2018-09-05 10:35:47,116 [ERROR] __main__.py:8011:MainThread @identity.py:145 - Reload of consumer identity cert /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such file or directory: '/etc/pki/consumer/key.pem'
2018-09-05 10:35:47,383+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False'
2018-09-05 10:35:47,593 [ERROR] __main__.py:8011:MainThread @identity.py:145 - Reload of consumer identity cert /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such file or directory: '/etc/pki/consumer/key.pem'
2018-09-05 10:35:48,245+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False'
Job for ovirt-imageio-daemon.service failed because the control process exited with error code. See "systemctl status ovirt-imageio-daemon.service" and "journalctl -xe" for details.
RuntimeError: Failed to start service 'ovirt-imageio-daemon'
2018-09-05 10:36:05,098+0200 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed to start service 'ovirt-imageio-daemon'
2018-09-05 10:36:05,099+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True'
2018-09-05 10:36:05,099+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError("Failed to start service 'ovirt-imageio-daemon'",), <traceback object at 0x7f4de25ff320>)]'
2018-09-05 10:36:05,106+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True'
2018-09-05 10:36:05,106+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError("Failed to start service 'ovirt-imageio-daemon'",), <traceback object at 0x7f4de25ff320>)]'
I couldn't find anything helpful on the internet.
Anything relevant in the output of "systemctl status ovirt-imageio-daemon.service" and "journalctl -xe -u ovirt-imageio-daemon.service"?
On Tue, Sep 4, 2018 at 6:46 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Tue, Sep 4, 2018 at 6:07 PM Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi Sahina,
I am sorry, I can't reproduce the error or access the logs, since I did a fresh install on the nodes. However, now I can't even get that far, because the engine deployment fails to bring the host up:
[ INFO ] TASK [Wait for the host to be up]
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": [{"address": "goku.sanren.ac.za", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "sanren.ac.za", "subject": "O=sanren.ac.za,CN=goku.sanren.ac.za"}, "cluster": {"href": "/ovirt-engine/api/clusters/1ca368cc-b052-11e8-b7de-00163e008187", "id": "1ca368cc-b052-11e8-b7de-00163e008187"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/1c575995-70b1-43f7-b348-4a9788e070cd", "id": "1c575995-70b1-43f7-b348-4a9788e070cd", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "goku.sanren.ac.za", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:B3/PDH551EFid93fm6PoRryi6/cXuVE8yNgiiiROh84", "port": 22}, "statistics": [], "status": "install_failed", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false}
"status": "install_failed"
You have to check the host-deploy logs to get a detailed error message.
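To make that concrete, host-deploy logs are kept on the engine VM, one file per deployment attempt (standard 4.2 locations; a sketch, adjust the file name to your run):

  # on the engine VM
  ls -lt /var/log/ovirt-engine/host-deploy/
  # scan the logs for the first failure
  grep -i -n error /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-*.log | head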
Please help.
On Mon, Sep 3, 2018 at 1:34 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, Aug 29, 2018 at 8:39 PM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi,
I am sorry to bother you again.
I am trying to deploy an oVirt engine for oVirtNode-4.2.5.1. I get the same error I encountered before:
[ INFO ] TASK [Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Problem while trying to mount target]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."}
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
The glusterd daemon is running.
mounting 172.16.4.18:/engine at /rhev/data-center/mnt/glusterSD/172.16.4.18:_engine (mount:204)
2018-08-29 16:47:28,846+0200 ERROR (jsonrpc/3) [storage.HSM] Could not connect to storageServer (hsm:2398)
Can you try to see if you are able to mount 172.16.4.18:/engine on the server you're deploying Hosted Engine using "mount -t glusterfs 172.16.4.18:/engine /mnt/test"
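Spelled out, that manual test looks like this (a sketch; /mnt/test is just a scratch directory, and the client-side log file name is derived from the mount point):

  mkdir -p /mnt/test
  mount -t glusterfs 172.16.4.18:/engine /mnt/test
  # on failure, check the matching client log under /var/log/glusterfs/
  tail -n 50 /var/log/glusterfs/mnt-test.log
  umount /mnt/test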
During the deployment of the engine it sets the engine entry in the /etc/hosts file to an IP address in 192.168.124.*, which it gets from the virbr0 bridge interface. I stopped the bridge and deleted it, but it still gives the same error. I am not sure what causes it to use that interface. Please help!
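A 192.168.124.* address is typical of a libvirt NAT network. A quick way to confirm which subnet virbr0 is serving (a sketch, assuming virbr0 belongs to libvirt's "default" network, as it usually does):

  virsh -c qemu:///system net-list --all
  # look at the <ip> element and DHCP range behind virbr0
  virsh -c qemu:///system net-dumpxml default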
But I gave the engine an IP of 192.168.1.10, on the same subnet as my gateway and my ovirtmgmt bridge. Attached is the ifconfig output of my node, plus engine.log and vdsm.log.
Your assistance is always appreciated
On Wed, Jul 11, 2018 at 11:47 AM, Sahina Bose <sabose@redhat.com> wrote:
> Is glusterd running on the server: goku.sanren.**?
> There's an error:
>   Failed to get volume info: Command execution failed
>   error: Connection failed. Please check if gluster daemon is
>   operational
>
> Please check the volume status using "gluster volume status engine",
> and if all looks ok, attach the mount logs from /var/log/glusterfs.
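Those checks as commands, run on the gluster server (a sketch using the volume name from this thread):

  systemctl status glusterd         # is the daemon running?
  gluster peer status               # are all peers connected?
  gluster volume status engine      # are all bricks online?
  gluster volume info engine        # sanity-check the volume definition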

Sorry, I mistakenly sent the previous email. Below is the output:

[root@glustermount ~]# systemctl status ovirt-imageio-daemon.service -l
● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; disabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16 SAST; 19h ago
Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 2min 9s ago
           ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was not met
  Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited, status=1/FAILURE)
 Main PID: 11345 (code=exited, status=1/FAILURE)

Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon.
Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service holdoff time over, scheduling restart.
Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too quickly for ovirt-imageio-daemon.service
Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon.
Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed.

[root@glustermount ~]# journalctl -xe -u ovirt-imageio-daemon.service
Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/handlers.py",
Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: BaseRotatingHandler.__init__(self, filename, mode
Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/handlers.py",
Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: logging.FileHandler.__init__(self, filename, mode
Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/__init__.py",
Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: StreamHandler.__init__(self, self._open())
Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/__init__.py",
Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: stream = open(self.baseFilename, self.mode)
Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: IOError: [Errno 2] No such file or directory: '/v
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service: main process exited, code=exited, st
Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon.
-- Subject: Unit ovirt-imageio-daemon.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit ovirt-imageio-daemon.service has failed.
--
-- The result is failed.
Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service holdoff time over, scheduling restart
Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too quickly for ovirt-imageio-daemon.servic
Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon.
-- Subject: Unit ovirt-imageio-daemon.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit ovirt-imageio-daemon.service has failed.
--
-- The result is failed.
Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed.

Can you please check whether on your host you have /var/log/ovirt-imageio-daemon, and what its ownership and permissions are (it should be vdsm:kvm, mode 700)?
Can you please also report which version of ovirt-imageio-daemon you are using? We had a bug there, but it was fixed a long time ago.
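A sketch of those checks on the host (the expected owner and mode come from the message above; the mkdir/chown/chmod lines only apply if the directory is missing or wrongly set):

  rpm -q ovirt-imageio-daemon
  ls -ld /var/log/ovirt-imageio-daemon   # expected: drwx------ vdsm kvm
  # if missing or wrong, fix it and retry the service:
  mkdir -p /var/log/ovirt-imageio-daemon
  chown vdsm:kvm /var/log/ovirt-imageio-daemon
  chmod 700 /var/log/ovirt-imageio-daemon
  systemctl restart ovirt-imageio-daemon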
Sorry, I mistakenly send the email:
Below is the output of: [root@glustermount ~]# systemctl status ovirt-imageio-daemon.service -l ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; disabled; vendor preset: disabled) Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16 SAST; 19h ago Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 2min 9s ago ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was not met Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited, status=1/FAILURE) Main PID: 11345 (code=exited, status=1/FAILURE)
Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service holdoff time over, scheduling restart. Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too quickly for ovirt-imageio-daemon.service Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. [root@glustermount ~]# journalctl -xe -u ovirt-imageio-daemon.service Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/handlers.py", Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: BaseRotatingHandler.__init__(self, filename, mode Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/handlers.py", Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: logging.FileHandler.__init__(self, filename, mode Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/__init__.py", Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: StreamHandler.__init__(self, self._open()) Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/__init__.py", Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: stream = open(self.baseFilename, self.mode) Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: IOError: [Errno 2] No such file or directory: '/v Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service: main process exited, code=exited, st Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. -- Subject: Unit ovirt-imageio-daemon.service has failed -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit ovirt-imageio-daemon.service has failed. -- -- The result is failed. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service holdoff time over, scheduling restart Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too quickly for ovirt-imageio-daemon.servic Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. -- Subject: Unit ovirt-imageio-daemon.service has failed -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit ovirt-imageio-daemon.service has failed. -- -- The result is failed. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed.
On Wed, Sep 5, 2018 at 12:01 PM, Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
# systemctl status ovirt-imageio-daemon.service ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; disabled; vendor preset: disabled) Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16 SAST; 19h ago Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 1min 58s ago ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was not met Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited, status=1/FAILURE) Main PID: 11345 (code=exited, status=1/FAILURE)
Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service holdoff time over, scheduling ...art. Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too quickly for ovirt-imageio-daemon...vice Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. Hint: Some lines were ellipsized, use -l to show in full. [root@glustermount ~]# systemctl status ovirt-imageio-daemon.service -l ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; disabled; vendor preset: disabled) Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16 SAST; 19h ago Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 2min 9s ago ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was not met Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited, status=1/FAILURE) Main PID: 11345 (code=exited, status=1/FAILURE)
Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon.
Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service holdoff time over, scheduling restart.
Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too quickly for ovirt-imageio-daemon.service
Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon.
Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state.
Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed.
The output of both commands is pasted above.
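Two quick follow-ups that may help here (a sketch, nothing host-specific assumed):

# Show the journal without line truncation, so the path cut off after '/v
# in the earlier paste becomes visible:
journalctl -u ovirt-imageio-daemon.service -l --no-pager

# The "start condition failed" line above means systemd will not even
# attempt a start until vdsm's certificate exists; check whether it is there:
ls -l /etc/pki/vdsm/certs/vdsmcert.pem

As far as I know that certificate is normally put in place during host deployment, so it being absent on a host where deployment failed partway through is expected.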
On Wed, Sep 5, 2018 at 11:35 AM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Sep 5, 2018 at 11:10 AM Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi All,
The host-deploy logs show the errors below:
[root@garlic engine-logs-2018-09-05T08:48:22Z]# cat /var/log/ovirt-hosted-engine-setup/engine-logs-2018-09-05T08\:34\:55Z/ovirt-engine/host-deploy/ovirt-host-deploy-20180905103605-garlic.sanren.ac.za-543b536b.log | grep -i error
2018-09-05 10:35:46,909+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False'
2018-09-05 10:35:47,116 [ERROR] __main__.py:8011:MainThread @identity.py:145 - Reload of consumer identity cert /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such file or directory: '/etc/pki/consumer/key.pem'
2018-09-05 10:35:47,383+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False'
2018-09-05 10:35:47,593 [ERROR] __main__.py:8011:MainThread @identity.py:145 - Reload of consumer identity cert /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such file or directory: '/etc/pki/consumer/key.pem'
2018-09-05 10:35:48,245+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False'
Job for ovirt-imageio-daemon.service failed because the control process exited with error code. See "systemctl status ovirt-imageio-daemon.service" and "journalctl -xe" for details.
RuntimeError: Failed to start service 'ovirt-imageio-daemon'
2018-09-05 10:36:05,098+0200 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed to start service 'ovirt-imageio-daemon'
2018-09-05 10:36:05,099+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True'
2018-09-05 10:36:05,099+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError("Failed to start service 'ovirt-imageio-daemon'",), <traceback object at 0x7f4de25ff320>)]'
2018-09-05 10:36:05,106+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True'
2018-09-05 10:36:05,106+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError("Failed to start service 'ovirt-imageio-daemon'",), <traceback object at 0x7f4de25ff320>)]'
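Pulling a few lines of context around each hit gives more than a plain error grep; a sketch against the same host-deploy log as above (adjust the timestamped directory to match):

grep -n -B2 -A6 "imageio" /var/log/ovirt-hosted-engine-setup/engine-logs-*/ovirt-engine/host-deploy/ovirt-host-deploy-*.log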
I couldn't find anything helpful on the internet.
Is there anything relevant in the output of "systemctl status ovirt-imageio-daemon.service" and "journalctl -xe -u ovirt-imageio-daemon.service"?
On Tue, Sep 4, 2018 at 6:46 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Tue, Sep 4, 2018 at 6:07 PM Sakhi Hadebe <sakhi@sanren.ac.za> wrote:
Hi Sahina,
I am sorry, I can't reproduce the error or access the logs, since I did a fresh install on the nodes. However, now I can't even get that far, because the engine deployment fails to bring the host up:
[ INFO ] TASK [Wait for the host to be up] [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": [{"address": "goku.sanren.ac.za", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "sanren.ac.za", "subject": "O=sanren.ac.za,CN=goku.sanren.ac.za"}, "cluster": {"href": "/ovirt-engine/api/clusters/1ca368cc-b052-11e8-b7de-00163e008187", "id": "1ca368cc-b052-11e8-b7de-00163e008187"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/1c575995-70b1-43f7-b348-4a9788e070cd", "id": "1c575995-70b1-43f7-b348-4a9788e070cd", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "goku.sanren.ac.za", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:B3/PDH551EFid93fm6PoRryi6/cXuVE8yNgiiiROh84", "port": 22}, "statistics": [], "status": "install_failed", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false}
"status": "install_failed"
You have to check the host-deploy logs to get a detailed error message.
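For reference: on the engine VM the host-deploy logs live under /var/log/ovirt-engine/host-deploy/, and during a hosted-engine deployment they are also copied back to the deployment host. A sketch, with paths matching the log excerpt earlier on this page:

ls /var/log/ovirt-hosted-engine-setup/engine-logs-*/ovirt-engine/host-deploy/
grep -i error /var/log/ovirt-hosted-engine-setup/engine-logs-*/ovirt-engine/host-deploy/*.log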
Please help.
On Mon, Sep 3, 2018 at 1:34 PM, Sahina Bose <sabose@redhat.com> wrote:
>
>
> On Wed, Aug 29, 2018 at 8:39 PM, Sakhi Hadebe <sakhi@sanren.ac.za>
> wrote:
>
>> Hi,
>>
>> I am sorry to bother you again.
>>
>> I am trying to deploy an oVirt engine for oVirtNode-4.2.5.1. I get
>> the same error I encountered before:
>>
>> [ INFO ] TASK [Add glusterfs storage domain]
>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail
>> is "[Problem while trying to mount target]". HTTP response code is
>> 400.
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
>> "Fault reason is \"Operation Failed\". Fault detail is \"[Problem
>> while trying to mount target]\". HTTP response code is 400."}
>> Please specify the storage you would like to use
>> (glusterfs, iscsi, fc, nfs)[nfs]:
>>
>> The glusterd daemon is running.
>>
>
> mounting 172.16.4.18:/engine at
> /rhev/data-center/mnt/glusterSD/172.16.4.18:_engine (mount:204)
> 2018-08-29 16:47:28,846+0200 ERROR (jsonrpc/3) [storage.HSM] Could
> not connect to storageServer (hsm:2398)
>
> Can you try to see if you are able to mount 172.16.4.18:/engine on
> the server you're deploying Hosted Engine on, using "mount -t glusterfs
> 172.16.4.18:/engine /mnt/test"
>
>
>> During the deployment of the engine it sets the engine entry in the
>> /etc/hosts file with the IP address 192.168.124.*, which it gets from the
>> virbr0 bridge interface. I stopped the bridge and deleted it, but it
>> still gives the same error. Not sure what causes it to use that interface.
>> Please help!
>>
>> But I give the engine an IP of 192.168.1.10, the same subnet as my
>> gateway and my ovirtmgmt bridge. Attached is the ifconfig output of my
>> Node, engine.log and vdsm.log.
>>
>> Your assistance is always appreciated.
>>
>>
>> On Wed, Jul 11, 2018 at 11:47 AM, Sahina Bose <sabose@redhat.com>
>> wrote:
>>
>>> Is glusterd running on the server: goku.sanren.**
>>> There's an error
>>> Failed to get volume info: Command execution failed
>>> error: Connection failed. Please check if gluster daemon is
>>> operational
>>>
>>> Please check the volume status using "gluster volume status engine"
>>>
>>> and if all looks ok, attach the mount logs from /var/log/glusterfs
>>>
>>> On Wed, Jul 11, 2018 at 1:57 PM, Sakhi Hadebe <sakhi@sanren.ac.za>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I have managed to fix the error by enabling the DMA
>>>> Virtualisation in BIOS. I am now hit with a new error: it's failing to add
>>>> a glusterfs storage domain:
>>>>
>>>> [ INFO ] TASK [Add glusterfs storage domain]
>>>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail
>>>> is "[Problem while trying to mount target]". HTTP response code is 400.
>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false,
>>>> "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem
>>>> while trying to mount target]\". HTTP response code is 400."}
>>>> Please specify the storage you would like to use
>>>> (glusterfs, iscsi, fc, nfs)[nfs]:
>>>>
>>>> Attached are vdsm and engine log files.
>>>>
>>>>
>>>> On Wed, Jul 11, 2018 at 9:57 AM, Sakhi Hadebe <sakhi@sanren.ac.za>
>>>> wrote:
>>>>
>>>>>
>>>>> On Wed, Jul 11, 2018 at 9:33 AM, Sakhi Hadebe <
>>>>> sakhi@sanren.ac.za> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Below are the versions of packages installed. Please find the
>>>>>> logs attached.
>>>>>> Qemu:
>>>>>> ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
>>>>>> libvirt-daemon-driver-qemu-3.9.0-14.el7_5.6.x86_64
>>>>>> qemu-img-ev-2.10.0-21.el7_5.4.1.x86_64
>>>>>> qemu-kvm-ev-2.10.0-21.el7_5.4.1.x86_64
>>>>>> qemu-kvm-common-ev-2.10.0-21.el7_5.4.1.x86_64
>>>>>>
>>>>>> Libvirt installed packages:
>>>>>> libvirt-daemon-driver-storage-disk-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-config-nwfilter-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-storage-iscsi-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-network-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-libs-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-secret-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-storage-core-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-storage-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-qemu-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-python-3.9.0-1.el7.x86_64
>>>>>> libvirt-daemon-driver-nodedev-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-storage-scsi-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-config-network-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-client-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-kvm-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-storage-logical-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-interface-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-lock-sanlock-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-storage-mpath-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-lxc-3.9.0-14.el7_5.6.x86_64
>>>>>> libvirt-daemon-driver-nwfilter-3.9.0-14.el7_5.6.x86_64
>>>>>>
>>>>>> Virt-manager:
>>>>>> virt-manager-common-1.4.3-3.el7.noarch
>>>>>>
>>>>>> oVirt:
>>>>>> [root@localhost network-scripts]# rpm -qa | grep ovirt
>>>>>> ovirt-setup-lib-1.1.4-1.el7.centos.noarch
>>>>>> cockpit-ovirt-dashboard-0.11.28-1.el7.noarch
>>>>>> ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
>>>>>> ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
>>>>>> ovirt-host-dependencies-4.2.3-1.el7.x86_64
>>>>>> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
>>>>>> ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
>>>>>> ovirt-host-4.2.3-1.el7.x86_64
>>>>>> python-ovirt-engine-sdk4-4.2.7-2.el7.x86_64
>>>>>> ovirt-host-deploy-1.7.4-1.el7.noarch
>>>>>> cockpit-machines-ovirt-169-1.el7.noarch
>>>>>> ovirt-hosted-engine-ha-2.2.14-1.el7.noarch
>>>>>> ovirt-vmconsole-1.0.5-4.el7.centos.noarch
>>>>>> ovirt-provider-ovn-driver-1.2.11-1.el7.noarch
>>>>>> ovirt-engine-appliance-4.2-20180626.1.el7.noarch
>>>>>> ovirt-release42-4.2.4-1.el7.noarch
>>>>>> ovirt-hosted-engine-setup-2.2.22.1-1.el7.noarch
>>>>>>
>>>>>>
>>>>>> On Wed, Jul 11, 2018 at 6:48 AM, Yedidyah Bar David <
>>>>>> didi@redhat.com> wrote:
>>>>>>
>>>>>>> On Tue, Jul 10, 2018 at 11:32 PM, Sakhi Hadebe <
>>>>>>> sakhi@sanren.ac.za> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I did not select any CPU architecture. It doesn't give me the
>>>>>>>> option to select one. It only states the number of virtual CPUs and the
>>>>>>>> memory for the engine VM.
>>>>>>>>
>>>>>>>> Looking at the documentation of installing
>>>>>>>> ovirt-release36.rpm... it does allow you to select the CPU, but not when
>>>>>>>> installing ovirt-release42.rpm.
>>>>>>>>
>>>>>>>> On Tuesday, July 10, 2018, Alastair Neil <
>>>>>>>> ajneil.tech@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> What did you select as your CPU architecture when you
>>>>>>>>> created the cluster? It looks like the VM is trying to use a CPU type of
>>>>>>>>> "Custom". How many nodes in your cluster? I suggest you specify the lowest
>>>>>>>>> common denominator of CPU architecture (e.g. Sandybridge) of the nodes as
>>>>>>>>> the CPU architecture of the cluster.
>>>>>>>>>
>>>>>>>>> On Tue, 10 Jul 2018 at 12:01, Sakhi Hadebe <
>>>>>>>>> sakhi@sanren.ac.za> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> I have just re-installed CentOS 7 on 3 servers and have
>>>>>>>>>> configured gluster volumes following this documentation:
>>>>>>>>>> https://www.ovirt.org/blog/2016/03/up-and-running-with-ovirt-3-6/,
>>>>>>>>>> but I have installed the
>>>>>>>>>>
>>>>>>>>>> http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
>>>>>>>>>>
>>>>>>>>>> package.
>>>>>>>>>> Hosted-engine --deploy is failing with this error:
>>>>>>>>>>
>>>>>>>>>> "rhel7", "--virt-type", "kvm", "--memory", "16384",
>>>>>>>>>> "--vcpus", "4", "--network",
>>>>>>>>>> "network=default,mac=00:16:3e:09:5e:5d,model=virtio", "--disk",
>>>>>>>>>> "/var/tmp/localvm0nnJH9/images/eacac30d-0304-4c77-8753-6965e4b8c2e7/d494577e-027a-4209-895b-6132e6fc6b9a",
>>>>>>>>>> "--import", "--disk", "path=/var/tmp/localvm0nnJH9/seed.iso,device=cdrom",
>>>>>>>>>> "--noautoconsole", "--rng", "/dev/random", "--graphics", "vnc", "--video",
>>>>>>>>>> "vga", "--sound", "none", "--controller", "usb,model=none", "--memballoon",
>>>>>>>>>> "none", "--boot", "hd,menu=off", "--clock", "kvmclock_present=yes"],
>>>>>>>>>> "delta": "0:00:00.979003", "end": "2018-07-10 17:55:11.308555", "msg":
>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2018-07-10 17:55:10.329552",
>>>>>>>>>> "stderr": "ERROR unsupported configuration: CPU mode 'custom' for x86_64
>>>>>>>>>> kvm domain on x86_64 host is not supported by hypervisor\nDomain
>>>>>>>>>> installation does not appear to have been successful.\nIf it was, you can
>>>>>>>>>> restart your domain by running:\n virsh --connect qemu:///system start
>>>>>>>>>> HostedEngineLocal\notherwise, please restart your installation.",
>>>>>>>>>> "stderr_lines": ["ERROR unsupported configuration: CPU mode 'custom' for
>>>>>>>>>> x86_64 kvm domain on x86_64 host is not supported by hypervisor", "Domain
>>>>>>>>>> installation does not appear to have been successful.", "If it was, you can
>>>>>>>>>> restart your domain by running:", " virsh --connect qemu:///system start
>>>>>>>>>> HostedEngineLocal", "otherwise, please restart your installation."],
>>>>>>>>>> "stdout": "\nStarting install...", "stdout_lines": ["", "Starting
>>>>>>>>>> install..."]}
>>>>>>>>>
>>>>>>> This seems to be in the phase where we create a local vm for
>>>>>>> the engine. We do this with plain virt-install, nothing fancy. Searching
>>>>>>> the net for "unsupported configuration: CPU mode 'custom'" finds other
>>>>>>> relevant reports; you might want to check them. You can see the command in
>>>>>>> bootstrap_local_vm.yml.
>>>>>>>
>>>>>>> Please check/share versions of relevant packages (libvirt*,
>>>>>>> qemu*, etc.) and relevant logs (libvirt).
>>>>>>>
>>>>>>> Also updating the subject line and adding Simone.
>>>>>>>
>>>>>>> Best regards,
>>>>>>> --
>>>>>>> Didi
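For completeness, the checks suggested in the quoted messages above boil down to the following (a sketch only; the volume name "engine" and the address 172.16.4.18 are taken from the quotes and may differ on another setup):

# Confirm glusterd is running and the engine volume is healthy:
systemctl status glusterd
gluster volume status engine

# Try the same mount that hosted-engine --deploy attempts:
mkdir -p /mnt/test
mount -t glusterfs 172.16.4.18:/engine /mnt/test
ls /mnt/test
umount /mnt/test

# If the mount fails, the client log under /var/log/glusterfs/
# (named after the mount point) usually has the reason.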
--
Regards,
Sakhi Hadebe

Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR

Tel: +27 12 841 2308
Fax: +27 12 841 4223
Cell: +27 71 331 9622
Email: sakhi@sanren.ac.za
participants (5)
- Sahina Bose
- Sakhi Hadebe
- Sandro Bonazzola
- Simone Tiraboschi
- Yedidyah Bar David