[ovirt-users] Testing oVirt 4.2

wodel youchi wodel.youchi at gmail.com
Tue Mar 27 22:15:54 UTC 2018


Hi and thanks for your replies,

I cleaned up everything and started from scratch.
I am using nested KVM for my tests, with host-passthrough to expose vmx to the
hypervisor VMs; my physical CPU is a Core i5 6500.
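
For anyone reproducing this setup: nested virtualization can be checked and,
if needed, enabled on the physical host via the standard kvm_intel module
parameter, and host-passthrough is the usual <cpu> mode in the hypervisor
VM's libvirt XML (nothing here is oVirt-specific):

  $ cat /sys/module/kvm_intel/parameters/nested
  Y
  $ # if it prints N, reload the module with nesting enabled:
  $ sudo modprobe -r kvm_intel && sudo modprobe kvm_intel nested=1

and in the domain XML of each hypervisor VM:

  <cpu mode='host-passthrough'/>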

This time I had another problem: the engine VM won't start because of this
error in vdsm.log:
2018-03-27 22:48:31,893+0100 ERROR (vm/c9c0640e) [virt.vm] (vmId='c9c0640e-d8f1-4ade-95f3-40f2982b1d8c') The vm start process failed (vm:927)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: internal error: Unknown CPU model SkylakeClient

The CPU model SkylakeClient presented to the engine VM is not recognized.
Is there a way to bypass this?
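
In case it helps narrow this down, the CPU models known to libvirt on the
nested hypervisor can be listed with the standard virsh subcommands (the grep
pattern is only illustrative):

  $ virsh cpu-models x86_64 | grep -i skylake
  $ virsh domcapabilities | grep -i skylake

If no Skylake model shows up in either, I assume the libvirt shipped on the
host simply predates that model, and the engine is requesting a CPU the
nested hypervisor cannot provide.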

Regards.

2018-03-24 16:25 GMT+01:00 Andrei Verovski <andreil1 at starlett.lv>:

> On 03/24/2018 01:40 PM, Andy Michielsen wrote:
>
> Hello,
>
> I have also done an installation on my host running KVM, and I'm pretty
> sure my VMs can only use the 192.168.122.0/24 range if they are installed
> with NAT networking. That might explain why you see that address appear
> in your log, and also why the engine system can't be reached.
>
>
> Can't tell for sure about other installations, yet IMHO the problem is with
> the networking schema.
>
> One needs to set up a bridge on a real Ethernet interface and add it to the
> KVM VM definition (a guest-side XML sketch follows the configs below).
>
> For example, my SuSE box has 2 Ethernet cards: 192.168.0.aa for the SMB file
> server, and another one bridged, with IP 192.168.0.bb defined within the KVM
> guest (CentOS 7.4 with the oVirt hosted engine). See configs below.
>
> Another SuSE box has 10 Ethernet interfaces: one for its own needs,
> and 4 + 3 for VyOS routers running as KVM guests.
>
> ******************************
>
> SU47:/etc/sysconfig/network # tail -n 100 ifcfg-br0
> BOOTPROTO='static'
> BRIDGE='yes'
> BRIDGE_FORWARDDELAY='0'
> BRIDGE_PORTS='eth0'
> BRIDGE_STP='off'
> BROADCAST=''
> DHCLIENT_SET_DEFAULT_ROUTE='no'
> ETHTOOL_OPTIONS=''
> IPADDR=''
> MTU=''
> NETWORK=''
> PREFIXLEN='24'
> REMOTE_IPADDR=''
> STARTMODE='auto'
> NAME=''
>
> SU47:/etc/sysconfig/network # tail -n 100 ifcfg-eth0
> BOOTPROTO='none'
> BROADCAST=''
> DHCLIENT_SET_DEFAULT_ROUTE='no'
> ETHTOOL_OPTIONS=''
> IPADDR=''
> MTU=''
> NAME='82579LM Gigabit Network Connection'
> NETMASK=''
> NETWORK=''
> REMOTE_IPADDR=''
> STARTMODE='auto'
> PREFIXLEN=''
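>
> For completeness, the guest side of that bridge in the KVM domain XML looks
> roughly like this (a minimal sketch; the bridge name and NIC model must
> match your own setup):
>
>   <interface type='bridge'>
>     <source bridge='br0'/>
>     <model type='virtio'/>
>   </interface>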
>
>
>
>
> Kind regards.
>
> On 24 Mar 2018, at 12:13, wodel youchi <wodel.youchi at gmail.com> wrote:
>
> Hi,
>
> I am testing oVirt 4.2, using nested KVM for that.
> I am using two hypervisors (up-to-date CentOS 7) and the hosted-engine
> deployment using the oVirt appliance.
> For storage I am using iSCSI and NFSv4.
>
> Versions I am using:
> ovirt-engine-appliance-4.2-20180214.1.el7.centos.noarch
> ovirt-hosted-engine-setup-2.2.9-1.el7.centos.noarch
> kernel-3.10.0-693.21.1.el7.x86_64
>
> I have a problem deploying the hosted-engine VM. When configuring the
> deployment (hosted-engine --deploy), it asks for the engine's hostname and
> then the engine's IP address. I use a static IP; in my lab I used
> 192.168.1.104 as the IP for the engine VM, and I chose to add its hostname
> entry to the hypervisor's /etc/hosts.
>
> But the deployment gets stuck every time at the same place: TASK [Wait
> for the host to become non operational]
>
> After some time it gives up and the deployment fails.
>
> I don't know the reason yet, but I have seen this behavior in /etc/hosts
> on the hypervisor.
>
> At the beginning of the deployment the entry 192.168.2.104
> engine01.example.local is added; some time later it is deleted, and a new
> entry is added with the IP 192.168.122.65 engine01.wodel.wd, which
> has nothing to do with the network I am using.
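>
> For what it's worth, 192.168.122.0/24 is the subnet of libvirt's default
> NAT network, which can be confirmed on the hypervisor with the standard
> command:
>
>   virsh net-dumpxml default
>
> My assumption is that the deployment first bootstraps the engine VM on that
> local NAT network and only later moves it to the management bridge, which
> would explain the temporary /etc/hosts rewrite.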
>
> Here is the error I am seeing in the deployment log:
>
> 2018-03-24 11:51:31,398+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the host to become non operational]
> 2018-03-24 12:02:07,284+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'_ansible_parsed': True, u'_ansible_no_log': False, u'changed': False, u'attempts': 150, u'invocation': {u'module_args': {u'pattern': u'name=hyperv01.wodel.wd', u'fetch_nested': False, u'nested_attributes': []}}, u'ansible_facts': {u'ovirt_hosts': []}}
> 2018-03-24 12:02:07,385+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, "attempts": 150, "changed": false}
> 2018-03-24 12:02:07,587+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [engine01.wodel.wd] : ok: 15 changed: 8 unreachable: 0 skipped: 4 failed: 0
> 2018-03-24 12:02:07,688+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 41 changed: 14 unreachable: 0 skipped: 3 failed: 1
> 2018-03-24 12:02:07,789+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 2
> 2018-03-24 12:02:07,790+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout:
> 2018-03-24 12:02:07,791+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189  to retry, use: --limit @/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry
>
> 2018-03-24 12:02:07,791+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr:
> 2018-03-24 12:02:07,792+0100 DEBUG otopi.context context._executeMethod:143 method exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod
>     method['method']()
>   File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 186, in _closeup
>     r = ah.run()
>   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 194, in run
>     raise RuntimeError(_('Failed executing ansible-playbook'))
> RuntimeError: Failed executing ansible-playbook
> 2018-03-24 12:02:07,795+0100 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed executing ansible-playbook
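>
> For anyone trying to reproduce this, the logs I can collect on the
> hypervisor are the standard ones (paths as shipped by the packages listed
> above):
>
>   tail -f /var/log/vdsm/vdsm.log
>   journalctl -u vdsmd -u libvirtd
>   ls /var/log/ovirt-hosted-engine-setup/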
>
>
> Any ideas?
>
>
> Regards
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>