Deploying self-hosted engine

I'm having trouble deploying a self-hosted engine following this guide: http://www.ovirt.org/develop/developer-guide/engine/migrate-to-hosted-engine/

It gets close to deploying the VM, but fails with:

2016-09-13 19:46:04 ERROR otopi.plugins.gr_he_common.core.misc misc._terminate:180 Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy

For storage I chose a random NFS share on a remote host that was empty. I did notice that something kept removing 'nameserver' entries from resolv.conf.

The vdsm logs show issues connecting to the broker:

periodic/0::ERROR::2016-09-13 19:49:49,704::brokerlink::75::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(connect) Failed to connect to broker, the number of errors has exceeded the limit (1)
periodic/0::ERROR::2016-09-13 19:49:49,704::api::253::root::(_getHaInfo) failed to retrieve Hosted Engine HA info
periodic/1::WARNING::2016-09-13 19:49:49,748::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on [u'1ed592d3-2b69-4901-9ff6-fe21d1dff8c5']
periodic/2::WARNING::2016-09-13 19:49:51,749::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on [u'1ed592d3-2b69-4901-9ff6-fe21d1dff8c5']
periodic/3::WARNING::2016-09-13 19:49:53,749::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on [u'1ed592d3-2b69-4901-9ff6-fe21d1dff8c5']
periodic/0::WARNING::2016-09-13 19:49:55,750::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on [u'1ed592d3-2b69-4901-9ff6-fe21d1dff8c5']
periodic/1::WARNING::2016-09-13 19:49:57,750::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on [u'1ed592d3-2b69-4901-9ff6-fe21d1dff8c5']
periodic/2::WARNING::2016-09-13 19:49:59,751::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on [u'1ed592d3-2b69-4901-9ff6-fe21d1dff8c5']

What should I check next?

On Wed, Sep 14, 2016 at 8:19 AM, Maton, Brett <matonb@ltresources.co.uk> wrote:
> I'm having trouble deploying a self-hosted engine following this guide http://www.ovirt.org/develop/developer-guide/engine/migrate-to-hosted-engine/
> It gets close to deploying the VM, but fails with
> 2016-09-13 19:46:04 ERROR otopi.plugins.gr_he_common.core.misc misc._terminate:180 Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy
Can you please attach the whole log so we can check where it's failing?
> For storage I chose a random NFS share on a remote host that was empty. I did notice that something kept removing 'nameserver' entries from resolv.conf.
> The vdsm logs show issues connecting to the broker,
ovirt-hosted-engine-setup will enable and start the broker only if the deployment went fine, so those errors at deploy time are harmless since the broker really is down.
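If you want to confirm that on the host, the HA broker and agent are ordinary systemd units (assuming the standard unit names shipped by ovirt-hosted-engine-ha):

# Check whether the hosted-engine HA services are enabled/running yet
systemctl status ovirt-ha-broker ovirt-ha-agent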

Log attached as requested.

On Wed, Sep 14, 2016 at 10:46 AM, Maton, Brett <matonb@ltresources.co.uk> wrote:
> Log attached as requested.
Sorry, I was asking for hosted-engine-setup logs; you can find them under /var/log/ovirt-hosted-engine-setup/
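A quick way to pull the newest deployment log and its errors out of that directory (a minimal sketch, assuming a default installation; each run gets its own timestamped file):

# Show the most recent hosted-engine-setup log
ls -t /var/log/ovirt-hosted-engine-setup/*.log | head -n 1

# Grep it for errors and tracebacks
grep -E 'ERROR|Traceback' "$(ls -t /var/log/ovirt-hosted-engine-setup/*.log | head -n 1)"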

Oops :) hosted setup log attached.

On Wed, Sep 14, 2016 at 11:05 AM, Maton, Brett <matonb@ltresources.co.uk> wrote:
> Oops :)
> hosted setup log attached.
Your issue is probably here:

2016-09-13 19:41:52 DEBUG otopi.ovirt_host_deploy.hardware hardware.detect:210 Cannot detect virualization
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_host_deploy/hardware.py", line 199, in detect
    if self._isVirtualizationEnabled():
  File "/usr/lib/python2.7/site-packages/ovirt_host_deploy/hardware.py", line 175, in _isVirtualizationEnabled
    bios_ok = self._vmx_enabled_by_bios()
  File "/usr/lib/python2.7/site-packages/ovirt_host_deploy/hardware.py", line 134, in _vmx_enabled_by_bios
    self._prdmsr(0, MSR_IA32_FEATURE_CONTROL) & (
  File "/usr/lib/python2.7/site-packages/ovirt_host_deploy/hardware.py", line 121, in _prdmsr
    ret = struct.unpack('L', f.read(8))[0]
IOError: [Errno 5] Input/output error
2016-09-13 19:41:52 WARNING otopi.plugins.gr_he_setup.vdsmd.cpu cpu._setup:156 Cannot detect if hardware supports virtualization

Can you please check if VMX is enabled in the BIOS of your host? Otherwise vdsm/libvirt will refuse to start the VM.
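A few commands that can help confirm what the host actually exposes before re-running the deployment (a sketch; virt-host-validate comes with libvirt-client and rdmsr with msr-tools, so install those first if they are missing):

# Count CPU threads advertising hardware virtualization
grep -cE 'vmx|svm' /proc/cpuinfo

# Let libvirt validate the host for QEMU/KVM (checks /dev/kvm and friends)
virt-host-validate qemu

# Is the kvm_intel module actually loaded?
lsmod | grep kvm

# Read IA32_FEATURE_CONTROL (MSR 0x3a) directly, which is what the setup code above tries to do
modprobe msr
rdmsr 0x3a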

The vmx cpu flag is present, is that what you mean?

# grep flags /proc/cpuinfo | grep -c ' vmx '
48

Trying to avoid rebooting the server if possible...

On Wed, Sep 14, 2016 at 11:39 AM, Maton, Brett <matonb@ltresources.co.uk> wrote:
> The vmx cpu flag is present, is that what you mean?
Yes, let me double check VDSM logs.

On Wed, Sep 14, 2016 at 11:40 AM, Simone Tiraboschi <stirabos@redhat.com> wrote:
> Yes, let me double check VDSM logs.
The issue is indeed here:

Thread-89::ERROR::2016-09-13 19:45:58,188::vm::765::virt.vm::(_startUnderlyingVm) vmId=`1ed592d3-2b69-4901-9ff6-fe21d1dff8c5`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 706, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1996, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 916, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: invalid argument: could not find capabilities for arch=x86_64 domaintype=kvm
Thread-89::INFO::2016-09-13 19:45:58,192::vm::1308::virt.vm::(setDownStatus) vmId=`1ed592d3-2b69-4901-9ff6-fe21d1dff8c5`::Changed state to Down: invalid argument: could not find capabilities for arch=x86_64 domaintype=kvm (code=1)

And VDSM returns just
'cpuFlags': 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,syscall,nx,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_IvyBridge,model_Westmere,model_SandyBridge'
without vmx!
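For reference, you can query the same capabilities yourself (a sketch; vdsClient is the classic client on 3.6/4.0-era hosts, newer releases ship vdsm-client instead):

# What vdsm reports to the engine
vdsClient -s 0 getVdsCapabilities | grep -i flags

# Equivalent on hosts with the json-rpc client
vdsm-client Host getCapabilities | grep -i cpuFlags

# What libvirt itself detects
virsh -r capabilities | grep -iE 'kvm|vmx'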

Ah ok, the VM I'm running hosted-engine --deploy on indeed doesn't have the vmx flag (the previous result was from the physical host running the VM).

I've had a quick look through the options, but can't see where I could enable vmx or change the settings for the VM itself.

On Wed, Sep 14, 2016 at 11:56 AM, Maton, Brett <matonb@ltresources.co.uk> wrote:
> Ah ok, the VM I'm running hosted-engine --deploy on indeed doesn't have the vmx flag
> (the previous result was from the physical host running the VM).
> I've had a quick look through the options, but can't see where I could enable vmx or change the settings for the VM itself.
Sorry, but I don't understand the scenario. hosted-engine-setup has to be installed and executed on the host you are going to use to run the VM which will contain the engine; why are you running it on a VM?

You can do that for test purposes, but it's not recommended for production usage. In that case you need nested virtualization support on your physical host, since you are going to create a VM for your hosted-engine host and then start a nested VM there.
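If you really do want to keep testing inside a VM, nested virtualization has to be enabled on the physical host and the vmx flag passed through to the guest. A rough sketch for an Intel host (the guest name he-host is just a placeholder; all VMs on the box must be powered off before reloading the module):

# On the physical host: check and enable nested virt
cat /sys/module/kvm_intel/parameters/nested
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm_intel.conf
modprobe -r kvm_intel && modprobe kvm_intel

# Pass the host CPU (including vmx) through to the guest
virsh edit he-host   # set <cpu mode='host-passthrough'/> in the domain XML
virsh start he-host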

Ok, I think I must have misread the 'Preparations' section of the guide.

So I really want to be running this on one of the physical hosts already in the cluster?

In which case, when I'm asked for storage, should I use the same storage area which is already set up for virtual machines? (I think last time I tried, the installer asked if it was for a new domain or not.)

Thanks for your patience

On Wed, Sep 14, 2016 at 12:29 PM, Maton, Brett <matonb@ltresources.co.uk> wrote:
> Ok, I think I must have misread the 'Preparations' section of the guide.
> So I really want to be running this on one of the physical hosts already in the cluster?
No, please use an additional host or remove it from the existing setup before creating the backup.
> In which case, when I'm asked for storage, should I use the same storage area which is already set up for virtual machines? (I think last time I tried, the installer asked if it was for a new domain or not.)
You need a dedicated storage domain just for the hosted-engine VM.
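If that dedicated domain is NFS, the export has to be empty and writable by vdsm:kvm (uid/gid 36); a minimal sketch of the server-side prep, with the path and client network purely as examples:

# On the NFS server
mkdir -p /exports/hosted-engine
chown 36:36 /exports/hosted-engine
chmod 0755 /exports/hosted-engine
echo "/exports/hosted-engine 192.168.1.0/24(rw,anonuid=36,anongid=36,all_squash)" >> /etc/exports
exportfs -ra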

I'm confused, this doesn't sound like migrating but like creating a new cluster using a self-hosted engine?

On Wed, Sep 14, 2016 at 12:47 PM, Maton, Brett <matonb@ltresources.co.uk> wrote:
> I'm confused, this doesn't sound like migrating but like creating a new cluster using a self-hosted engine?
No, it's just that hosted-engine setup will also try to add the host to the engine and it will fail if the host is already managed by the engine.

Ok, so I remove the host from the cluster, deploy the self-hosted engine, and then add it back?

On Wed, Sep 14, 2016 at 2:13 PM, Maton, Brett <matonb@ltresources.co.uk> wrote:
> Ok, so I remove the host from the cluster, deploy the self-hosted engine, and then add it back?
hosted-engine-setup will add it back for you, that's why you have to remove it before taking the backup.
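The handover itself relies on engine-backup, roughly as described in the migrate-to-hosted-engine guide; a sketch (file names are examples, and the exact restore options depend on your engine version):

# On the current engine machine, after removing the host from the cluster
systemctl stop ovirt-engine
engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log

# Later, inside the new hosted-engine VM, restore it before running engine-setup
engine-backup --mode=restore --file=engine-backup.tar.gz --log=engine-restore.log --provision-db --restore-permissions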

Nearly there, I think.

I've run into what is hopefully a small problem. When I run engine-setup on the new host, it fails with the following error:

[ ERROR ] dwhd is currently running. Its hostname is dwhd.domain.com. Please stop it before running Setup.
[ ERROR ] Failed to execute stage 'Transaction setup': dwhd is currently running
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20160914154202-jr8hnc.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20160914154219-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

dwhd isn't running anywhere, and it was stopped before I took the backup. Any suggestions?

On 14/09/2016 at 16:45, Maton, Brett wrote:
Old and known issue. Just restart, then stop, the DWH service. Then run engine-setup again and it will work.

There is already a BZ for this one. (Feel free to search for it...)

--
Nicolas ECARNOT
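On the machine where DWH lives, that workaround looks roughly like the following (a sketch, assuming the standard ovirt-engine-dwhd unit name):

# Cycle the DWH service to clear the stale "dwhd is currently running" state
systemctl restart ovirt-engine-dwhd
systemctl stop ovirt-engine-dwhd

# Then re-run setup
engine-setup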
participants (3)
- Maton, Brett
- Nicolas Ecarnot
- Simone Tiraboschi