"Cannot acquire bridge address" / Failure in "hosted-engine --deploy" on 4.1.8
by Frank Wall
Hi,
when running "hosted-engine --deploy" on a fresh CentOS 7.4 with pre-built
network configuration (and ovirtmgmt bridge), the following error occurs:
2017-11-27 19:08:12 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/network/bridge.py", line 295, in _get_hostname_from_bridge_if
raise RuntimeError(_('Cannot acquire bridge address'))
RuntimeError: Cannot acquire bridge address
Checking the source of bridge.py around line 295 makes the root cause
interesting: running "vdsClient -s $HOSTNAME getVdsCapabilities" works
perfectly and returns all NICs and bridges, but "networks" comes back as
an empty list: {}.
This clearly looks like a bug. How can I resolve it?
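For reference, this is a minimal way to reproduce the failing check from
_get_hostname_from_bridge_if (a sketch; the grep is only there to narrow
the output down to the empty "networks" map):
vdsClient -s "$HOSTNAME" getVdsCapabilities | grep -A 2 networks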
ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch
vdsm-4.19.40-1.el7.centos.x86_64
Regards
- Frank
Time management in self hosted engine vm
by Gianluca Cecchi
Hello,
I have a self-hosted-engine HCI environment that was born on 4.0.5 through
the ansible/gdeploy mechanism and then gradually updated to 4.1.7.
The engine and hosts are now at the CentOS 7.4 level. Inside the gdeploy
jobs, the engine VM was installed through the appliance available at the
4.0.5 time.
I see in the web admin GUI that the hardware clock of the engine VM is
configured as UTC (so far so good), and the same is set inside the OS. I
would like to change the OS timezone to UTC+1, like the 3 hosts.
I also see that neither chronyd nor ntpd is installed on the engine VM.
Can I fix the situation the normal OS-level way:
- timedatectl set-timezone "Europe/Rome"
- install and configure chronyd (roughly as sketched below)
Or is it better not to configure time sync in the engine VM at all?
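For the chrony part, this is what I have in mind (a sketch; package and
service names as on CentOS 7):
timedatectl set-timezone "Europe/Rome"
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc tracking    # verify the clock actually syncs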
Thanks,
Gianluca
HA on Non Responsive
by Arthur Melo
Hello, there!
Is it possible to reduce the time it takes to automatically migrate
machines when a node becomes non-responsive?
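From what I can find, the engine-side timeouts live in engine-config;
something like this is what I am guessing at (the option name is my
assumption - engine-config --list shows the full set):
engine-config -g vdsTimeout       # current engine-to-host timeout (seconds)
engine-config -s vdsTimeout=60    # assumed knob - verify before changing
systemctl restart ovirt-engine    # engine-config changes need a restart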
Regards,
Arthur Melo
Linux User #302250
Hosts being evacuated unnecessarily
by FERNANDO FREDIANI
Hello folks.
Our oVirt (4.1.7.3-1.el7.centos), which runs in one datacenter and
controls nodes both locally and remotely, lost communication with the
remote nodes in another datacenter.
Up to this point nothing is wrong, as the nodes can continue working as
expected and keep running their virtual machines without depending on
the oVirt Engine.
What happened at some point is that when the communication between the
Engine and the hosts came back, hosts in the remote datacenter got
confused and initiated a live migration of ALL VMs from one of the hosts
to another. I also had to restart the vdsmd agent on all hosts to bring
my environment back to sanity.
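For the record, the per-host recovery was essentially this (the stats
query is just one way to confirm the host answers again):
systemctl restart vdsmd
vdsClient -s 0 getVdsStats | head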
What makes this scenario even stranger is that one of the hosts that
needed the VDSM restart doesn't even belong to the same cluster as the
others.
I understand the hosts can survive without the Engine online, with
reduced capabilities but still able to communicate among themselves,
without affecting the VMs or needing anything like what happened in this
scenario.
Am I wrong in any of these assumptions?
Fernando
OVS DPDK Performance
by FERNANDO FREDIANI
Hello
As some may have seen, OVS DPDK has recently been introduced to oVirt
(https://ovirt.org/blog/2017/09/ovs-dpdk/). This is a very interesting
feature which can make a huge difference in network performance.
I just wanted to ask if anyone has tested it in any environment and made
any comparisons, especially for packet forwarding (e.g. running virtual
routers or virtual firewalls with virtio) or packet dropping.
One doubt I would like clarified: if I enable DPDK on the host, will any
traffic forwarded to the VMs automatically benefit from the performance
gain of DPDK, or are there additional steps that need to be taken inside
the VM when sharing a physical network interface?
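For context, my understanding from the blog post is that the host side
is enabled roughly like this (a sketch based on the upstream OVS-DPDK
docs; the memory value is just a placeholder):
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024"
systemctl restart openvswitch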
Thanks
Fernando
Upgrading from Hosted Engine 3.6 to 4.X
by Cam Wright
Hi there,
We're looking to upgrade our hosted engine setup from 3.6 to 4.0 (or 4.1)...
We built the 3.6 setup a couple of years ago with Fedora 22 (we wanted
the newer-at-the-time kernel 4.4) on the hosts and engine, but when we
move to 4.X we'd like to move to EL7 on the engine (as that seems to
be the supported version) and to the oVirt Node ISO installer on the
hypervisors.
We've got only four hosts in our oVirt datacentre, configured in two clusters.
Our current idea is to take a backup of the oVirt database using the
backup-restore tool, and to take a 'dd' image of the virtual disk too,
for good measure; then upgrade the engine to 4.X and confirm that the
3.6 hosts still run; and finally upgrade the hosts piecemeal to 4.X
using the oVirt Node ISO installer.
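Concretely, for the backup step we were planning something like this (a
sketch; the file names are placeholders):
engine-backup --mode=backup --scope=all \
  --file=engine-36-backup.tar --log=engine-backup.log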
Looking at this page -
https://www.ovirt.org/documentation/self-hosted/chap-Maintenance_and_Upgr...
- it seems the 'hosted-engine --upgrade-appliance' path is the best
way to do this... but because our hosts are running Fedora instead of
EL, I think that makes this option moot for us.
Is what I've suggested a valid upgrade path, or is there a more sane
way of going about this?
-C
Cam Wright - Systems and Technical Resource Administrator
CUTTINGEDGE /
90 Victoria St, West End, Brisbane, QLD, 4101
T + 61 7 3013 6200 M 0420 827 007
E cwright(a)cuttingedge.com.au | W www.cuttingedge.com.au
/SYD /BNE /TYO
Re: [ovirt-users] VDSM multipath.conf - prevent automatic management of local devices
by Yaniv Kaul
On Nov 28, 2017 12:24 AM, "Ben Bradley" <listsbb(a)virtx.net> wrote:
On 23/11/17 06:46, Maton, Brett wrote:
> Might not be quite what you're after but adding
>
> # RHEV PRIVATE
>
> To /etc/multipath.conf will stop vdsm from changing the file.
>
Hi there. Thanks for the reply.
Yes I am aware of that and it seems that's what I will have to do.
I have no problem with VDSM managing the file; I just wish it didn't
automatically load local storage devices into multipathd.
It doesn't. Recent multipath releases do.
Does it bother you in any way?
Y.
I'm still not clear on the purpose of this automatic management though.
From what I can tell, this automatic management makes no difference to
hosts/clusters - i.e. you still have to add storage domains manually in
oVirt.
Could anyone give any info on the purpose of this auto-management of local
storage devices into multipathd in VDSM?
Then I will be able to make an informed decision as to the benefit of
letting it continue.
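In the meantime, what I will probably end up putting in
/etc/multipath.conf is roughly this (a sketch; the WWID is a placeholder
for my local disk):
# RHEV PRIVATE
blacklist {
    wwid "<wwid-of-local-disk>"
}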
Thanks, Ben
> On 22 November 2017 at 22:42, Ben Bradley <listsbb(a)virtx.net <mailto:
> listsbb(a)virtx.net>> wrote:
>
> Hi All
>
> I have been running ovirt in a lab environment on CentOS 7 for
> several months but have only just got around to really testing things.
> I understand that VDSM manages multipath.conf and I understand that
> I can make changes to that file and set it to private to prevent
> VDSM making further changes.
>
> I don't mind VDSM managing the file but is it possible to set to
> prevent local devices being automatically added to multipathd?
>
> Many times I have had to flush local devices from multipath when
> they are added/removed or re-partitioned or the system is rebooted.
> It doesn't even look like oVirt does anything with these devices
> once they are setup in multipathd.
>
> I'm assuming it's the VDSM additions to multipath that are causing
> this. Can anyone else confirm this?
>
> Is there a way to prevent new or local devices being added
> automatically?
>
> Regards
> Ben
oVirt self-hosted Engine [Failed to acquire lock: error -243]
by Terry hey
Hello all,
I installed an oVirt self-hosted engine. Unfortunately, the engine VM
suddenly shut down, and later it automatically powered back on. The
engine admin console showed the following error:
VM HostedEngine is down with error. Exit message: resource busy: Failed to
acquire lock: error -243.
I would like to know what this error message is about and how to
resolve it.
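From what I could find so far, -243 seems to come from sanlock (a lease
that could not be acquired). These are the commands I used to look at the
lock state (a sketch of my debugging so far, not a fix):
sanlock client status       # leases currently held on this host
hosted-engine --vm-status   # the HA agent's view of the engine VM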
Thanks in advance to everyone who helps me solve this issue.
Regards,
Terry
Auto-starting VMs in an all-in-one / confusion.
by CRiMSON
I've been digging through mailing lists and blogs and I'm a bit confused
about how you have VMs auto-start after a reboot in an oVirt system that
is set up as all-in-one.
From what I can gather, this can be achieved via some startup scripts (or
an rc.local foo).
But is there no setting inside the WebUI that can achieve this? Or have I
missed something?
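If scripts are the way, I can imagine something like this from rc.local
(a sketch; the credentials and the VM id are placeholders):
curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<action/>' \
  'https://localhost/ovirt-engine/api/vms/VM_ID/start'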
Error during delete snapshot
by Arthur Melo
Can someone help me with this error?
Failed to delete snapshot '<UNKNOWN>' for VM 'proxy03'.
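So far the only lead I have is the engine log (a sketch of where I am
looking; 'proxy03' is the VM name from the error above):
grep -i snapshot /var/log/ovirt-engine/engine.log | grep proxy03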
Regards,
Arthur Melo
Linux User #302250