Property: default route, host - true, DC - false
I have four NICs:
bond0 = 2 x 10G
eno1 = ovirtmgmt
eno2 = VM traffic
eno2 is reported as out of sync: the host's network configuration differs from the DC's, with
default route: host - true, DC - false.
I have tried "Sync All Networks", but the message remains.
Where can I look to fix this issue?
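For context, "out of sync" means the engine compared the per-network attributes the host reports against the data-center definition of that network and found a mismatch (here the default-route role). A minimal sketch of that kind of comparison, as a hypothetical pure function (not the engine's actual code; attribute names are illustrative):

```python
# Hypothetical sketch of an out-of-sync check: compare the attributes a
# host reports for a logical network against the data-center definition
# and list the ones that differ.
def out_of_sync_attrs(host_attrs, dc_attrs):
    """Return the attribute names whose values differ between host and DC."""
    return sorted(
        key
        for key in set(host_attrs) | set(dc_attrs)
        if host_attrs.get(key) != dc_attrs.get(key)
    )

# Values taken from the report above: only the default-route role differs.
host = {"default_route": True, "mtu": 1500, "vlan": None}
dc = {"default_route": False, "mtu": 1500, "vlan": None}
print(out_of_sync_attrs(host, dc))  # ['default_route']
```

The practical fix is usually on the definition side: make the host's and the DC network's default-route role agree (e.g. via the network's role settings in the cluster), after which the sync succeeds.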
I am currently building a new virtualization cluster with oVirt, using
AMD EPYC processors (AMD EPYC 7351P). At the moment I'm running oVirt
Node version 4.2.3 on CentOS 7.4.1708.
The processor type is recognized as "AMD Opteron G3". With this
instruction set the VMs are unable to do AES in hardware, which results
in poor performance in our case.
I found some information suggesting that this problem should be solved
with CentOS 7.5.
My actual questions:
- Is there any further information about AMD EPYC support?
- Is there any information about an update of oVirt Node to CentOS 7.5?
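As a quick check from inside a guest, whether AES-NI is exposed can be read off the CPU flags line. A small sketch that parses a /proc/cpuinfo-style flags string (the sample flag strings below are illustrative, not full cpuinfo output):

```python
# Check whether a /proc/cpuinfo-style "flags" line exposes the 'aes'
# CPU flag. With a cluster CPU type of "AMD Opteron G3" the flag is
# absent, so guest crypto falls back to software.
def has_aes(flags_line):
    """True if the 'aes' flag appears as a whole word in the flags line."""
    return "aes" in flags_line.split()

epyc_flags = "fpu vme de pse sse2 ssse3 aes avx avx2"   # sample flags
opteron_g3_flags = "fpu vme de pse sse2 ssse3 sse4a"    # sample flags
print(has_aes(epyc_flags), has_aes(opteron_g3_flags))   # True False
```

In a real guest you would feed it the flags line from `/proc/cpuinfo`; splitting on whitespace avoids false matches on longer flag names.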
I have oVirt engine 4.2 and node version 4.2.
After adding the node in the oVirt engine, when I try to install it I
get the following error:
(EE-ManagedThreadFactory-engine-Thread-19)  Host installation
failed for host 'bd8d007a-be92-4075-bba9-6cbeb890a1e5', 'node_2': Command
returned failure code 1 during SSH session 'root(a)192.168.20.20'
2018-02-27 14:25:37,416+05 INFO
(EE-ManagedThreadFactory-engine-Thread-19)  START,
SetVdsStatusVDSCommand(HostName = node_2,
stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 2b138e87
2018-02-27 14:25:37,423+05 INFO
(EE-ManagedThreadFactory-engine-Thread-19)  FINISH,
SetVdsStatusVDSCommand, log id: 2b138e87
2018-02-27 14:25:37,429+05 ERROR
(EE-ManagedThreadFactory-engine-Thread-19)  EVENT_ID:
VDS_INSTALL_FAILED(505), Host node_2 installation failed. Command returned
failure code 1 during SSH session 'root(a)192.168.20.20'.
2018-02-27 14:25:37,433+05 INFO
(EE-ManagedThreadFactory-engine-Thread-19)  Lock freed to object
I have attached the log file for your reference.
Please help me out.
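When digging through engine.log for failures like this, it helps to filter ERROR entries together with their event IDs. A small sketch that does this over log text shaped like the excerpt above (the format is assumed from that excerpt):

```python
import re

# Pull ERROR entries and their EVENT_IDs out of engine.log-style text,
# matching the multi-line format shown in the excerpt above.
ERROR_RE = re.compile(r"ERROR\b.*?EVENT_ID:\s*(\w+\((\d+)\))", re.S)

def find_error_events(log_text):
    """Return (event_name(id), numeric_id) pairs for ERROR entries."""
    return [(m.group(1), int(m.group(2))) for m in ERROR_RE.finditer(log_text)]

sample = (
    "2018-02-27 14:25:37,429+05 ERROR\n"
    "(EE-ManagedThreadFactory-engine-Thread-19)  EVENT_ID:\n"
    "VDS_INSTALL_FAILED(505), Host node_2 installation failed."
)
print(find_error_events(sample))  # [('VDS_INSTALL_FAILED(505)', 505)]
```

The VDS_INSTALL_FAILED event only says the SSH session returned failure code 1; the actual cause is in the host-deploy log referenced near that entry.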
After upgrading to 4.2.1 I have problems with the OVN provider.
I'm getting "Failed to synchronize networks of Provider ovirt-provider-ovn."
I use a custom SSL certificate in Apache, and I guess this is the reason.
I've tried to update ovirt-provider-ovn.conf with
but still no go
Any tips on this?
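For reference, when Apache serves a custom certificate the provider and the engine each need to be pointed at the CA that actually signed it. A hedged sketch of the SSL-related settings in ovirt-provider-ovn.conf — section and key names as I recall them from 4.2-era ovirt-provider-ovn, and all paths are illustrative assumptions, so verify against the comments in your installed conf:

```ini
; Hedged sketch only -- confirm key names against your installed
; ovirt-provider-ovn.conf before using.
[SSL]
https-enabled=true
ssl-key-file=/etc/pki/ovirt-engine/keys/ovirt-provider-ovn.key.nopass
ssl-cert-file=/etc/pki/ovirt-engine/certs/ovirt-provider-ovn.cer
ssl-cacert-file=/etc/pki/ovirt-engine/ca.pem

[OVIRT]
; CA that signed the certificate Apache presents (the custom CA here):
ovirt-ca-file=/etc/pki/ovirt-engine/apache-ca.pem
```

The engine side also has to trust the provider's certificate; a mismatch on either side produces the synchronization failure quoted above.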
The only option you have is to configure Cinder to manage the Ceph pool,
or alternatively to deploy an iSCSI gateway; no other ways are available
at the moment.
So you can't use RBD directly.
On Sun, 27 May 2018, 16:54 Leo David <leoalex(a)gmail.com> wrote:
> Thank you Luca,
> At the moment I would try the Cinder storage provider, since we already
> have a Proxmox cluster directly connecting to Ceph. The problem is that I
> just could not find a straightforward way to do this,
> i.e. specifying the Ceph monitors and Ceph pool to connect to. Can oVirt
> directly connect to Ceph monitors? If so, how should the configuration
> be done?
> Thank you very much !
> On Sun, May 27, 2018, 17:20 Luca 'remix_tj' Lorenzetto <
> lorenzetto.luca(a)gmail.com> wrote:
>> Yes, using cinder or through iscsi gateway.
>> For a simpler setup i suggest the second option.
>> On Sun, 27 May 2018, 16:08 Leo David <leoalex(a)gmail.com> wrote:
>>> Hello everyone,
>>> I am new to oVirt and very impressed by its features. I would like to
>>> leverage our existing Ceph cluster to provide RBD images for VM disks;
>>> is this possible to achieve?
>>> Thank you very much !
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to users-leave(a)ovirt.org
...trying to update from 4.2.3 to 4.2.4, engine-setup fails with the
following:
2018-06-28 16:26:45,507+0200 DEBUG
['/usr/share/ovirt-engine-dwh/bin/dwh-vacuum.sh', '-f', '-v'] stderr:
vacuumdb: could not connect to database ovirt_engine_history: FATAL:
password authentication failed for user "ovirt_engine_history"
2018-06-28 16:26:45,507+0200 DEBUG otopi.context
context._executeMethod:143 method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in
line 126, in _vacuum
File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 931, in
RuntimeError: Command '/usr/share/ovirt-engine-dwh/bin/dwh-vacuum.sh'
failed to execute
2018-06-28 16:26:45,508+0200 ERROR otopi.context
context._executeMethod:152 Failed to execute stage 'Misc configuration':
Command '/usr/share/ovirt-engine-dwh/bin/dwh-vacuum.sh' failed to execute
2018-06-28 16:26:45,508+0200 DEBUG otopi.transaction transaction.abort:119
aborting 'Yum Transaction'
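engine-setup runs dwh-vacuum.sh with the DWH database credentials stored at setup time, so this failure usually means the stored password no longer matches what PostgreSQL expects. A hedged shell sketch that extracts the stored password so it can be tested by hand — the conf path is the usual one on an all-in-one engine host, but that is an assumption; check your own setup:

```shell
#!/bin/sh
# Hedged sketch: pull DWH_DB_PASSWORD out of the DWH setup conf so it
# can be compared with what PostgreSQL actually accepts.
# Path is the usual location on a co-located DWH host (an assumption):
CONF=${1:-/etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf}

# Lines in that file look like: DWH_DB_PASSWORD="secret"
PASSWORD=$(sed -n 's/^DWH_DB_PASSWORD="\(.*\)"$/\1/p' "$CONF")
echo "stored DWH password: $PASSWORD"

# Then verify the credential manually, e.g.:
#   PGPASSWORD="$PASSWORD" psql -U ovirt_engine_history -h localhost \
#       -d ovirt_engine_history -c 'select 1;'
```

If the manual psql check fails with the same FATAL error, resetting the ovirt_engine_history role's password in PostgreSQL to the stored value (or vice versa) should let engine-setup proceed.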