Re: Damaged hard disk in volume replica gluster
by Strahil
As you are replacing an old brick, you have to recreate the old LV and mount it at the same location.
Then you can use gluster's "reset-brick" (I think the oVirt UI has that option too) and all data will be replicated there.
You also have the "replace-brick" option if you decide to change the mount location.
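Roughly, from the gluster CLI it would look something like this (the volume name and brick path are placeholders for your setup):
# gluster volume reset-brick <VOLNAME> <host>:/gluster_bricks/<brick> start
# gluster volume reset-brick <VOLNAME> <host>:/gluster_bricks/<brick> <host>:/gluster_bricks/<brick> commit force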
P.S.: With replica volumes your volume should still be working; if it has stopped, you have to investigate before proceeding.
Best Regards,
Strahil Nikolov
On Oct 12, 2019 13:12, matteo fedeli <matmilan97(a)gmail.com> wrote:
>
> Hi, I have a volume in my oVirt HCI setup that is not working properly because one of the three HDDs failed. I bought a new HDD and recreated the old LVM partitioning and mount point. Now, how can I attach this empty new brick?
Vm suddenly paused with error "vm has paused due to unknown storage error"
by Jasper Siero
Hi all,
Since we upgraded our oVirt nodes to CentOS 7, a VM (not a specific one, but never more than one at a time) will sometimes pause suddenly with the error "VM ... has paused due to unknown storage error". It has now happened twice in a month.
The oVirt node uses SAN storage for the VMs running on it. When a specific VM pauses with an error, the other VMs keep running without problems.
The VM runs without problems after unpausing it.
Versions:
CentOS Linux release 7.1.1503
vdsm-4.14.17-0
libvirt-daemon-1.2.8-16
vdsm.log:
VM Channels Listener::DEBUG::2015-10-25 07:43:54,382::vmChannels::95::vds::(_handle_timeouts) Timeout on fileno 78.
libvirtEventLoop::INFO::2015-10-25 07:43:56,177::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2015-10-25 07:43:56,178::vm::5204::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::event Suspended detail 2 opaque None
libvirtEventLoop::INFO::2015-10-25 07:43:56,178::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
...........
libvirtEventLoop::INFO::2015-10-25 07:43:56,180::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
specific error part in libvirt vm log:
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
...........
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
engine.log:
2015-10-25 07:44:48,945 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-40) [a43dcc8] VM diataal-prod-cas1 77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb moved from Up --> Paused
2015-10-25 07:44:49,003 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-40) [a43dcc8] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM diataal-prod-cas1 has paused due to unknown storage error.
Has anyone experienced the same problem, or does anyone know a way to solve this?
Kind regards,
Jasper
Re: Cannot Increase Hosted Engine VM Memory
by Serhiy Morhun
Hello, did anyone find a resolution for this issue? I'm having exactly the
same problem:
The Hosted Engine VM is running with 5344 MB of RAM; when trying to increase it to 8192, the change is not accepted because the difference is not divisible by 256.
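(For reference: 8192 - 5344 = 2848, which is not a multiple of 256, while 8160 - 5344 = 2816 = 11 x 256, so only the second value passes that check.)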
When trying to increase it to 8160, the change is accepted, but the log shows "Hotset memory: changed the amount of memory on VM HostedEngine from 5344 to 5344". At the same time, the amount of guaranteed memory does increase to 8160, which in turn starts generating error messages that the VM does not have all of its guaranteed RAM.
Serhiy Morhun
Ovirt-engine-ha cannot to see live status of Hosted Engine
by asm@pioner.kz
Good day to all.
I have some issues with oVirt 4.2.6, but the main one is this:
I have two CentOS 7 nodes with the same configuration and the latest oVirt 4.2.6, with the Hosted Engine disk on NFS storage.
Some of the virtual machines are also working fine.
But when the HostedEngine is running on one node (srv02.local), everything is fine.
After migrating it to the other node (srv00.local), I see that the agent cannot check the liveliness of the HostedEngine. After a few minutes the HostedEngine reboots, and after some time I see the same situation again. After the migration to the other node (srv00.local), all looks OK.
hosted-engine --vm-status command output when the HostedEngine is on the srv00 node:
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : srv02.local
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score : 0
stopped : False
Local maintenance : False
crc32 : ecc7ad2d
local_conf_timestamp : 78328
Host timestamp : 78328
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=78328 (Tue Sep 18 12:44:18 2018)
host-id=1
score=0
vm_conf_refresh_time=78328 (Tue Sep 18 12:44:18 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Fri Jan 2 03:49:58 1970
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : srv00.local
Host ID : 2
Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 1d62b106
local_conf_timestamp : 326288
Host timestamp : 326288
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=326288 (Tue Sep 18 12:44:21 2018)
host-id=2
score=3400
vm_conf_refresh_time=326288 (Tue Sep 18 12:44:21 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
Log agent.log from srv00.local:
MainThread::INFO::2018-09-18 12:40:51,749::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:40:52,052::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:01,066::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:01,374::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:11,393::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-09-18 12:41:11,393::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host srv02.local.pioner.kz (id 1): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=78128 (Tue Sep 18 12:40:58 2018)\nhost-id=1\nscore=0\nvm_conf_refresh_time=78128 (Tue Sep 18 12:40:58 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUnexpectedlyDown\nstopped=False\ntimeout=Fri Jan 2 03:49:58 1970\n', 'hostname': 'srv02.local.pioner.kz', 'alive': True, 'host-id': 1, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'unknown'}, 'score': 0, 'stopped': False, 'maintenance': False, 'crc32': 'e18e3f22', 'local_conf_timestamp': 78128, 'host-ts': 78128}
MainThread::INFO::2018-09-18 12:41:11,393::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'failed liveliness check', 'health': 'bad', 'vm': 'up', 'detail': 'Up'}, 'bridge': True, 'mem-free': 12763.0, 'maintenance': False, 'cpu-load': 0.0364, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-09-18 12:41:11,393::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:11,703::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:21,716::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:22,020::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:31,033::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:31,344::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
As we can see, the agent thinks that the HostedEngine is just powering up. I cannot do anything about it. I have already reinstalled the srv00 node many times without success.
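If I understand correctly, the "failed liveliness check" means the agent cannot reach the engine's health page, so I assume the same check can be reproduced manually from the host with something like this (assuming the default health servlet path):
# curl http://<engine_fqdn>/ovirt-engine/services/health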
One time I even had to uninstall the ovirt* and vdsm* packages. Another interesting point: after installing just "yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm" on this node, I tried to add the node from the engine web interface with the "Deploy" action, but the installation was unsuccessful because I had not installed ovirt-hosted-engine-ha on the node beforehand. I don't see in the documentation that this is needed before installing new hosts, but I mention it for information and checking. After installing ovirt-hosted-engine-ha, the node was installed with HostedEngine support. But the main issue did not change.
Thanks in advance for help.
BR,
Alexandr
Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing
by Strahil
I upgraded to RC3 and now I cannot power on any VM.
I constantly get an I/O error, but checking at the gluster level, I can dd from each disk or even create a new one.
Removing the HighAvailability doesn't help.
I guess I should restore the engine from the gluster snapshot and rollback via 'yum history undo last'.
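Roughly, for the rollback part I expect something like this (just a sketch, to be verified against the actual transaction list):
# yum history list
# yum history undo last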
Does anyone else have this issue?
Best Regards,
Strahil Nikolov
On Nov 13, 2019 15:31, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>
>
>
> On Wed, Nov 13, 2019 at 14:25 Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>>
>>
>>
>> On Wed, Nov 13, 2019 at 13:56 Florian Schmid <fschmid(a)ubimet.com> wrote:
>>>
>>> Hello,
>>>
>>> I have a question about bugs, which are flagged as [downstream clone - 4.3.7], but are not yet released.
>>>
>>> I'm talking about this bug:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1749202
>>>
>>> I can't see it in 4.3.7 release notes. Will it be included in a further release candidate? This fix is very important I think and I can't upgrade yet because of this bug.
>>
>>
>>
>> Looking at the bug, the fix is contained in the following tags ($ git tag --contains 12bd5cb1fe7c95e29b4065fca968913722fe9eaa):
>> ovirt-engine-4.3.6.6
>> ovirt-engine-4.3.6.7
>> ovirt-engine-4.3.7.0
>> ovirt-engine-4.3.7.1
>>
>> So the fix is already included in release oVirt 4.3.6.
>
>
> Sent a fix to 4.3.6 release notes: https://github.com/oVirt/ovirt-site/pull/2143. @Ryan Barry can you please review?
>
>
>>
>>
>>
>>
>>
>>
>>>
>>>
>>> BR Florian Schmid
>>>
>>> ________________________________
>>> Von: "Sandro Bonazzola" <sbonazzo(a)redhat.com>
>>> An: "users" <users(a)ovirt.org>
>>> Gesendet: Mittwoch, 13. November 2019 13:34:59
>>> Betreff: [ovirt-users] [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing
>>>
>>> The oVirt Project is pleased to announce the availability of the oVirt 4.3.7 Third Release Candidate for testing, as of November 13th, 2019.
>>>
>>> This update is a release candidate of the seventh in a series of stabilization updates to the 4.3 series.
>>> This is pre-release software. This pre-release should not be used in production.
>>>
>>> This release is available now on x86_64 architecture for:
>>> * Red Hat Enterprise Linux 7.7 or later (but <8)
>>> * CentOS Linux (or similar) 7.7 or later (but <8)
>>>
>>> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
>>> * Red Hat Enterprise Linux 7.7 or later (but <8)
>>> * CentOS Linux (or similar) 7.7 or later (but <8)
>>> * oVirt Node 4.3 (available for x86_64 only) has been built consuming CentOS 7.7 Release
>>>
>>> See the release notes [1] for known issues, new features and bugs fixed.
>>>
>>> While testing this release candidate please note that oVirt node now includes:
>>> - ansible 2.9.0
>>> - GlusterFS 6.6
>>>
>>> Notes:
>>> - oVirt Appliance is already available
>>> - oVirt Node is already available
>>>
>>> Additional Resources:
>>> * Read more about the oVirt 4.3.7 release highlights: http://www.ovirt.org/release/4.3.7/
>>> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
>>> * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
>>>
>>> [1] http://www.ovirt.org/release/4.3.7/
>>> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>>>
>>> --
>>>
>>> Sandro Bonazzola
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA
>>>
>>> sbonazzo(a)redhat.com
>>>
>>> Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.
>>>
>>
>>
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA
>>
>> sbonazzo(a)redhat.com
>>
>> Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.
>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbonazzo(a)redhat.com
>
> Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.
ovirt upgrade
by David David
Hi all,
How do I upgrade from 4.2.5 to 4.3.7?
What steps are necessary? Are they the same as when upgrading from 4.1 to 4.2?
# engine-upgrade-check
# yum update ovirt*setup*
# engine-setup
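I assume I also need to enable the 4.3 release repository first, something like this (the release43 package URL is my guess, following the same pattern as the 4.2 release package):
# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
and then run a full "yum update" on the engine and hosts afterwards? Please correct me if the steps differ.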
thanks
Low disk space on Storage
by suporte@logicworks.pt
Hi,
I'm running oVirt version 4.3.4.3-1.el7.
My filesystem disk has 30 GB free space.
I cannot start a VM due to a storage I/O error.
When trying to move the disk to another storage domain I get this error:
Error while executing action: Cannot move Virtual Disk. Low disk space on Storage Domain DATA4.
The sum of the pre-allocated disks equals the total size of the storage domain.
Any idea what I can do to move a disk to another storage domain?
Many thanks
--
Jose Ferradeira
http://www.logicworks.pt
Hyperconverged setup - storage architecture - scaling
by Leo David
Hello Everyone,
Reading through the document:
"Red Hat Hyperconverged Infrastructure for Virtualization 1.5
Automating RHHI for Virtualization deployment"
Regarding storage scaling, I see the following statements:
2.7. SCALING
Red Hat Hyperconverged Infrastructure for Virtualization is supported for one node, and for clusters of 3, 6, 9, and 12 nodes. The initial deployment is either 1 or 3 nodes. There are two supported methods of horizontally scaling Red Hat Hyperconverged Infrastructure for Virtualization:
1. Add new hyperconverged nodes to the cluster, in sets of three, up to the maximum of 12 hyperconverged nodes.
2. Create new Gluster volumes using new disks on existing hyperconverged nodes. You cannot create a volume that spans more than 3 nodes, or expand an existing volume so that it spans across more than 3 nodes at a time.
2.9.1. Prerequisites for geo-replication
Be aware of the following requirements and limitations when configuring geo-replication:
One geo-replicated volume only. Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for Virtualization) supports only one geo-replicated volume. Red Hat recommends backing up the volume that stores the data of your virtual machines, as this usually contains the most valuable data.
------
Also, in the oVirt Engine UI, when I add a brick to an existing volume I get the following warning:
"Expanding gluster volume in a hyper-converged setup is not recommended as it could lead to degraded performance. To expand storage for cluster, it is advised to add additional gluster volumes."
These things raise a couple of questions that may be easy for some of you to answer, but for me they create a bit of confusion...
I am also referring to the Red Hat product documentation, because I treat oVirt as being as production-ready as RHHI is.
1. Is there any reason for not going with distributed-replicated volumes (i.e. spreading one volume across 6, 9, or 12 nodes)?
It is recommended that in a 9-node scenario I should have 3 separate volumes, but then how should I deal with the following question:
2. If only one geo-replicated volume can be configured, how should I deal with replicating the 2nd and 3rd volumes for disaster recovery?
3. If the limit of hosts per datacenter is 250, then (in theory) would the recommended way of reaching this threshold be to create 20 separate oVirt logical clusters with 12 nodes each (with the datacenter managed from one HA engine)?
4. At present, I have the following 9-node cluster, with all hosts contributing 2 disks each to a single replica 3 distributed-replicated volume. They were added to the volume in the following order:
node1 - disk1
node2 - disk1
......
node9 - disk1
node1 - disk2
node2 - disk2
......
node9 - disk2
At the moment, the volume is arbitrated, but I intend to go for full
distributed replica 3.
Is this a bad setup? Why?
It obviously breaks the Red Hat recommended rules...
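For reference, my (possibly wrong) understanding is that with replica 3 the bricks are grouped into replica sets of three in the order they were added, so which disks mirror each other can be double-checked with something like (the volume name is a placeholder):
# gluster volume info <volume_name>
where each consecutive group of three "Brick" entries forms one replica set.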
Would anyone be so kind as to discuss these things?
Thank you very much!
Leo
--
Best regards, Leo David
Re: Ovirt OVN help needed
by Strahil
Hi Dominik,
Thanks for your reply.
On ovirt1 I got the following:
[root@ovirt1 openvswitch]# less ovn-controller.log-20191216.gz
2019-12-15T01:49:02.988Z|00032|vlog|INFO|opened log file /var/log/openvswitch/ovn-controller.log
2019-12-16T01:18:02.114Z|00033|vlog|INFO|closing log file
ovn-controller.log-20191216.gz (END)
Same is on the other node:
[root@ovirt2 openvswitch]# less ovn-controller.log-20191216.gz
2019-12-15T01:26:03.477Z|00028|vlog|INFO|opened log file /var/log/openvswitch/ovn-controller.log
2019-12-16T01:30:01.718Z|00029|vlog|INFO|closing log file
ovn-controller.log-20191216.gz (END)
The strange thing is that the geneve tunnels are there:
[root@ovirt1 ~]# ovs-vsctl show
c0e938f1-b5b5-4d5a-9cda-29dae2986f29
Bridge br-int
fail_mode: secure
Port "ovn-25cc77-0"
Interface "ovn-25cc77-0"
type: geneve
options: {csum="true", key=flow, remote_ip="192.168.1.64"} Port "ovn-566849-0"
Interface "ovn-566849-0"
type: geneve
options: {csum="true", key=flow, remote_ip="192.168.1.41"}
Port br-int
Interface br-int
type: internal
Port "vnet2"
Interface "vnet2"
ovs_version: "2.11.0"
[root@ovirt1 ~]# ovs-vsctl list ports
ovs-vsctl: unknown table "ports"
[root@ovirt1 ~]# ovs-vsctl list port
_uuid : fbf40569-925e-4430-a7c5-c78d58979bbc
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {}
fake_bridge : false
interfaces : [3207c0cb-3000-40f2-a850-83548f76f090]
lacp : []
mac : []
name : "vnet2"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : 8947f82d-a089-429b-8843-71371314cb52
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {}
fake_bridge : false
interfaces : [ec6a6688-e5d6-4346-ac47-ece1b8379440]
lacp : []
mac : []
name : br-int
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : 72d612be-853e-43e9-8f5c-ce66cef0bebe
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {ovn-chassis-id="5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3(a)192.168.1.41"}
fake_bridge : false
interfaces : [a31574fe-515b-420b-859d-7f2ac729638f]
lacp : []
mac : []
name : "ovn-566849-0"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : 2043a15f-ec39-4cc3-b875-7be00423dd7a
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {ovn-chassis-id="25cc77b3-046f-45c5-af0c-ffb2f77d73f1(a)192.168.1.64"}
fake_bridge : false
interfaces : [f9a9e3ff-070e-4044-b601-7f7394dc295f]
lacp : []
mac : []
name : "ovn-25cc77-0"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
[root@ovirt1 ~]#
[root@ovirt2 ~]# ovs-vsctl show
3dbab138-6b90-44c5-af05-b8a944c9bf20
Bridge br-int
fail_mode: secure
Port "ovn-baa019-0"
Interface "ovn-baa019-0"
type: geneve
options: {csum="true", key=flow, remote_ip="192.168.1.90"}
Port br-int
Interface br-int
type: internal
Port "vnet5"
Interface "vnet5"
Port "ovn-566849-0"
Interface "ovn-566849-0"
type: geneve
options: {csum="true", key=flow, remote_ip="192.168.1.41"}
ovs_version: "2.11.0"
[root@ovirt2 ~]# ovs-vsctl list port
_uuid : 151e1188-f07a-4750-a620-392a08e7e7fe
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {ovn-chassis-id="baa0199e-d1a4-484c-af13-a41bcad19dbc(a)192.168.1.90"}
fake_bridge : false
interfaces : [4d4bc12a-609a-4917-b839-d4f652acdc33]
lacp : []
mac : []
name : "ovn-baa019-0"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : 3a862f96-b3ec-46a9-bcf6-f385e5def410
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {}
fake_bridge : false
interfaces : [777f2819-ca27-4890-8d2f-11349ca0d398]
lacp : []
mac : []
name : br-int
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : a65109fa-f8b4-4670-8ae8-a2bd0bf6aba3
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {ovn-chassis-id="5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3(a)192.168.1.41"}
fake_bridge : false
interfaces : [ed442077-f897-4e0b-97a1-a8051e9c3d56]
lacp : []
mac : []
name : "ovn-566849-0"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : a1622e6f-fcd0-4a8a-b259-ca4d0ccf1cd2
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {}
fake_bridge : false
interfaces : [ca368654-54f3-49d0-a71c-8894426df6bf]
lacp : []
mac : []
name : "vnet5"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
[root@ovirt2 ~]#
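As a side note, if reinstalling the hosts turns out to be necessary, I wonder whether re-pointing ovn-controller manually would also be enough; my understanding (unverified) is that this is done on each host with something like:
# vdsm-tool ovn-config <engine_ip> <host_tunnel_ip>
where <engine_ip> is the OVN central (the engine) and <host_tunnel_ip> is the host's local tunnel endpoint; both are placeholders here.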
Best Regards,
Strahil Nikolov
On Dec 16, 2019 23:28, Dominik Holler <dholler(a)redhat.com> wrote:
>
>
>
> On Sat, Dec 14, 2019 at 11:36 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>
>> Hi Dominik,
>>
>> yes I was looking for those settings.
>>
>> I have added the external provider again, but I guess the mess is even bigger, as I made some stupid decisions (like removing 2 port groups :) without knowing what I was doing).
>> Sadly I can't remove all packages on the engine and hosts and reinstall them from scratch.
>>
>> Pip fails to install the openstacksdk on the engine (CentOS 7 is not great for such tasks), and my lack of knowledge of OVN makes it even more difficult.
>>
>> So the symptoms are that 2 machines can communicate with each other only if they are on the same host; on separate hosts, no communication happens.
>>
>
> This indicates that the tunnels between the hosts are not created.
> Can you please check the /var/log/openvswitch/ovn-controller.log on both hosts for errors and warnings, or share parts of the files here?
> If this does not point us to a problem, OVN has to be reconfigured. If possible, the easiest way to do this would be to ensure that
> ovirt-provider-ovn is the default network provider of the cluster of the hosts, then put one host after another into maintenance mode and reinstall.
>
>
>>
>> How I created the network via UI:
>>
>> 1. Networks - new
>> 2. Fill in the name
>> 3. Create on external provider
>> 4. Network Port security -> disabled (even undefined does not work)
>> 5.Connect to physical network -> ovirtmgmt
>>
>>
>> I would be happy to learn more about OVN and thus I would like to make it work.
>>
>> Here is some info from the engine:
>>
>> [root@engine ~]# ovn-nbctl show
>> switch 1288ed26-471c-4bc2-8a7d-4531f306f44c (ovirt-pxelan-2a88b2e0-d04b-4196-ad50-074501e4ed08)
>> port c1eba112-5eed-4c04-b25c-d3dcfb934546
>> addresses: ["56:6f:5a:65:00:06"]
>> port 8b52ab60-f474-4d51-b258-cb2e0a53c34a
>> type: localnet
>> addresses: ["unknown"]
>> port b2753040-881b-487a-92a1-9721da749be4
>> addresses: ["56:6f:5a:65:00:09"]
>> [root@engine ~]# ovn-sbctl show
>> Chassis "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
>> hostname: "ovirt3.localdomain"
>> Encap geneve
>> ip: "192.168.1.41"
>> options: {csum="true"}
>> Chassis "baa0199e-d1a4-484c-af13-a41bcad19dbc"
>> hostname: "ovirt1.localdomain"
>> Encap geneve
>> ip: "192.168.1.90"
>> options: {csum="true"}
>> Chassis "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
>> hostname: "ovirt2.localdomain"
>> Encap geneve
>> ip: "192.168.1.64"
>> options: {csum="true"}
>> Port_Binding "b2753040-881b-487a-92a1-9721da749be4"
>> Port_Binding &quo