Low disk space on Storage
by suporte@logicworks.pt
Hi,
I'm running oVirt version 4.3.4.3-1.el7.
My filesystem disk has 30 GB of free space.
I cannot start a VM due to a storage I/O error.
When trying to move the disk to another storage domain I get this error:
Error while executing action: Cannot move Virtual Disk. Low disk space on Storage Domain DATA4.
The sum of the pre-allocated disks equals the total size of the storage domain.
Any idea what I can do to move a disk to another storage domain?
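A hedged sketch of checking the engine-side thresholds that trigger this block. The engine-config keys below (FreeSpaceCriticalLowInGB, FreeSpaceLow) are assumptions from memory, not verified against 4.3; confirm them with `engine-config --list` on your engine. The snippet only echoes the commands rather than running them:

```shell
# Assumed engine-config keys; verify with `engine-config --list`.
# Dry run: collect and print the commands instead of executing them.
cmds="engine-config -g FreeSpaceCriticalLowInGB
engine-config -g FreeSpaceLow"
printf '%s\n' "$cmds"
```

If those values are larger than the domain's remaining free space, the engine refuses the move even though some space is technically available.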
Many thanks
--
Jose Ferradeira
http://www.logicworks.pt
4 years, 3 months
Re: Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
by Strahil
This seems quite odd.
What happens when you set the custom compatibility mode to 4.2?
I think it was in Edit -> system -> Advanced options -> Custom compatibility mode -> 4.2
Best Regards,
Strahil Nikolov
On Jan 12, 2020 13:24, Latchezar Filtchev <Latcho(a)aubg.bg> wrote:
>
> Hi Strahil,
>
>
>
> Unfortunately virtio drives cannot be used. Guest OS is CentOS 5.2, kernel 2.6.18-92.1.17.el5.centos.plusPAE
>
>
>
> The VM's window of network connectivity is very short (please see below). By the time I connect to the VM (via SPICE), network connectivity is already gone.
>
> tcpdump -i eth0 -v
>
> tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
>
>
>
> 0 packets captured
>
> 0 packets received by filter
>
> 0 packets dropped by kernel
>
>
>
> From 172.17.17.7 icmp_seq=60 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=61 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=62 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=63 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=64 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=65 Destination Host Unreachable
>
> 64 bytes from 172.17.16.16: icmp_seq=66 ttl=64 time=2.19 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=67 ttl=64 time=0.567 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=68 ttl=64 time=0.339 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=69 ttl=64 time=0.337 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=70 ttl=64 time=0.288 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=71 ttl=64 time=0.305 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=72 ttl=64 time=0.293 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=73 ttl=64 time=0.346 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=74 ttl=64 time=0.359 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=75 ttl=64 time=0.479 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=76 ttl=64 time=0.391 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=77 ttl=64 time=0.302 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=78 ttl=64 time=0.327 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=79 ttl=64 time=0.321 ms
>
> From 172.17.17.7 icmp_seq=125 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=126 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=127 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=128 Destination Host Unreachable
>
>
>
> Thank you!
>
> Best,
>
> Latcho
>
>
>
>
>
> From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
> Sent: Sunday, January 12, 2020 12:52 PM
> To: users <users(a)ovirt.org>; Latchezar Filtchev <Latcho(a)aubg.bg>
> Subject: Re: [ovirt-users] Re: Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
>
>
>
> Hi Lacho,
>
>
>
> Can you run the virtio drivers ?
>
>
>
> They are the most tested in KVM .
>
>
>
> Also, can you run a tcpdump (with e1000) and check what is going on when the network disappears?
>
>
>
> Best Regards,
>
> Strahil Nikolov
>
>
>
> On Sunday, January 12, 2020 at 09:22:52 GMT+2, Latchezar Filtchev <latcho(a)aubg.bg> wrote:
>
>
>
>
>
> Dear Strahil,
>
>
>
> I tried rtl8139. The behavior is the same.
>
>
>
> Best,
>
> Latcho
>
>
>
>
>
> From: Strahil <hunter86_bg(a)yahoo.com>
> Sent: Sunday, January 12, 2020 1:19 AM
> To: Latchezar Filtchev <Latcho(a)aubg.bg>; users <users(a)ovirt.org>
> Subject: Re: [ovirt-users] Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
>
>
>
> Hi Latcho,
>
> Most probably it's a bug.
> Have you tried virtio and/or rtl-based NIC ?
>
> As far as I know, CentOS 5 supports Virtio after Kernel >= 2.6.25 .
>
> Best Regards,
> Strahil Nikolov
>
> On Jan 11, 2020 19:54, Latchezar Filtchev <Latcho(a)aubg.bg> wrote:
>>
>> Hello Everyone,
>>
>>
>>
>> I am aware my guest OS is out of support but I experienced the following:
>>
>>
>>
>> oVirt – 4.2.8 (self-hosted engine)
>>
>> VM – CentOS 5.2; two VNIC’s ( driver used e1000) connected to different VLAN’s; - no issues with network connectivity
>>
>>
>>
>> After upgrade to oVirt 4.3.7
>>
>>
>>
>> The same VM starts normally. Network is available for several seconds (10 – 20 pings) and then it disappears. The machine works but no ping to/from VM. When I am returning the same machine (via export domain) to oVirt 4.2.8 environment – it works as expected.
>>
>>
>>
>> Can someone advise on this? Any help will be greatly appreciated.
>>
>>
>>
>> Thank you.
>>
>> Best,
>>
>> Latcho
>>
>>
>>
>>
>>
>>
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/DAPMFAA3XZO...
4 years, 3 months
Re: Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
by Strahil
Hi Latcho,
Most probably it's a bug.
Have you tried virtio and/or rtl-based NIC ?
As far as I know, CentOS 5 supports Virtio with kernel >= 2.6.25.
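The kernel-version constraint above can be sketched as a quick shell check. The guest kernel base version (2.6.18) is taken from earlier in this thread; on a live guest you would substitute `$(uname -r)`:

```shell
# Compare the guest kernel base version against the minimum virtio
# version mentioned above (2.6.25). 2.6.18 is the CentOS 5.2 kernel
# from this thread; use "$(uname -r)" on a real guest.
guest_kernel=2.6.18
required=2.6.25
lowest=$(printf '%s\n%s\n' "$guest_kernel" "$required" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  verdict="virtio-net should work"
else
  verdict="too old for virtio-net, keep e1000/rtl8139"
fi
echo "$verdict"
```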
Best Regards,
Strahil Nikolov
On Jan 11, 2020 19:54, Latchezar Filtchev <Latcho(a)aubg.bg> wrote:
>
> Hello Everyone,
>
>
>
> I am aware my guest OS is out of support but I experienced the following:
>
>
>
> oVirt – 4.2.8 (self-hosted engine)
>
> VM – CentOS 5.2; two VNIC’s ( driver used e1000) connected to different VLAN’s; - no issues with network connectivity
>
>
>
> After upgrade to oVirt 4.3.7
>
>
>
> The same VM starts normally. Network is available for several seconds (10 – 20 pings) and then it disappears. The machine works but no ping to/from VM. When I am returning the same machine (via export domain) to oVirt 4.2.8 environment – it works as expected.
>
>
>
> Can someone advise on this? Any help will be greatly appreciated.
>
>
>
> Thank you.
>
> Best,
>
> Latcho
>
>
>
>
>
>
4 years, 3 months
Upload ISO to data domain - checksum mismatch on the brick
by Strahil Nikolov
Hello Community,
I'm currently migrating all my ISOs from the ISO domain to the DATA domain, but it seems that the checksum (at brick level) mismatches. The problem is that I have managed to upload several ISOs (one of them 4.9 GB) without issues.
Restarting the ovirt-imageio-proxy.service and later the HostedEngine VM doesn't help.
The logs from the imageio proxy indicate no issues.
I thought that Gluster could be the problem, but this is not the case - I tested by copying (using ovirt1) between the 2 mountpoints, and the sha512sums match.
Trying to upload via ovirt2 also doesn't help.
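The mountpoint comparison described above can be scripted. This is a minimal sketch using temporary directories in place of the real gluster mountpoints (on an oVirt host the real paths would be somewhere under /rhev/data-center/mnt/glusterSD/, which is an assumption about the setup):

```shell
# Stand-in for comparing a file across two gluster mountpoints:
# copy a test file between two directories and compare sha512 sums.
src=$(mktemp -d)
dst=$(mktemp -d)
dd if=/dev/urandom of="$src/test.iso" bs=1024 count=64 2>/dev/null
cp "$src/test.iso" "$dst/test.iso"
a=$(sha512sum "$src/test.iso" | awk '{print $1}')
b=$(sha512sum "$dst/test.iso" | awk '{print $1}')
if [ "$a" = "$b" ]; then result="checksums match"; else result="checksum MISMATCH"; fi
echo "$result"
rm -rf "$src" "$dst"
```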
Any ideas are appreciated.
PS: I'm running latest oVirt.
Best Regards,
Strahil Nikolov
4 years, 3 months
HCI Disaster Recovery
by Christian Reiss
Hey folks,
- theoretical question, no live data in jeopardy -
Let's say a 3-way HCI cluster is up and running, with engine running,
all is well. The setup was done via gui, including gluster.
Now I would kill a host, poweroff & disk wipe. Simulating a full node
failure.
The remaining nodes should keep running on (3 copy sync, no arbiter),
vms keep running or will be restarted. I would reinstall using the ovirt
node installer on the "failed" node.
This would net me with a completely empty, no-gluster setup. What is the
oVirt way to recover from this point onward?
Thanks for your continued support! <3
-Christian.
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
4 years, 3 months
Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
by Latchezar Filtchev
Hello Everyone,
I am aware my guest OS is out of support, but I experienced the following:
oVirt 4.2.8 (self-hosted engine):
VM - CentOS 5.2; two vNICs (e1000 driver) connected to different VLANs; no issues with network connectivity.
After upgrade to oVirt 4.3.7:
The same VM starts normally. The network is available for several seconds (10-20 pings) and then disappears. The machine works, but there is no ping to/from the VM. When I return the same machine (via an export domain) to the oVirt 4.2.8 environment, it works as expected.
Can someone advise on this? Any help will be greatly appreciated.
Thank you.
Best,
Latcho
4 years, 3 months
Re: HCI Disaster Recovery
by Strahil
It's actually not so easy.
The fastest way to recover is just restore from backup.
Otherwise the flow should be:
1. Install the new node (new hostname will be easier).
2. Use gluster's replace-brick to change the dead brick with new one.
3. Once oVirt's integration with Gluster detects the change, you will be able to forcefully remove the dead node.
4. Add the newly installed node to the relevant cluster (with or without hosted-engine deployment)
5. Test to move a low-priority VM to the new host.
6. Power up a test VM on the new host to test the functionality.
7. You can make the new node SPM and test snapshots and new VM creation.
As you can see, in order to remove a missing node, it must not be part of the Gluster cluster.
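Step 2 of the flow above could look like the following dry-run sketch. All hostnames, volume names, and brick paths are made-up placeholders for a typical HCI layout; the commands are only echoed, not executed:

```shell
# Dry-run sketch of gluster replace-brick for each volume.
# DEAD/NEW hostnames and brick paths are illustrative placeholders.
DEAD=ovirt3.localdomain
NEW=ovirt3-new.localdomain
replace_cmds() {
  for vol in engine data vmstore; do
    echo "gluster volume replace-brick $vol $DEAD:/gluster_bricks/$vol/$vol $NEW:/gluster_bricks/$vol/$vol commit force"
  done
}
replace_cmds
```

After the replace-brick, gluster heals the new brick from the surviving replicas before the node is fully back in service.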
Best Regards,
Strahil Nikolov
On Jan 10, 2020 20:10, Christian Reiss wrote:
> Hey,
>
> is there really no ovirt native way to restore a single host and bring
> it back into the cluster?
>
> -Chris.
>
> On 07.01.2020 09:54, Christian Reiss wrote:
> > Hey folks,
> >
> > - theoretical question, no live data in jeopardy -
> >
> > Let's say a 3-way HCI cluster is up and running, with engine running,
> > all is well. The setup was done via gui, including gluster.
> >
> > Now I would kill a host, poweroff & disk wipe. Simulating a full node
> > failure.
> >
> > The remaining nodes should keep running on (3 copy sync, no arbiter),
> > vms keep running or will be restarted. I would reinstall using the ovirt
> > node installer on the "failed" node.
> >
> > This would net me with a completely empty, no-gluster setup. What is the
> > oVirt way to recover from this point onward?
> >
> > Thanks for your continued support! <3
> > -Christian.
>
> --
> Christian Reiss - email(a)christian-reiss.de  /"\  ASCII Ribbon
> christian(a)reiss.nrw                         \ /  Campaign
>                                              X   against HTML
> XMPP chris(a)alpha-labs.net                   / \  in eMails
> WEB christian-reiss.de, reiss.nrw
>
> GPG Retrieval http://gpg.christian-reiss.de
> GPG ID ABCD43C5, 0x44E29126ABCD43C5
> GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5
>
> "It's better to reign in hell than to serve in heaven.",
> John Milton, Paradise lost.
4 years, 3 months
Hyperconverged setup - storage architecture - scaling
by Leo David
Hello Everyone,
Reading through the document:
"Red Hat Hyperconverged Infrastructure for Virtualization 1.5
Automating RHHI for Virtualization deployment"
Regarding storage scaling, I see the following statements:
2.7. SCALING
Red Hat Hyperconverged Infrastructure for Virtualization is supported for one node, and for clusters of 3, 6, 9, and 12 nodes. The initial deployment is either 1 or 3 nodes. There are two supported methods of horizontally scaling Red Hat Hyperconverged Infrastructure for Virtualization:
1. Add new hyperconverged nodes to the cluster, in sets of three, up to the maximum of 12 hyperconverged nodes.
2. Create new Gluster volumes using new disks on existing hyperconverged nodes. You cannot create a volume that spans more than 3 nodes, or expand an existing volume so that it spans across more than 3 nodes at a time.

2.9.1. Prerequisites for geo-replication
Be aware of the following requirements and limitations when configuring geo-replication:
One geo-replicated volume only: Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for Virtualization) supports only one geo-replicated volume. Red Hat recommends backing up the volume that stores the data of your virtual machines, as this usually contains the most valuable data.
------
Also, in the oVirt Engine UI, when I add a brick to an existing volume I get the following warning:
"Expanding gluster volume in a hyper-converged setup is not recommended as it could lead to degraded performance. To expand storage for cluster, it is advised to add additional gluster volumes."
These things raise a couple of questions that may be easy for some of you to answer, but they create a bit of confusion for me.
I am also referring to the Red Hat product documentation, because I treat oVirt as being as production-ready as RHHI.
1. Is there any reason for not going to distributed-replicated volumes (i.e. spreading one volume across 6, 9, or 12 nodes)?
- i.e. it is recommended that in a 9-node scenario I should have 3 separate volumes, but then how should I deal with the following question:
2. If only one geo-replicated volume can be configured, how should I deal with replicating the 2nd and 3rd volumes for disaster recovery?
3. If the limit of hosts per datacenter is 250, then (in theory) the recommended way of reaching this threshold would be to create 20 separate oVirt logical clusters with 12 nodes each (and the datacenter managed from one HA engine)?
4. At present, I have one 9-node cluster, with all hosts contributing 2 disks each to a single replica 3 distributed-replicated volume. They were added to the volume in the following order:
node1 - disk1
node2 - disk1
......
node9 - disk1
node1 - disk2
node2 - disk2
......
node9 - disk2
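For reference, the brick ordering above matters because with `replica 3` gluster groups every 3 consecutive bricks on the command line into one replica set. A dry-run sketch that only prints a hypothetical create command (hostnames and brick paths are placeholders, not the poster's actual layout):

```shell
# Build the brick list in the order described above: all first disks
# of node1..node9, then all second disks. Every 3 consecutive bricks
# form one replica set, so each set lands on 3 different nodes.
bricks=""
for disk in 1 2; do
  for node in 1 2 3 4 5 6 7 8 9; do
    bricks="$bricks node$node:/gluster_bricks/disk$disk"
  done
done
cmd="gluster volume create data replica 3$bricks"
echo "$cmd"
```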
At the moment the volume is arbitrated, but I intend to go for a full distributed replica 3.
Is this a bad setup? Why?
It obviously breaks the Red Hat recommended rules...
Would anyone be so kind as to discuss these things?
Thank you very much!
Leo
--
Best regards, Leo David
4 years, 3 months
Re: Ovirt OVN help needed
by Strahil
Hi Dominik,
Thanks for your reply.
On ovirt1 I got the following:
[root@ovirt1 openvswitch]# less ovn-controller.log-20191216.gz
2019-12-15T01:49:02.988Z|00032|vlog|INFO|opened log file /var/log/openvswitch/ovn-controller.log
2019-12-16T01:18:02.114Z|00033|vlog|INFO|closing log file
ovn-controller.log-20191216.gz (END)
Same is on the other node:
[root@ovirt2 openvswitch]# less ovn-controller.log-20191216.gz
2019-12-15T01:26:03.477Z|00028|vlog|INFO|opened log file /var/log/openvswitch/ovn-controller.log
2019-12-16T01:30:01.718Z|00029|vlog|INFO|closing log file
ovn-controller.log-20191216.gz (END)
The strange thing is that the geneve tunnels are there:
[root@ovirt1 ~]# ovs-vsctl show
c0e938f1-b5b5-4d5a-9cda-29dae2986f29
Bridge br-int
fail_mode: secure
Port "ovn-25cc77-0"
Interface "ovn-25cc77-0"
type: geneve
options: {csum="true", key=flow, remote_ip="192.168.1.64"}
Port "ovn-566849-0"
Interface "ovn-566849-0"
type: geneve
options: {csum="true", key=flow, remote_ip="192.168.1.41"}
Port br-int
Interface br-int
type: internal
Port "vnet2"
Interface "vnet2"
ovs_version: "2.11.0"
[root@ovirt1 ~]# ovs-vsctl list ports
ovs-vsctl: unknown table "ports"
[root@ovirt1 ~]# ovs-vsctl list port
_uuid : fbf40569-925e-4430-a7c5-c78d58979bbc
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {}
fake_bridge : false
interfaces : [3207c0cb-3000-40f2-a850-83548f76f090]
lacp : []
mac : []
name : "vnet2"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : 8947f82d-a089-429b-8843-71371314cb52
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {}
fake_bridge : false
interfaces : [ec6a6688-e5d6-4346-ac47-ece1b8379440]
lacp : []
mac : []
name : br-int
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : 72d612be-853e-43e9-8f5c-ce66cef0bebe
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {ovn-chassis-id="5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3(a)192.168.1.41"}
fake_bridge : false
interfaces : [a31574fe-515b-420b-859d-7f2ac729638f]
lacp : []
mac : []
name : "ovn-566849-0"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : 2043a15f-ec39-4cc3-b875-7be00423dd7a
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {ovn-chassis-id="25cc77b3-046f-45c5-af0c-ffb2f77d73f1(a)192.168.1.64"}
fake_bridge : false
interfaces : [f9a9e3ff-070e-4044-b601-7f7394dc295f]
lacp : []
mac : []
name : "ovn-25cc77-0"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
[root@ovirt1 ~]#
[root@ovirt2 ~]# ovs-vsctl show
3dbab138-6b90-44c5-af05-b8a944c9bf20
Bridge br-int
fail_mode: secure
Port "ovn-baa019-0"
Interface "ovn-baa019-0"
type: geneve
options: {csum="true", key=flow, remote_ip="192.168.1.90"}
Port br-int
Interface br-int
type: internal
Port "vnet5"
Interface "vnet5"
Port "ovn-566849-0"
Interface "ovn-566849-0"
type: geneve
options: {csum="true", key=flow, remote_ip="192.168.1.41"}
ovs_version: "2.11.0"
[root@ovirt2 ~]# ovs-vsctl list port
_uuid : 151e1188-f07a-4750-a620-392a08e7e7fe
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {ovn-chassis-id="baa0199e-d1a4-484c-af13-a41bcad19dbc(a)192.168.1.90"}
fake_bridge : false
interfaces : [4d4bc12a-609a-4917-b839-d4f652acdc33]
lacp : []
mac : []
name : "ovn-baa019-0"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : 3a862f96-b3ec-46a9-bcf6-f385e5def410
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {}
fake_bridge : false
interfaces : [777f2819-ca27-4890-8d2f-11349ca0d398]
lacp : []
mac : []
name : br-int
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : a65109fa-f8b4-4670-8ae8-a2bd0bf6aba3
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {ovn-chassis-id="5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3(a)192.168.1.41"}
fake_bridge : false
interfaces : [ed442077-f897-4e0b-97a1-a8051e9c3d56]
lacp : []
mac : []
name : "ovn-566849-0"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
_uuid : a1622e6f-fcd0-4a8a-b259-ca4d0ccf1cd2
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {}
fake_bridge : false
interfaces : [ca368654-54f3-49d0-a71c-8894426df6bf]
lacp : []
mac : []
name : "vnet5"
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
[root@ovirt2 ~]#
Best Regards,
Strahil Nikolov
On Dec 16, 2019 23:28, Dominik Holler <dholler(a)redhat.com> wrote:
>
>
>
> On Sat, Dec 14, 2019 at 11:36 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>
>> Hi Dominik,
>>
>> yes I was looking for those settings.
>>
>> I have added the external provider again, but I guess the mess is even bigger, as I made some stupid decisions (like removing 2 port groups :) without knowing what I was doing).
>> Sadly I can't remove all packages on the engine and hosts and reinstall them from scratch.
>>
>> pip fails to install openstacksdk (CentOS 7 is not great for such tasks) on the engine, and my lack of knowledge of OVN makes it even more difficult.
>>
>> So the symptoms are that 2 machines can communicate with each other only if they are on the same host; when on separate hosts, no communication happens.
>>
>
> This indicates that the tunnels between the hosts are not created.
> Can you please check the /var/log/openvswitch/ovn-controller.log on both hosts for errors and warnings, or share parts of the files here?
> If this does not point us to a problem, OVN has to be reconfigured. If possible, the easiest way to do this would be to ensure that
> ovirt-provider-ovn is the default network provider of the hosts' cluster, then put one host after another into maintenance mode and reinstall.
>
>
>>
>> How I created the network via UI:
>>
>> 1. Networks - new
>> 2. Fill in the name
>> 3. Create on external provider
>> 4. Network Port security -> disabled (even undefined does not work)
>> 5.Connect to physical network -> ovirtmgmt
>>
>>
>> I would be happy to learn more about OVN and thus I would like to make it work.
>>
>> Here is some info from the engine:
>>
>> [root@engine ~]# ovn-nbctl show
>> switch 1288ed26-471c-4bc2-8a7d-4531f306f44c (ovirt-pxelan-2a88b2e0-d04b-4196-ad50-074501e4ed08)
>> port c1eba112-5eed-4c04-b25c-d3dcfb934546
>> addresses: ["56:6f:5a:65:00:06"]
>> port 8b52ab60-f474-4d51-b258-cb2e0a53c34a
>> type: localnet
>> addresses: ["unknown"]
>> port b2753040-881b-487a-92a1-9721da749be4
>> addresses: ["56:6f:5a:65:00:09"]
>> [root@engine ~]# ovn-sbctl show
>> Chassis "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
>> hostname: "ovirt3.localdomain"
>> Encap geneve
>> ip: "192.168.1.41"
>> options: {csum="true"}
>> Chassis "baa0199e-d1a4-484c-af13-a41bcad19dbc"
>> hostname: "ovirt1.localdomain"
>> Encap geneve
>> ip: "192.168.1.90"
>> options: {csum="true"}
>> Chassis "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
>> hostname: "ovirt2.localdomain"
>> Encap geneve
>> ip: "192.168.1.64"
>> options: {csum="true"}
>> Port_Binding "b2753040-881b-487a-92a1-9721da749be4"
>> Port_Binding &quo
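If reconfiguration turns out to be needed, the per-host step can be sketched as below. I believe vdsm ships an `ovn-config` subcommand (`vdsm-tool ovn-config ENGINE_IP LOCAL_TUNNEL_IP`), but treat that name as an assumption and confirm with `vdsm-tool --help` on the hosts. The host IPs are the geneve tunnel IPs from the ovs-vsctl output earlier in the thread; the engine IP is a placeholder. Dry run only:

```shell
# Dry-run sketch: print the assumed vdsm-tool ovn-config invocation
# for each host. ENGINE_IP is a placeholder; host IPs are the geneve
# tunnel IPs seen in the ovs-vsctl output above.
ENGINE_IP=192.168.1.100   # placeholder: substitute your engine's address
ovn_cmds() {
  for host_ip in 192.168.1.90 192.168.1.64 192.168.1.41; do
    echo "vdsm-tool ovn-config $ENGINE_IP $host_ip"
  done
}
ovn_cmds
```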
4 years, 3 months