I believe the user community deserves a little background on this
decision. OVS has been "experimental" since ovirt-4.0.z, with migration
disabled by default. We were not aware of any huge benefit it had
over the default Linux bridge, and did not expect people to be using
it in important deployments.
I would love to hear your experience regarding our OVS support, and
why you have chosen it.
In ovirt-4.2.0, the way the VM libvirt definition is built changed
considerably: it now takes place in ovirt-engine, not in vdsm.
The vdsm code that supports OVS connectivity was disabled in
ovirt-4.2.3, which means that the experimental OVS feature is indeed
no longer available for direct usage (unless you still use cluster
compatibility level 4.1).
However, as Thomas Davis explains, with OVN + physnet, ovirt-4.2 gives
you matching functionality, including live migration, out of the box.
The OVS switchtype was upgraded from "experimental" to "tech-preview".
I'd like to drop the advisory altogether, but we keep it because we
still have bugs and missing features compared to Linux bridge clusters.
We've blocked changing the switchtype of existing clusters because
this functionality is buggy (particularly on the SPM host), and as of
ovirt-4.2, we do not have code to support live migration from a Linux
bridge host to an OVS one; only cold migration is possible. We kept the
operation open over REST to allow testing and bugfixes of that flow, as
well as usage by careful users.
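For reference, the REST call to flip an existing cluster looks roughly
like this (engine host, credentials and the cluster UUID are
placeholders; the usual certificate options apply):

    curl -X PUT \
      -u 'admin@internal:password' \
      -H 'Content-Type: application/xml' \
      -d '<cluster><switch_type>ovs</switch_type></cluster>' \
      'https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_UUID'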
Thanks for using oVirt and its new features, and for engaging with the
community.
Regards,
Dan.
On Tue, May 22, 2018 at 9:20 PM, <tadavis@lbl.gov> wrote:
> The answer is..
>
> OVN replaced OVS as the networking technology. You cannot switch back to
> legacy; they disabled switching between OVS and legacy in the default
> (1st) datacenter using the GUI. You can, however, use Ansible to switch
> it, as sketched below.
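>
> Something along these lines should do it with the oVirt Ansible modules
> (an untested sketch; engine URL, credentials and cluster names are
> placeholders, and I'm assuming ovirt_cluster's switch_type parameter):
>
>     # playbook snippet: flip the Default cluster to OVS
>     - name: Log in to the engine
>       ovirt_auth:
>         url: https://engine.example.com/ovirt-engine/api
>         username: admin@internal
>         password: "{{ engine_password }}"
>
>     - name: Set the cluster switch type to OVS
>       ovirt_cluster:
>         auth: "{{ ovirt_auth }}"
>         name: Default
>         data_center: Default
>         switch_type: ovs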
>
> Remove the VDSM OVS setting; it will just mess you up, and it's not
> supported in 4.2.
>
> To be able to migrate a VM in 4.2, you have to use OVN with OVS.
>
> I did this a few months back, on a 4.2.2 hosted-engine setup:
>
> 0) To set up a node in a cluster, make sure the cluster is in OVS, not
> legacy.
>
> 1) Make sure you have an OVN controller set up somewhere. The default
> appears to be the ovirt-hosted-engine.
>    a) You should also have the external network provider for OVN
> configured; see the web interface.
>
> 2) When you install the node, make sure it has openvswitch installed and
> running, i.e.:
>    a) 'systemctl status openvswitch' says it's up and running (be sure
> it's enabled, too; see the commands below).
>    b) 'ovs-vsctl show' has vdsm bridges listed, and possibly a br-int
> bridge.
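>
> For example, to enable and verify it in one go (a sketch; the service
> name is as on my nodes):
>
>     systemctl enable openvswitch
>     systemctl start openvswitch
>     systemctl status openvswitch
>     ovs-vsctl show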
>
> 3) If there is no br-int bridge, do 'vdsm-tool ovn-config
> ovn-controller-ip host-ip', as in the example below.
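>
> For example (both IPs are placeholders - the first is the OVN
> central/controller, the second is this host's local tunnel endpoint):
>
>     vdsm-tool ovn-config 192.168.85.2 192.168.85.90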
>
> 4) When you have configured several nodes in the OVN, you should see
> them listed as geneve devices in 'ovs-vsctl show', e.g.:
>
> This is a 4-node cluster, so the other 3 nodes are expected:
>
> [root@d8-r12-c1-n3 ~]# ovs-vsctl show
> 42df28ba-ffd6-4e61-b7b2-219576da51ab
> Bridge br-int
> fail_mode: secure
> Port "ovn-27461b-0"
> Interface "ovn-27461b-0"
> type: geneve
>                 options: {csum="true", key=flow, remote_ip="192.168.85.91"}
> Port "vnet1"
> Interface "vnet1"
> Port "ovn-a1c08f-0"
> Interface "ovn-a1c08f-0"
> type: geneve
>                 options: {csum="true", key=flow, remote_ip="192.168.85.87"}
>         Port "patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831"
>             Interface "patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831"
>                 type: patch
>                 options: {peer="patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int"}
> Port "vnet0"
> Interface "vnet0"
>         Port "patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec"
>             Interface "patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec"
>                 type: patch
>                 options: {peer="patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int"}
> Port "ovn-8da92c-0"
> Interface "ovn-8da92c-0"
> type: geneve
>                 options: {csum="true", key=flow, remote_ip="192.168.85.95"}
> Port br-int
> Interface br-int
> type: internal
> Bridge "vdsmbr_LZmj3uJ1"
> Port "vdsmbr_LZmj3uJ1"
> Interface "vdsmbr_LZmj3uJ1"
> type: internal
> Port "net211"
> tag: 211
> Interface "net211"
> type: internal
> Port "eno2"
> Interface "eno2"
> Bridge "vdsmbr_e7rcnufp"
> Port "vdsmbr_e7rcnufp"
> Interface "vdsmbr_e7rcnufp"
> type: internal
> Port ipmi
> tag: 20
> Interface ipmi
> type: internal
> Port ovirtmgmt
> tag: 50
> Interface ovirtmgmt
> type: internal
>         Port "patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int"
>             Interface "patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int"
>                 type: patch
>                 options: {peer="patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831"}
> Port "eno1"
> Interface "eno1"
>         Port "patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int"
>             Interface "patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int"
>                 type: patch
>                 options: {peer="patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec"}
> ovs_version: "2.7.3"
>
> 5) Create in the cluster the legacy-style bridge networks - i.e.,
> ovirtmgmt, etc. Do this just like you were creating them for the
> legacy network. Define the VLAN #, the MTU, etc.
>
> 6) Now create, in the network config, the OVN networks - i.e.,
> ovn-ovirtmgmt is on an external provider (select OVN) - and make sure
> 'connect to physical network' is checked and the correct network from
> step 5 is picked. Save this off.
>
> This will connect the two networks together in a bridge, and all
> services (e.g. DHCP, DNS) are visible to both.
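>
> If you'd rather do step 6 over the API than the GUI, 4.2 added
> attributes for exactly this - a rough sketch based on my reading of the
> feature page linked below, with all IDs and credentials as placeholders:
>
>     curl -X POST \
>       -u 'admin@internal:password' \
>       -H 'Content-Type: application/xml' \
>       -d '<network>
>             <name>ovn-net211</name>
>             <data_center id="DC_UUID"/>
>             <external_provider id="OVN_PROVIDER_UUID"/>
>             <external_provider_physical_network id="NET211_UUID"/>
>           </network>' \
>       'https://engine.example.com/ovirt-engine/api/networks'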
>
> 7) When you create the VM, select the OVN network interface, not the
> legacy bridge interface (this is why I decided to prefix with 'ovn-').
>
> 8) Create the VM, start it, migrate it, stop and restart it, etc.; it
> should all work now.
>
> Lots of reading, lots of interesting stuff found... I finally figured
> this out after reading a bunch of bug fixes for the latest RC (released
> today).
>
> The only doc link:
>
>
> https://ovirt.org/develop/release-management/features/network/provider-ph...