On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
Thanks, Konstantin.

Just to be clear: the first deployment would be made on classic eth interfaces, and later, after the Hosted Engine deployment, I can convert the "ovirtmgmt" network to a LACP bond, right?
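If it helps to be concrete, this is roughly the conversion I have in mind, scripted against the engine API with the Python SDK (ovirtsdk4). It's only a sketch, untested; the engine URL, credentials, host name and NIC names are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=host1')[0]
host_service = hosts_service.host_service(host.id)

# Create bond0 (mode 4 = 802.3ad/LACP) from eth0+eth1 and move the
# ovirtmgmt attachment onto it. check_connectivity makes the engine
# roll back the change if the host drops off the network.
host_service.setup_networks(
    modified_bonds=[
        types.HostNic(
            name='bond0',
            bonding=types.Bonding(
                options=[
                    types.Option(name='mode', value='4'),
                    types.Option(name='miimon', value='100'),
                ],
                slaves=[
                    types.HostNic(name='eth0'),
                    types.HostNic(name='eth1'),
                ],
            ),
        ),
    ],
    modified_network_attachments=[
        types.NetworkAttachment(
            network=types.Network(name='ovirtmgmt'),
            host_nic=types.HostNic(name='bond0'),
        ),
    ],
    check_connectivity=True,
)

# Persist so the configuration survives host reboots.
host_service.commit_net_config()
connection.close()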

Another question: what about iSCSI multipath with the Self Hosted Engine? I've searched the net and only found this issue: https://bugzilla.redhat.com/show_bug.cgi?id=1193961

It appears to be unsupported as of today, but there's a workaround in the comments. Is it safe to deploy this way? Should I use NFS instead?

It's probably not the most tested path, but once you have an engine you should be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI bond configuration.
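For example, an iSCSI bond can also be created through the Python SDK (ovirtsdk4). A rough sketch, assuming a data center named 'Default' and two logical iSCSI networks named 'iscsi-a' and 'iscsi-b' (all placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

system_service = connection.system_service()
dcs_service = system_service.data_centers_service()
dc = dcs_service.list(search='name=Default')[0]
dc_service = dcs_service.data_center_service(dc.id)

# Look up the two logical networks that carry the iSCSI traffic.
networks = [
    n for n in system_service.networks_service().list()
    if n.name in ('iscsi-a', 'iscsi-b') and n.data_center.id == dc.id
]

# An iSCSI bond groups those networks so the hosts log in to the
# targets over every member network (multipath).
dc_service.iscsi_bonds_service().add(
    types.IscsiBond(
        name='iscsi-bond0',
        data_center=types.DataCenter(id=dc.id),
        networks=[types.Network(id=n.id) for n in networks],
    ),
)
connection.close()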

A different story is instead having ovirt-ha-agent connect to multiple IQNs or multiple targets over your SAN. This is currently not supported for the hosted-engine storage domain.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579
 

Thanks,
V.

Sent from my iPhone

On 3 Jul 2017, at 21:55, Konstantin Shalygin <k0ste@k0ste.ru> wrote:

Hello,

I’m deploying oVirt for the first time and a question has emerged: what is the best practice for enabling LACP on oVirt Node? Should I create the 802.3ad bond during oVirt Node installation in Anaconda, or should it be done afterwards, from the Hosted Engine manager?

In my deployment we have 4x GbE interfaces: eth0 and eth1 will be a LACP bond for the management and server VLANs, while eth2 and eth3 will carry the multipath iSCSI (MPIO) traffic.

Thanks,
V.

Do all your network settings in ovirt-engine webadmin.
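The webadmin flow maps to the same setup_networks call in the engine API. For the iSCSI NICs above, one static-IP attachment per path (no bond, since multipath wants independent paths) would look roughly like this; a sketch only, with assumed network names, NICs and addresses:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=host1')[0]
host_service = hosts_service.host_service(host.id)

def iscsi_attachment(network_name, nic_name, address):
    # One static IPv4 attachment per iSCSI NIC.
    return types.NetworkAttachment(
        network=types.Network(name=network_name),
        host_nic=types.HostNic(name=nic_name),
        ip_address_assignments=[
            types.IpAddressAssignment(
                assignment_method=types.BootProtocol.STATIC,
                ip=types.Ip(
                    address=address,
                    netmask='255.255.255.0',
                    version=types.IpVersion.V4,
                ),
            ),
        ],
    )

host_service.setup_networks(
    modified_network_attachments=[
        iscsi_attachment('iscsi-a', 'eth2', '10.10.1.11'),
        iscsi_attachment('iscsi-b', 'eth3', '10.10.2.11'),
    ],
)
host_service.commit_net_config()
connection.close()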
