On Tue, Jul 4, 2017 at 10:30 AM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
Thanks for your input, Simone.

On 4 Jul 2017, at 05:01, Simone Tiraboschi <stirabos@redhat.com> wrote:



On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
Thanks, Konstantin.

Just to be clear: the first deployment would be made on plain Ethernet interfaces, and later, after the Hosted Engine is deployed, I can convert the "ovirtmgmt" network to a LACP bond, right?

Another question: what about iSCSI multipath on the Self-Hosted Engine? I've searched the web and only found this issue: https://bugzilla.redhat.com/show_bug.cgi?id=1193961

It appears to be unsupported as of today, but there's a workaround in the comments. Is it safe to deploy this way? Should I use NFS instead?

It's probably not the most tested path, but once you have an engine you should be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI bond configuration.
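
For reference, an engine-created iSCSI bond essentially amounts to one open-iscsi iface per dedicated storage NIC, each with its own session to the target; below is a minimal sketch of the equivalent manual setup, where the interface names, portal address, and IQN are all hypothetical:

    # bind one iSCSI iface to each dedicated storage NIC
    iscsiadm -m iface -I iscsi-eth2 --op=new
    iscsiadm -m iface -I iscsi-eth2 --op=update -n iface.net_ifacename -v eth2
    iscsiadm -m iface -I iscsi-eth3 --op=new
    iscsiadm -m iface -I iscsi-eth3 --op=update -n iface.net_ifacename -v eth3

    # discover and log in through both ifaces
    iscsiadm -m discovery -t sendtargets -p 10.0.1.10:3260 -I iscsi-eth2 -I iscsi-eth3
    iscsiadm -m node -T iqn.2017-07.example:storage.lun1 --login

    # device-mapper multipath should now show two paths per LUN
    multipath -ll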

A different story is having ovirt-ha-agent connect to multiple IQNs or multiple targets on your SAN. This is currently not supported for the hosted-engine storage domain.
See:

Just to be clear, when we talk about bonding on iSCSI, we’re talking about iSCSI MPIO and not LACP (or something similar) on iSCSI interfaces, right?

Yes, correct.
 
In my case there are two different fabrics dedicated to iSCSI. They do not even traverse the same switch, so it's plain Ethernet (with some extras, like MTU 9216 and QoS enabled).

So I think we're talking about the unsupported feature of multiple IQNs, right?

Multiple IQNs on the host side (multiple initiators) should work through iSCSI bonding as managed by the oVirt engine:
https://www.ovirt.org/documentation/admin-guide/chap-Storage/#configuring-iscsi-multipathing

Multiple IQNs on your SAN are instead currently not supported by ovirt-ha-agent for the hosted-engine storage domain.
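
The workaround in the bug comments boils down to establishing the second path by hand, outside of ovirt-ha-agent, and letting multipathd coalesce the paths; here is a rough sketch under that assumption, with a hypothetical portal address and IQN:

    # log in to the hosted-engine target over the second fabric manually
    iscsiadm -m discovery -t sendtargets -p 10.0.2.10:3260
    iscsiadm -m node -T iqn.2017-07.example:storage.hosted-engine -p 10.0.2.10:3260 --login

    # make the extra session persistent across reboots
    iscsiadm -m node -T iqn.2017-07.example:storage.hosted-engine -p 10.0.2.10:3260 \
        --op=update -n node.startup -v automatic

    # verify that multipath picked up both paths
    multipath -ll

Keep in mind that ovirt-ha-agent only tracks the session it established itself, so maintaining this extra path remains your own responsibility.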

 

Thanks once again,
V.

 

Thanks,
V.

On 3 Jul 2017, at 21:55, Konstantin Shalygin <k0ste@k0ste.ru> wrote:

Hello,

I'm deploying oVirt for the first time and a question has emerged: what is the best practice for enabling LACP on oVirt Node? Should I create the 802.3ad bond during oVirt Node installation in Anaconda, or should it be done later, from within the Hosted Engine manager?

In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP bond for the management and server VLANs, while eth2 and eth3 are for multipath iSCSI (MPIO).

Thanks,
V.

Do all your network settings in ovirt-engine webadmin.
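
For completeness: the 802.3ad bond that webadmin creates ends up as an ordinary Linux bond on the node. VDSM generates the actual configuration itself, so the following RHEL-style ifcfg sketch is illustrative only, and the device names are examples:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=802.3ad miimon=100"
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes

Creating the bond from webadmin (Hosts -> Network Interfaces -> Setup Host Networks) keeps the engine aware of it, which is why doing it there is preferable to hand-editing these files.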
