Okay, both the BZ and ML posts are interesting and helpful. I'm kind of
surprised there seems to be so much trouble and confusion for what I
would have thought to be a very common setup. Are most people using
something else?
I think this gives me what I need for my next stab at doing this, but I'm
still puzzled about how to tear down what I have in oVirt so that I can
redo it. Specifically, I didn't see how to delete the existing iSCSI
connections. I've read that this can only be done through the REST API.
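Pieced together from the REST API docs, here is the rough shape of what
I plan to try. It is only a sketch: the engine URL, credentials, and
connection id are placeholders, and I haven't verified it end to end
yet.

import requests

API = "https://engine.example.com/ovirt-engine/api"  # placeholder
AUTH = ("admin@internal", "password")                 # placeholder
CA = "/etc/pki/ovirt-engine/ca.pem"                   # engine CA cert

# List all storage connections so the stale iSCSI ones can be found.
resp = requests.get(API + "/storageconnections", auth=AUTH, verify=CA,
                    headers={"Accept": "application/json"})
resp.raise_for_status()
for conn in resp.json().get("storage_connection", []):
    print(conn["id"], conn.get("type"), conn.get("address"),
          conn.get("target"))

# Delete one by id; as I understand it, the engine refuses to remove a
# connection that is still referenced by a storage domain.
conn_id = "PUT-THE-STALE-CONNECTION-ID-HERE"  # placeholder
requests.delete(API + "/storageconnections/" + conn_id,
                auth=AUTH, verify=CA).raise_for_status()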
I have managed to redo the interfaces on my hosts so that everything is
now on just 2 NICs each, leaving 2 NICs free for a foothold on a new
setup. From all of my experimentation, it would appear that my only
option is to create a new storage domain and export/import each disk
volume one by one. Maybe there's a migration option I have yet to see,
but I don't see any way around creating a new storage domain here.
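If it does come to moving every disk by hand, the moves at least look
scriptable. Here is roughly what I have in mind, using the Python SDK
(ovirt-engine-sdk4). Again only a sketch: the domain names and
credentials are placeholders, and I haven't run it against my cluster
yet.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",  # placeholder
    username="admin@internal",
    password="password",
    ca_file="/etc/pki/ovirt-engine/ca.pem",
)
try:
    system = connection.system_service()
    sds = system.storage_domains_service()
    # Placeholder domain names.
    old_sd = sds.list(search="name=old-iscsi-domain")[0]
    new_sd = sds.list(search="name=new-iscsi-domain")[0]

    disks = system.disks_service()
    for disk in disks.list():
        on_old = disk.storage_domains and any(
            sd.id == old_sd.id for sd in disk.storage_domains)
        if on_old:
            print("moving", disk.name)
            # The move is asynchronous; a real script should poll the
            # disk status before moving on to the next one.
            disks.disk_service(disk.id).move(
                storage_domain=types.StorageDomain(id=new_sd.id))
finally:
    connection.close()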
On 1/21/19 7:12 AM, Vinícius Ferrão wrote:
Hello people, in the past Maor Lipchuk (from RH) tried very hard to
help me and Uwe, but we were unable to converge on a solution.
This was discussed a year ago and, to my understanding, it is still an
oVirt bug. As of today, if you simply search DuckDuckGo for "ovirt iscsi
multipath not working", the third link points to this Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1474904
It is the one I mentioned, and it is extremely similar to John
Florian's case, which was my case too.
@John, take a look at the Bugzilla link and see if the desired
topology matches your case.
Regards,
> On 21 Jan 2019, at 05:21, Eitan Raviv <eraviv@redhat.com> wrote:
>
> Shani,
> Can you help here with iSCSI bonding?
> Thanks
>
> On Mon, Jan 21, 2019 at 7:51 AM Uwe Laverenz <uwe@laverenz.de> wrote:
>>
>> Hi John,
>>
>> On 20.01.19 at 18:32, John Florian wrote:
>>
>>> As for how to get there, whatever exactly that might look like, I'm
>>> also having trouble figuring that out. I figured I would transform the
>>> setup described below into one where each host has:
>>>
>>> * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
>>> * 1 NIC for my 1st storage net
>>> * 1 NIC for my 2nd storage net
>>
>> This is exactly the setup I use. I have run this successfully with
>> CentOS/LIO and FreeNAS iSCSI targets with good performance.
>>
>> In short:
>>
>> - 2 separate, isolated networks for iSCSI with dedicated adapters
>> on hosts and iSCSI target
>> - jumbo frames enabled
>> - no VLAN config needed on hosts, untagged VLANs on switch
>> - do _not_ use LACP, let multipathd handle failovers (see the
>>   example below)
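>>
>> For illustration, a minimal /etc/multipath.conf for a setup like this
>> might look roughly like the following. The vendor/product strings and
>> values are only placeholders; my actual config is in the post linked
>> below.
>>
>> defaults {
>>     polling_interval    5
>>     user_friendly_names no
>>     find_multipaths     yes
>> }
>>
>> devices {
>>     device {
>>         vendor                "FreeNAS"
>>         product               "iSCSI Disk"
>>         # one path group with all paths: multipathd load-balances
>>         # across both storage networks and handles failover, no LACP
>>         path_grouping_policy  multibus
>>         path_selector         "round-robin 0"
>>         failback              immediate
>>         no_path_retry         16
>>     }
>> }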
>>
>> Same experience as Vinícius: what did _not_ work for me is the
>> iSCSI bonding in oVirt. It seems to require that all storage IPs be
>> reachable from all other IPs, which is not the case in every setup.
>>
>> To get multipathing to work I use multipath directly:
>>
>> https://www.mail-archive.com/users@ovirt.org/msg42735.html
>>
>> I will post a bonnie++ result later. If you need more details, please
>> let me know.
>>
>> cu,
>> Uwe