[Users] Keepalived on oVirt Hosts has engine networking issues
Andrew Lau
andrew at andrewklau.com
Tue Dec 17 02:21:11 UTC 2013
Hi,
My workaround was successful, although I'm not sure whether it should be
reported as a bug.

With keepalived running, the floating IP address shows up as the "IP for the
interface" and ends up in the ovirt-engine configuration, even though the
interface is manually configured with a different address. I assume this is
because the values come from `ip a`.
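A quick way to see what's going on from the host side (the interface name and
the grep pattern below are just from my setup and only illustrative):

# On the host currently holding the VIP, the floating address shows up as an
# extra inet entry on the interface keepalived is bound to:
ip -4 addr show dev bond0.3

# VDSM appears to report these live addresses back to the engine, so the VIP
# rather than the statically configured IPADDR can end up in getVdsCaps:
vdsClient -s 0 getVdsCaps | grep -A 3 storage_network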
Live migrations were failing with "destination is same as source", but I'm
unsure of the logic behind that. I didn't have time to dig deeper, as the
issue cleared once I moved the keepalived service to a different interface
that isn't used for migration.
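In case it's useful to anyone else, this is roughly the shape of the
keepalived config after the move - a minimal sketch only; the interface name
(bond0.4), virtual_router_id and priority are placeholders for whichever
spare VLAN you dedicate to VRRP, outside of anything oVirt uses for
management or migration:

# /etc/keepalived/keepalived.conf
vrrp_instance gluster_vip {
    state BACKUP             # both nodes start as BACKUP; priority elects the master
    interface bond0.4        # VRRP bound to a VLAN oVirt doesn't use for migration
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 1
    virtual_ipaddress {
        172.16.1.5/32        # the floating IP the storage domain points at
    }
}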
Andrew.
On Mon, Dec 16, 2013 at 6:32 PM, Itamar Heim <iheim at redhat.com> wrote:
> On 12/01/2013 11:30 AM, Andrew Lau wrote:
>
>> I put the management and storage on separate VLANs to try to avoid the
>> floating IP address issue temporarily. I also bonded the two NICs, but I
>> don't think that should matter.
>>
>> The other server was brought down the other day for some maintenance; I
>> hope to get it back up in a few days. But I can tell you a few things I
>> noticed:
>>
>> ip a - it'll list the floating IP on both servers even if it's only active
>> on one.
>>
>> I've got about 10 other networks so I've snipped out quite a bit.
>>
>> # ip a
>> <snip>
>> 130: bond0.2@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500
>> qdisc noqueue state UP
>> link/ether 00:10:18:2e:6a:cb brd ff:ff:ff:ff:ff:ff
>> inet 172.16.0.11/24 brd 172.16.0.255 scope global bond0.2
>> inet6 fe80::210:18ff:fe2e:6acb/64 scope link
>> valid_lft forever preferred_lft forever
>> 131: bond0.3@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500
>> qdisc noqueue state UP
>> link/ether 00:10:18:2e:6a:cb brd ff:ff:ff:ff:ff:ff
>> inet 172.16.1.11/24 brd 172.16.1.255 scope global bond0.3
>> inet 172.16.1.5/32 scope global bond0.3
>> inet6 fe80::210:18ff:fe2e:6acb/64 scope link
>> valid_lft forever preferred_lft forever
>> </snip>
>>
>>
>> # vdsClient -s 0 getVdsCaps
>> <snip>
>> 'storage_network': {'addr': '172.16.1.5',
>> 'bridged': False,
>> 'gateway': '172.16.1.1',
>> 'iface': 'bond0.3',
>> 'interface': 'bond0.3',
>> 'ipv6addrs':
>> ['fe80::210:18ff:fe2e:6acb/64'],
>> 'ipv6gateway': '::',
>> 'mtu': '1500',
>> 'netmask': '255.255.255.255',
>> 'qosInbound': '',
>> 'qosOutbound': ''},
>> <snip>
>> vlans = {'bond0.2': {'addr': '172.16.0.11',
>> 'cfg': {'BOOTPROTO': 'none',
>> 'DEFROUTE': 'yes',
>> 'DEVICE': 'bond0.2',
>> 'GATEWAY': '172.16.0.1',
>> 'IPADDR': '172.16.0.11',
>> 'NETMASK': '255.255.255.0',
>> 'NM_CONTROLLED': 'no',
>> 'ONBOOT': 'yes',
>> 'VLAN': 'yes'},
>> 'iface': 'bond0',
>> 'ipv6addrs': ['fe80::210:18ff:fe2e:6acb/64'],
>> 'mtu': '1500',
>> 'netmask': '255.255.255.0',
>> 'vlanid': 2},
>> 'bond0.3': {'addr': '172.16.1.5',
>> 'cfg': {'BOOTPROTO': 'none',
>> 'DEFROUTE': 'no',
>> 'DEVICE': 'bond0.3',
>> 'IPADDR': '172.16.1.11',
>> 'NETMASK': '255.255.255.0',
>> 'NM_CONTROLLED': 'no',
>> 'ONBOOT': 'yes',
>> 'VLAN': 'yes'},
>> 'iface': 'bond0',
>> 'ipv6addrs': ['fe80::210:18ff:fe2e:6acb/64'],
>> 'mtu': '1500',
>> 'netmask': '255.255.255.255',
>> 'vlanid': 3},
>>
>> I hope that's enough info; if not, I'll post the full config of both when
>> I can bring it back up.
>>
>> Cheers,
>> Andrew.
>>
>>
>> On Sun, Dec 1, 2013 at 7:15 PM, Assaf Muller <amuller at redhat.com> wrote:
>>
>>
>> Could you please attach the output of:
>> "vdsClient -s 0 getVdsCaps"
>> (Or without the -s, whichever works)
>> And:
>> "ip a"
>>
>> On both hosts?
>> You seem to have made changes since the documentation on the link
>> you provided, like separating the management and storage via VLANs
>> on eth0. Any other changes?
>>
>>
>> Assaf Muller, Cloud Networking Engineer
>> Red Hat
>>
>> ----- Original Message -----
>> From: "Andrew Lau" <andrew at andrewklau.com
>> <mailto:andrew at andrewklau.com>>
>> To: "users" <users at ovirt.org <mailto:users at ovirt.org>>
>> Sent: Sunday, December 1, 2013 4:55:32 AM
>> Subject: [Users] Keepalived on oVirt Hosts has engine networking
>> issues
>>
>> Hi,
>>
>> I have a scenario where the gluster and ovirt hosts run on the same
>> box. To keep the gluster volumes highly available in case a box drops,
>> I'm using keepalived across the boxes and using that floating IP as the
>> address for the storage domain. I documented my setup here in case anyone
>> needs a little more info:
>> http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/
>>
>> However, the engine seems to be picking up the floating IP assigned
>> to keepalived as the interface address and messing with the ovirtmgmt
>> migration network. Migrations are failing because the engine assigns my
>> floating IP to the ovirtmgmt bridge, but the address isn't actually
>> there on most hosts (except one), so vdsm seems to report
>> "destination same as source".
>>
>> I've since created a new VLAN interface just for storage to avoid
>> the ovirtmgmt conflict, but the engine will still pick up the wrong
>> IP on the storage VLAN because of keepalived. This means I can't use
>> the save network feature within the engine, as it'll save the
>> floating IP rather than the one already configured there. Is this a bug,
>> or just the way it's designed?
>>
>> eth0.2 -> ovirtmgmt (172.16.0.11) -> management and migration
>> network -> engine sees, sets and saves 172.16.0.11
>> eth0.3 -> storagenetwork (172.16.1.11) -> gluster network -> engine
>> sees, sets and saves 172.16.1.5 (my floating IP)
>>
>> I hope this makes sense.
>>
>> P.S. Can anyone also confirm whether gluster supports multipathing by
>> default? If I'm using this keepalived method, am I bottlenecking
>> myself to one host?
>>
>> Thanks,
>> Andrew
>>
> Andrew - was this resolved, or are you still looking for more
> insight/assistance?
>
> thanks,
> Itamar
>