[ovirt-users] 1. Re: ??: bond mode balance-alb (Jorick Astrego)
Christopher Young
mexigabacho at gmail.com
Thu Jan 1 00:06:01 UTC 2015
I'm a little confused by your explanation of 'just do the bonding at the
guest level'. I apologize for my ignorance here, but I'm trying to prepare
myself for a similar configuration where I'm going to need to get as much
bandwidth out of the bond as possible. How would bonding multiple
interfaces at the VM level provide a better balance than at the hypervisor
level? Wouldn't the traffic more or less end up traveling the same path
regardless of the virtual interface?
I'm trying to plan out an oVirt implementation where I would like to bond
multiple interfaces on my hypervisor nodes for balancing/redundancy, and
I'm very curious what others have done with Cisco hardware (in my case, a
pair of 3650s with MEC) in order to get the best solution.
I will read through these threads and see if I can gain a better
understanding, but if you happen to have an easy explanation that would
help me understand, I would greatly appreciate it.
On Wed, Dec 31, 2014 at 1:01 AM, Blaster <blaster at 556nato.com> wrote:
>
> Thanks for your thoughts. The problem is, most of the data is transmitted
> from a couple apps to a couple systems. The chance of a hash collision
> (i.e., most of the data going out the same interface anyway) is quite
> high. On Solaris, I just created two physical interfaces each with their
> own IP, and bound the apps to the appropriate interfaces. This worked
> great. Imagine my surprise when I discovered this doesn’t work on Linux
> and my crash course on weak host models.
>
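> For illustration only, a minimal sketch of the usual Linux-side workaround for
> the weak host model: a per-interface routing table plus the arp_filter sysctl.
> Interface names and addresses below are placeholders, not taken from this
> thread:
>
>     # eth1 owns 192.0.2.11/24 (placeholder); give it its own routing table
>     ip route add 192.0.2.0/24 dev eth1 src 192.0.2.11 table 101
>     ip rule add from 192.0.2.11 table 101
>     # off-subnet traffic would also need a default route in table 101
>     # stop each NIC from answering ARP for the other NIC's address
>     sysctl -w net.ipv4.conf.all.arp_filter=1
>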
> Interesting that no one commented on my thought to just do the bonding at
> the guest level (and use balance-alb) instead of at the hypervisor level.
> Some ESXi experts I have talked to say this is actually the preferred
> method with ESXi and not to do it at the hypervisor level, as the VM knows
> better than VMware.
>
> Or is the bonding mode issue with balance-alb/tlb more with the Linux TCP
> stack itself and not with oVirt and VDSM?
>
>
>
> On Dec 30, 2014, at 4:34 AM, Nikolai Sednev <nsednev at redhat.com> wrote:
>
> Mode 2 will do the job best for you if only static LAG is supported on the
> switch's side. I'd advise using xmit_hash_policy layer3+4, so you'll get
> better traffic distribution for your DC.
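>
> For reference, a sketch of what that advice looks like as bonding options on
> the host (for example the BONDING_OPTS line of an ifcfg-bond0, or oVirt's
> custom bonding options field); the miimon value is just a common default:
>
>     mode=2 xmit_hash_policy=layer3+4 miimon=100
>
> with a matching static LAG / EtherChannel configured on the switch ports.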
>
>
> Thanks in advance.
>
> Best regards,
> Nikolai
> ____________________
> Nikolai Sednev
> Senior Quality Engineer at Compute team
> Red Hat Israel
> 34 Jerusalem Road,
> Ra'anana, Israel 43501
>
> Tel: +972 9 7692043
> Mobile: +972 52 7342734
> Email: nsednev at redhat.com
> IRC: nsednev
>
> ------------------------------
> *From: *users-request at ovirt.org
> *To: *users at ovirt.org
> *Sent: *Tuesday, December 30, 2014 2:12:58 AM
> *Subject: *Users Digest, Vol 39, Issue 173
>
> Send Users mailing list submissions to
> users at ovirt.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.ovirt.org/mailman/listinfo/users
> or, via email, send a message with subject or body 'help' to
> users-request at ovirt.org
>
> You can reach the person managing the list at
> users-owner at ovirt.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Users digest..."
>
>
> Today's Topics:
>
> 1. Re: ??: bond mode balance-alb (Jorick Astrego)
> 2. Re: ??: bond mode balance-alb (Jorick Astrego)
> 3. HostedEngine Deployment Woes (Mikola Rose)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 29 Dec 2014 20:13:40 +0100
> From: Jorick Astrego <j.astrego at netbulae.eu>
> To: users at ovirt.org
> Subject: Re: [ovirt-users] ??: bond mode balance-alb
> Message-ID: <54A1A7E4.90308 at netbulae.eu>
> Content-Type: text/plain; charset="utf-8"
>
>
> On 12/29/2014 12:56 AM, Dan Kenigsberg wrote:
> > On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
> >> On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
> >>> Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0
> >> Dan,
> >>
> >> What is bad about these modes that oVirt can't use them?
> > I can only quote jpirko's words from the link above:
> >
> > Do not use tlb or alb in bridge, never! It does not work, that's it. The
> > reason is it mangles source macs in xmit frames and arps. When it is
> > possible, just use mode 4 (lacp). That should be always possible because
> > all enterprise switches support that. Generally, for 99% of use cases,
> > you *should* use mode 4. There is no reason to use other modes.
> >
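> For reference, the mode 4 setup recommended above typically looks like the
> following on the Linux side, paired with an LACP port-channel on the switch
> (lacp_rate=fast is optional; miimon is a common default):
>
>     mode=4 lacp_rate=fast xmit_hash_policy=layer3+4 miimon=100
>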
> This switch is more of an office switch and only supports part of the
> 802.3ad standard:
>
>
> PowerConnect 2824
>
> Scalable from small workgroups to dense access solutions, the 2824
> offers 24-port flexibility plus two combo small form-factor
> pluggable (SFP) ports for connecting the switch to other networking
> equipment located beyond the 100 m distance limitations of copper
> cabling.
>
> Industry-standard link aggregation adhering to IEEE 802.3ad
> standards (static support only, LACP not supported)
>
>
> So the only way to have some kind of bonding without buying more
> expensive switches is to use balance-rr (mode=0), balance-xor (mode=2)
> or broadcast (mode=3).
> >> I just tested mode 4, and the LACP with Fedora 20 appears to not be
> >> compatible with the LAG mode on my Dell 2824.
> >>
> >> Would there be any issues with bringing two NICS into the VM and doing
> >> balance-alb at the guest level?
> >>
> Kind regards,
>
> Jorick Astrego
>
>
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> Netbulae Virtualization Experts
>
> ----------------
>
> Tel: 053 20 30 270    info at netbulae.eu    Staalsteden 4-3A        KvK 08198180
> Fax: 053 20 30 271    www.netbulae.eu       7547 TA Enschede        BTW NL821234584B01
>
> ----------------
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 29 Dec 2014 20:14:55 +0100
> From: Jorick Astrego <j.astrego at netbulae.eu>
> To: users at ovirt.org
> Subject: Re: [ovirt-users] ??: bond mode balance-alb
> Message-ID: <54A1A82F.1090100 at netbulae.eu>
> Content-Type: text/plain; charset="utf-8"
>
>
> On 12/29/2014 12:56 AM, Dan Kenigsberg wrote:
> > On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
> >> On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
> >>> Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0
> >>
> Sorry, no mode 0. So only mode 2 or 3 for your environment....
>
> Kind regards,
>
> Jorick
>
>
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> Netbulae Virtualization Experts
>
> ----------------
>
> Tel: 053 20 30 270    info at netbulae.eu    Staalsteden 4-3A        KvK 08198180
> Fax: 053 20 30 271    www.netbulae.eu       7547 TA Enschede        BTW NL821234584B01
>
> ----------------
>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 30 Dec 2014 00:12:52 +0000
> From: Mikola Rose <mrose at power-soft.com>
> To: "users at ovirt.org" <users at ovirt.org>
> Subject: [ovirt-users] HostedEngine Deployment Woes
> Message-ID: <F992C848-E4EB-468E-83F4-37646EDB3E62 at power-soft.com>
> Content-Type: text/plain; charset="us-ascii"
>
>
> Hi List Members;
>
> I have been struggling with deploying the oVirt hosted engine: I keep
> running into a timeout during the "Misc configuration" stage. Any suggestion
> on how I can troubleshoot this?
>
> Red Hat (kernel 2.6.32-504.3.3.el6.x86_64)
>
> Installed Packages
> ovirt-host-deploy.noarch                        1.2.5-1.el6ev     @rhel-6-server-rhevm-3.4-rpms
> ovirt-host-deploy-java.noarch                   1.2.5-1.el6ev     @rhel-6-server-rhevm-3.4-rpms
> ovirt-hosted-engine-ha.noarch                   1.1.6-3.el6ev     @rhel-6-server-rhevm-3.4-rpms
> ovirt-hosted-engine-setup.noarch                1.1.5-1.el6ev     @rhel-6-server-rhevm-3.4-rpms
> rhevm-setup-plugin-ovirt-engine.noarch          3.4.4-2.2.el6ev   @rhel-6-server-rhevm-3.4-rpms
> rhevm-setup-plugin-ovirt-engine-common.noarch   3.4.4-2.2.el6ev   @rhel-6-server-rhevm-3.4-rpms
>
>
> Please confirm installation settings (Yes, No)[No]: Yes
> [ INFO ] Stage: Transaction setup
> [ INFO ] Stage: Misc configuration
> [ INFO ] Stage: Package installation
> [ INFO ] Stage: Misc configuration
> [ INFO ] Configuring libvirt
> [ INFO ] Configuring VDSM
> [ INFO ] Starting vdsmd
> [ INFO ] Waiting for VDSM hardware info
> [ INFO ] Waiting for VDSM hardware info
> [ INFO ] Connecting Storage Domain
> [ INFO ] Connecting Storage Pool
> [ INFO ] Verifying sanlock lockspace initialization
> [ INFO ] sanlock lockspace already initialized
> [ INFO ] sanlock metadata already initialized
> [ INFO ] Creating VM Image
> [ INFO ] Disconnecting Storage Pool
> [ INFO ] Start monitoring domain
> [ ERROR ] Failed to execute stage 'Misc configuration': The read operation timed out
> [ INFO ] Stage: Clean up
> [ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
> [ INFO ] Stage: Pre-termination
> [ INFO ] Stage: Termination
>
>
>
> 2014-12-29 14:53:41 DEBUG otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace lockspace._misc:133 Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: /rhev/data-center/mnt/192.168.0.75:_Volumes_Raid1/8094d528-7aa2-4c28-839f-73d7c8bcfebb/ha_agent/hosted-engine.lockspace)
> 2014-12-29 14:53:41 INFO otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace lockspace._misc:144 sanlock lockspace already initialized
> 2014-12-29 14:53:41 INFO otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace lockspace._misc:157 sanlock metadata already initialized
> 2014-12-29 14:53:41 DEBUG otopi.context context._executeMethod:138 Stage misc METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.image.Plugin._misc
> 2014-12-29 14:53:41 INFO otopi.plugins.ovirt_hosted_engine_setup.vm.image image._misc:162 Creating VM Image
> 2014-12-29 14:53:41 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vm.image image._misc:163 createVolume
> 2014-12-29 14:53:42 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vm.image image._misc:184 Created volume d8e7eed4-c763-4b3d-8a71-35f2d692a73d, request was:
> - image: 9043e535-ea94-41f8-98df-6fdbfeb107c3
> - volume: e6a9291d-ac21-4a95-b43c-0d6e552baaa2
> 2014-12-29 14:53:42 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48 Waiting for existing tasks to complete
> 2014-12-29 14:53:43 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48 Waiting for existing tasks to complete
> 2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:138 Stage misc METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._misc
> 2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:144 condition False
> 2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:138 Stage misc METHOD otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._disconnect_pool
> 2014-12-29 14:53:43 INFO otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._disconnect_pool:971 Disconnecting Storage Pool
> 2014-12-29 14:53:43 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48 Waiting for existing tasks to complete
> 2014-12-29 14:53:43 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._spmStop:602 spmStop
> 2014-12-29 14:53:43 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._spmStop:611
> 2014-12-29 14:53:43 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._storagePoolConnection:573 disconnectStoragePool
> 2014-12-29 14:53:45 INFO otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._disconnect_pool:975 Start monitoring domain
> 2014-12-29 14:53:45 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._startMonitoringDomain:529 _startMonitoringDomain
> 2014-12-29 14:53:46 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._startMonitoringDomain:534 {'status': {'message': 'OK', 'code': 0}}
> 2014-12-29 14:53:51 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:127 Waiting for domain monitor
> 2014-12-29 14:54:51 DEBUG otopi.context context._executeMethod:152 method exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in _executeMethod
>     method['method']()
>   File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py", line 976, in _disconnect_pool
>     self._startMonitoringDomain()
>   File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py", line 539, in _startMonitoringDomain
>     waiter.wait(self.environment[ohostedcons.StorageEnv.SD_UUID])
>   File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_setup/tasks.py", line 128, in wait
>     response = serv.s.getVdsStats()
>   File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
>     return self.__send(self.__name, args)
>   File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
>     verbose=self.__verbose
>   File "/usr/lib64/python2.6/xmlrpclib.py", line 1237, in request
>     errcode, errmsg, headers = h.getreply()
>   File "/usr/lib64/python2.6/httplib.py", line 1064, in getreply
>     response = self._conn.getresponse()
>   File "/usr/lib64/python2.6/httplib.py", line 990, in getresponse
>     response.begin()
>   File "/usr/lib64/python2.6/httplib.py", line 391, in begin
>     version, status, reason = self._read_status()
>   File "/usr/lib64/python2.6/httplib.py", line 349, in _read_status
>     line = self.fp.readline()
>   File "/usr/lib64/python2.6/socket.py", line 433, in readline
>     data = recv(1)
>   File "/usr/lib64/python2.6/ssl.py", line 215, in recv
>     return self.read(buflen)
>   File "/usr/lib64/python2.6/ssl.py", line 136, in read
>     return self._sslobj.read(len)
> SSLError: The read operation timed out
>
>
>
>
>
> var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20141229145137-g8d2or.log
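>
> One way to narrow this down (a sketch, assuming the standard vdsm tooling is
> present on the host): check whether VDSM itself still answers the same stats
> call that timed out above, and whether the NFS export behind the hosted-engine
> domain responds promptly:
>
>     vdsClient -s 0 getVdsStats
>     time ls /rhev/data-center/mnt/192.168.0.75:_Volumes_Raid1/
>
> If either of those hangs, the timeout is coming from vdsm or from the NFS
> storage rather than from ovirt-hosted-engine-setup itself.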
>
> ------------------------------
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
> End of Users Digest, Vol 39, Issue 173
> **************************************
>