Hi Dan
I should have mentioned that we need to use the same subnet for both iSCSI
interfaces, which is why I ended up bonding these (mode 1). Looking at
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtuali...,
it doesn't say anything about tying the iSCSI Bond back to the host. In
our DEV environment I removed the bond the iSCSI interfaces were using and
created the iSCSI Bond as per that link. What do I do now? Recreate the
bond and give it an IP? I don't see where to assign an IP for iSCSI on
the hosts.
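For what it's worth, this is what I've been running on a host to check the
current state (standard commands, nothing RHEV-specific):

    ip addr show eno5           # confirm the NIC is out of the bond, no IP yet
    ip addr show eno6
    iscsiadm -m session -P 1    # active iSCSI sessions and the iface each uses
    multipath -ll               # both paths to each LUN should be listed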
Lastly, we're not using jumbo frames as we're a critical infrastructure
organisation and I fear possible side effects.
Thanks
On Sat, Jun 25, 2016 at 10:30 AM, Dan Yasny <dyasny(a)gmail.com> wrote:
Two things off the top of my head after skimming the given details:
1. iSCSI will work better without the bond. It already uses multipath, so
all you need is to separate the portal IPs/subnets and provide separate
IPs/subnets to the dedicated iSCSI NICs, as recommended here:
https://access.redhat.com/solutions/131153 and also be sure to follow
this:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtuali...
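To sketch what I mean (interface names, subnets and portal addresses below
are made up, substitute your own): the end state is one IP per NIC, each on
its own subnet, with an iSCSI iface bound to each NIC:

    # /etc/sysconfig/network-scripts/ifcfg-eno5  (example subnet 10.0.1.0/24)
    DEVICE=eno5
    BOOTPROTO=none
    IPADDR=10.0.1.11
    PREFIX=24
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eno6  (example subnet 10.0.2.0/24)
    DEVICE=eno6
    BOOTPROTO=none
    IPADDR=10.0.2.11
    PREFIX=24
    ONBOOT=yes

    # bind an iSCSI iface to each NIC, then discover each portal through it
    iscsiadm -m iface -I iscsi-eno5 --op=new
    iscsiadm -m iface -I iscsi-eno5 --op=update -n iface.net_ifacename -v eno5
    iscsiadm -m iface -I iscsi-eno6 --op=new
    iscsiadm -m iface -I iscsi-eno6 --op=update -n iface.net_ifacename -v eno6
    iscsiadm -m discovery -t sendtargets -p 10.0.1.100 -I iscsi-eno5
    iscsiadm -m discovery -t sendtargets -p 10.0.2.100 -I iscsi-eno6
    iscsiadm -m node -L all

On a RHEV host VDSM does the discovery/login itself once the storage domain
is connected, so treat the iscsiadm lines as what happens under the hood
rather than something to run by hand.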
2. You haven't mentioned anything about jumbo frames; are you using those?
If not, it is a very good idea to start.
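If you do enable them, it's just the MTU on the iSCSI NICs, and the switch
ports and SAN interfaces must be set to match or performance gets worse,
not better. Roughly:

    # add to each iSCSI ifcfg file, then bounce the interface
    MTU=9000

    # verify end to end: 8972-byte payload + 28 bytes of headers = 9000
    ping -M do -s 8972 <portal-IP>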
And 3: since this is RHEV, you might get much more help from the official
support than from this list.
Hope this helps
Dan
On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe <colin.coe(a)gmail.com> wrote:
> Hi all
>
> We run four RHEV datacenters, two PROD, one DEV and one TEST/Training.
> They are all working OK but I'd like a definitive answer on how I should
> be configuring the networking side as I'm pretty sure we're getting
> sub-optimal networking performance.
>
> All datacenters are housed in HP C7000 Blade enclosures. The PROD
> datacenters use HP 4730 iSCSI SAN clusters; each datacenter has a cluster
> of two 4730s, configured RAID5 internally with NRAID1. The DEV and TEST
> datacenters use P4500 iSCSI SANs; each datacenter has a cluster of three
> P4500s configured with RAID10 internally and NRAID5.
>
> The HP C7000s each have two Flex10/10D interconnect modules configured in
> a redundant ring so that we can upgrade the interconnects without dropping
> network connectivity to the infrastructure. We use fat RHEL-H 7.2
> hypervisors (HP BL460) and these are all configured with six network
> interfaces:
> - eno1 and eno2 are bond0 which is the rhevm interface
> - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond
> using 802.1q
> - eno5 and eno6 are bond2 and dedicated to iSCSI traffic (a sketch of
> that bond config is below)
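> For completeness, bond2 is built along these lines (mode 1; options from
> memory, so double-check against the actual files):
>
>     # /etc/sysconfig/network-scripts/ifcfg-bond2
>     DEVICE=bond2
>     BONDING_OPTS="mode=1 miimon=100"
>     BOOTPROTO=none
>     ONBOOT=yes
>
>     # /etc/sysconfig/network-scripts/ifcfg-eno5 (eno6 is identical)
>     DEVICE=eno5
>     MASTER=bond2
>     SLAVE=yes
>     ONBOOT=yes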
>
> Is this the "correct" way to do this? If not, what should I be doing
> instead?
>
> Thanks
>
> CC
>