[ovirt-users] oVirt/RHEV and HP Blades and HP iSCSI SAN
Dan Yasny
dyasny at gmail.com
Fri Jun 24 22:30:49 EDT 2016
A few things off the top of my head after skimming the details you've given:
1. iSCSI will work better without the bond. It already uses multipath, so
all you need to do is put the portals on separate IPs/subnets and give the
dedicated iSCSI NICs their own IPs in the matching subnets, as recommended
here: https://access.redhat.com/solutions/131153 (see the first sketch
below). Also be sure to follow this:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing
2. You haven't mentioned jumbo frames; are you using them? If not, it is a
very good idea to start (see the second sketch below).
3. Since this is RHEV, you might get much more help from official Red Hat
support than from this list.
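
For point 1, here is a rough sketch of what I mean, using your eno5/eno6
names but made-up subnets and portal IPs (10.0.1.x and 10.0.2.x), so adjust
to your environment. Give each iSCSI NIC a plain, unbonded IP in its own
subnet:

    # /etc/sysconfig/network-scripts/ifcfg-eno5
    DEVICE=eno5
    BOOTPROTO=static
    IPADDR=10.0.1.11
    NETMASK=255.255.255.0
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eno6
    DEVICE=eno6
    BOOTPROTO=static
    IPADDR=10.0.2.11
    NETMASK=255.255.255.0
    ONBOOT=yes

RHEV drives the actual iSCSI logins through VDSM once you configure the
iSCSI multipathing bond per the second link above, but the manual
equivalent looks roughly like this (one iscsiadm iface per NIC, so each
portal is reached over its own path):

    iscsiadm -m iface -I iscsi-eno5 --op=new
    iscsiadm -m iface -I iscsi-eno5 --op=update -n iface.net_ifacename -v eno5
    iscsiadm -m iface -I iscsi-eno6 --op=new
    iscsiadm -m iface -I iscsi-eno6 --op=update -n iface.net_ifacename -v eno6
    iscsiadm -m discovery -t st -p 10.0.1.100 -I iscsi-eno5 -I iscsi-eno6
    iscsiadm -m node --login

Afterwards 'multipath -ll' on the host should show two paths per LUN; if
you only see one, the second subnet is not reaching the SAN.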
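
And for point 2, a minimal jumbo frames sketch, again assuming eno5/eno6
and the made-up portal IP from above. The MTU has to match on every hop
(blade NICs, Flex10 interconnects, any upstream switch ports, and the
4730/P4500 interfaces); a mismatch somewhere in the path behaves worse
than standard frames:

    # add to ifcfg-eno5 and ifcfg-eno6, then restart the interfaces
    MTU=9000

    # verify end to end: 8972 = 9000 minus 20 (IP header) minus 8 (ICMP header)
    ping -M do -s 8972 -c 3 10.0.1.100

If the ping fails with a "message too long" error, something in the path
is still at 1500.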
Hope this helps
Dan
On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe <colin.coe at gmail.com> wrote:
> Hi all
>
> We run four RHEV datacenters: two PROD, one DEV and one TEST/Training.
> They are all working OK, but I'd like a definitive answer on how I should
> be configuring the networking side, as I'm pretty sure we're getting
> sub-optimal network performance.
>
> All datacenters are housed in HP C7000 blade enclosures. The PROD
> datacenters use HP 4730 iSCSI SAN clusters; each datacenter has a cluster
> of two 4730s, configured RAID5 internally with NRAID1. The DEV and TEST
> datacenters use P4500 iSCSI SANs; each datacenter has a cluster of three
> P4500s, configured RAID10 internally with NRAID5.
>
> The HP C7000s each have two Flex10/10D interconnect modules configured in
> a redundant ring so that we can upgrade the interconnects without dropping
> network connectivity to the infrastructure. We use fat RHEL-H 7.2
> hypervisors (HP BL460), all configured with six network interfaces:
> - eno1 and eno2 are bond0, which is the rhevm interface
> - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond
> using 802.1q
> - eno5 and eno6 are bond2 and dedicated to iSCSI traffic
>
> Is this the "correct" way to do this? If not, what should I be doing
> instead?
>
> Thanks
>
> CC
>