[ovirt-users] oVirt/RHEV and HP Blades and HP iSCSI SAN

Colin Coe colin.coe at gmail.com
Sun Jun 26 02:33:30 UTC 2016


Hi Dan

As this is production, critical infrastructure, a large downtime window is
not possible.  We have a hardware refresh coming up in about 12 months, so
I'll have to wait until then.

I recall asking GSS about this quite some time ago and not getting a
particularly helpful answer...

We use a combination of Cisco C4500-X (core/distribution) and C2960-X
(access) switches.  The SAN units connect to the C4500-X switches (32 x
10Gbps ports).

Thanks

On Sun, Jun 26, 2016 at 9:47 AM, Dan Yasny <dyasny at gmail.com> wrote:

>
>
> On Fri, Jun 24, 2016 at 11:05 PM, Colin Coe <colin.coe at gmail.com> wrote:
>
>> Hi Dan
>>
>> I should have mentioned that we need to use the same subnet for both
>> iSCSI interfaces, which is why I ended up bonding these (mode 1).
>>
>
> This is not best practice. Perhaps you should have asked these questions
> when planning? Right now, I'd start planning for a large downtime window in
> order to redo things right.
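
To make that concrete, here is a minimal sketch of the unbonded layout being
suggested, using the eno5/eno6 NIC names from the original post; the subnets
and addresses are purely illustrative, and on a RHEV host these settings would
normally be applied through the engine's Setup Host Networks dialog rather
than by editing files directly:

  # /etc/sysconfig/network-scripts/ifcfg-eno5
  # first iSCSI NIC, first storage subnet (address illustrative)
  DEVICE=eno5
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=10.10.5.11
  PREFIX=24

  # /etc/sysconfig/network-scripts/ifcfg-eno6
  # second iSCSI NIC, second storage subnet (address illustrative)
  DEVICE=eno6
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=10.10.6.11
  PREFIX=24

Each SAN portal then sits on one of the two subnets, and dm-multipath, rather
than bonding, provides the path failover.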
>
>
>> Looking at
>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing,
>> it doesn't say anything about tying the iSCSI Bond back to the host.  In
>> our DEV environment I removed the bond the iSCSI interfaces were using and
>> created the iSCSI Bond as per this link.  What do I do now?  Recreate the
>> bond and give it an IP?  I don't see where to assign an IP for iSCSI on
>> the hosts.
>>
>
> I don't have a setup in front of me to provide instructions, but you did
> mention you're using RHEV, so why not just call support?  They can remote
> in and help you, or you can send them some screenshots...
>
>
>>
>> Lastly, we're not using jumbo frames as we're a critical infrastructure
>> organisation and I fear possible side effects.
>>
>
> You have a dedicated iSCSI network, so I don't see the problem with
> setting it up the correct way, unless your switches have a single MTU
> setting for all ports, like the Cisco 2960s. There's a lot of performance
> to gain there, depending on the kind of IO your VMs are generating.
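
As a rough sketch of what enabling jumbo frames on the dedicated iSCSI NICs
could look like (the switch ports and the SAN interfaces must carry a matching
MTU, and the portal address 10.10.5.1 used in the check below is illustrative):

  # persistent: add to ifcfg-eno5 and ifcfg-eno6, then restart the interfaces
  MTU=9000

  # or change it on the fly for a quick test
  ip link set dev eno5 mtu 9000
  ip link set dev eno6 mtu 9000

  # verify the path end to end without fragmentation:
  # 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
  ping -M do -s 8972 -c 3 -I eno5 10.10.5.1

If that ping fails while a normal-sized ping works, something in the path is
still at MTU 1500.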
>
>
>>
>> Thanks
>>
>> On Sat, Jun 25, 2016 at 10:30 AM, Dan Yasny <dyasny at gmail.com> wrote:
>>
>>> Two things off the top of my head after skimming the given details:
>>> 1. iSCSI will work better without the bond. It already uses multipath,
>>> so all you need is to separate the portal IPs/subnets and provide separate
>>> IPs/subnets to the iSCSI dedicated NICs, as recommended here:
>>> https://access.redhat.com/solutions/131153 and also be sure to follow
>>> this:
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing
>>> 2. You haven't mentioned anything about jumbo frames. Are you using
>>> those? If not, it is a very good idea to start.
>>>
>>> And 3: since this is RHEV, you might get much more help from the
>>> official support than from this list.
>>>
>>> Hope this helps
>>> Dan
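
The separation Dan describes in point 1 can be made concrete with a rough
sketch: one iscsiadm interface bound to each dedicated NIC, using the unbonded
eno5/eno6 addressing sketched earlier. The iface names and the portal address
10.10.5.1 are illustrative, and on a RHEV host VDSM normally drives iscsiadm
itself when the storage domain is attached, so this only illustrates what
per-NIC binding means:

  # create one iSCSI iface per NIC and bind it to the device
  iscsiadm -m iface -I iscsi_eno5 --op new
  iscsiadm -m iface -I iscsi_eno5 --op update -n iface.net_ifacename -v eno5
  iscsiadm -m iface -I iscsi_eno6 --op new
  iscsiadm -m iface -I iscsi_eno6 --op update -n iface.net_ifacename -v eno6

  # discover the targets through both interfaces and log in
  iscsiadm -m discovery -t sendtargets -p 10.10.5.1:3260 -I iscsi_eno5 -I iscsi_eno6
  iscsiadm -m node --login

  # each LUN should now show two active paths
  multipath -ll

The iSCSI Bond described in the second documentation link then groups the
storage logical networks at the Data Center level; as I understand it, no IP
is assigned to that bond object itself, only to the host's logical networks.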
>>>
>>> On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe <colin.coe at gmail.com> wrote:
>>>
>>>> Hi all
>>>>
>>>> We run four RHEV datacenters: two PROD, one DEV and one TEST/Training.
>>>> They are all working OK, but I'd like a definitive answer on how I
>>>> should be configuring the networking side, as I'm pretty sure we're
>>>> getting sub-optimal network performance.
>>>>
>>>> All datacenters are housed in HP C7000 blade enclosures.  The PROD
>>>> datacenters use HP 4730 iSCSI SAN clusters; each datacenter has a
>>>> cluster of two 4730s, configured RAID5 internally with NRAID1.  The DEV
>>>> and TEST datacenters use P4500 iSCSI SANs; each datacenter has a
>>>> cluster of three P4500s, configured RAID10 internally with NRAID5.
>>>>
>>>> The HP C7000 enclosures each have two Flex10/10D interconnect modules
>>>> configured in a redundant ring so that we can upgrade the interconnects
>>>> without dropping network connectivity to the infrastructure.  We use
>>>> fat RHEL-H 7.2 hypervisors (HP BL460) and these are all configured with
>>>> six network interfaces:
>>>> - eno1 and eno2 are bond0, which is the rhevm interface
>>>> - eno3 and eno4 are bond1, over which all the VM VLANs are trunked
>>>> using 802.1q (see the sketch after this list)
>>>> - eno5 and eno6 are bond2, dedicated to iSCSI traffic
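
Roughly what that layout looks like in RHEL 7 ifcfg terms, taking bond1 as the
example. The bond mode, VLAN ID and bridge name below are illustrative
assumptions (the thread does not state them), and on a RHEV host VDSM writes
these files itself when logical networks are attached via Setup Host Networks:

  # /etc/sysconfig/network-scripts/ifcfg-bond1 -- VM traffic trunk
  DEVICE=bond1
  TYPE=Bond
  BONDING_MASTER=yes
  # mode is an assumption; use whatever matches the Flex10 uplink setup
  BONDING_OPTS="mode=active-backup miimon=100"
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eno3
  # eno4 is identical apart from DEVICE
  DEVICE=eno3
  MASTER=bond1
  SLAVE=yes
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-bond1.100
  # one tagged VM network; VLAN ID 100 is illustrative
  DEVICE=bond1.100
  VLAN=yes
  # RHEV creates a bridge per VM logical network; the name is illustrative
  BRIDGE=vm_vlan100
  ONBOOT=yes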
>>>>
>>>> Is this the "correct" way to do this?  If not, what should I be doing
>>>> instead?
>>>>
>>>> Thanks
>>>>
>>>> CC
>>>>
>>>>
>>>>
>>>
>>
>