[ovirt-users] oVirt/RHEV and HP Blades and HP iSCSI SAN
Fernando Frediani
fernando.frediani at upx.com.br
Sat Jun 25 00:45:11 EDT 2016
Hello Colin,
I know the equipment you have in your hands well, as I worked with it
for a long time. Great stuff, I can say.
Everything looks OK from what you describe, except the iSCSI network:
it should not be a bond, but two independent VLANs (and subnets) using
iSCSI multipath. A bond works, but it is not the recommended setup for
this scenario.
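Roughly, the idea is something like the sketch below. The VLAN IDs,
subnets and addresses are only placeholders, and in practice you would
create the two iSCSI logical networks in the RHEV-M / engine UI and let
VDSM write this configuration for you; if I remember correctly there is
also an "iSCSI Multipathing" sub-tab on the Data Center where you bind
those two networks to the storage targets.

    # /etc/sysconfig/network-scripts/ifcfg-eno5.100  (first iSCSI path, VLAN 100)
    DEVICE=eno5.100
    VLAN=yes
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.10.100.11
    PREFIX=24
    MTU=9000   # only if the switch ports and the SAN are also set for jumbo frames

    # /etc/sysconfig/network-scripts/ifcfg-eno6.101  (second iSCSI path, VLAN 101)
    DEVICE=eno6.101
    VLAN=yes
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=10.10.101.11
    PREFIX=24
    MTU=9000

    # Bind one open-iscsi interface to each path, then discover and log in.
    # The portal addresses are placeholders for your SAN cluster VIPs.
    iscsiadm -m iface -I iscsi_eno5 --op=new
    iscsiadm -m iface -I iscsi_eno5 --op=update -n iface.net_ifacename -v eno5.100
    iscsiadm -m iface -I iscsi_eno6 --op=new
    iscsiadm -m iface -I iscsi_eno6 --op=update -n iface.net_ifacename -v eno6.101
    iscsiadm -m discovery -t sendtargets -p 10.10.100.1 -I iscsi_eno5
    iscsiadm -m discovery -t sendtargets -p 10.10.101.1 -I iscsi_eno6
    iscsiadm -m node -L all

That way dm-multipath on the hypervisor sees one session per VLAN and
handles failover and load balancing, instead of everything hanging off
a single bond hashing decision.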
Fernando
On 24/06/2016 22:12, Colin Coe wrote:
> Hi all
>
> We run four RHEV datacenters: two PROD, one DEV and one
> TEST/Training. They are all working OK, but I'd like a definitive
> answer on how I should be configuring the networking side, as I'm
> pretty sure we're getting sub-optimal network performance.
>
> All datacenters are housed in HP C7000 blade enclosures. The PROD
> datacenters use HP 4730 iSCSI SAN clusters; each datacenter has a
> cluster of two 4730s, configured with RAID5 internally and NRAID1.
> The DEV and TEST datacenters use P4500 iSCSI SANs; each datacenter
> has a cluster of three P4500s, configured with RAID10 internally and
> NRAID5.
>
> The HP C7000 enclosures each have two Flex10/10D interconnect modules
> configured in a redundant ring, so that we can upgrade the interconnects
> without dropping network connectivity to the infrastructure. We use fat
> RHEL-H 7.2 hypervisors (HP BL460) and these are all configured with six
> network interfaces:
> - eno1 and eno2 form bond0, which is the rhevm interface
> - eno3 and eno4 form bond1, and all the VM VLANs are trunked over this
> bond using 802.1q
> - eno5 and eno6 form bond2, dedicated to iSCSI traffic
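For reference, on a RHEL-H 7.2 host that bond1 / 802.1q layout usually
materialises as something like the following. The VLAN ID, bridge name
and bonding mode here are placeholders, and VDSM normally generates
these files itself when the logical networks are attached to the bond
in the engine:

    # /etc/sysconfig/network-scripts/ifcfg-bond1
    DEVICE=bond1
    BONDING_OPTS="mode=active-backup miimon=100"
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-bond1.200  (one per trunked VM VLAN)
    DEVICE=bond1.200
    VLAN=yes
    ONBOOT=yes
    BRIDGE=vm_vlan200

    # /etc/sysconfig/network-scripts/ifcfg-vm_vlan200
    DEVICE=vm_vlan200
    TYPE=Bridge
    ONBOOT=yes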
>
> Is this the "correct" way to do this? If not, what should I be doing
> instead?
>
> Thanks
>
> CC
>
>