<div dir="ltr">Hi Fernando<div><br></div><div>The network is pretty much cast in stone now. I even if I could change it, I'd be relucant todo so as the firewall/router has 1Gb interfaces but the iSCSI SANs and blade servers are all 10Gb. Having these in different subnets will create a 1Gb bottle neck.</div><div><br></div><div>Thanks</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Jun 25, 2016 at 12:45 PM, Fernando Frediani <span dir="ltr"><<a href="mailto:fernando.frediani@upx.com.br" target="_blank">fernando.frediani@upx.com.br</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<p>Hello Colin,</p>

I know the equipment you have well, as I worked with it for a long time. Great stuff, I can say.

Everything seems OK from what you describe, except the iSCSI network, which should not be a bond but two independent VLANs (and subnets) using iSCSI multipath. A bond works, but it's not the recommended setup for these scenarios.
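
For illustration, a minimal sketch of what that might look like on a RHEL host, assuming eno5/eno6 carry the iSCSI VLANs; the iface names and portal address below are placeholders, not your real values:

  # bind one open-iscsi iface to each physical NIC, each NIC in its own VLAN/subnet
  iscsiadm -m iface -I iscsi0 --op=new
  iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v eno5
  iscsiadm -m iface -I iscsi1 --op=new
  iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v eno6

  # discover and log in to the targets through both ifaces
  iscsiadm -m discovery -t sendtargets -p 10.0.10.10 -I iscsi0 -I iscsi1
  iscsiadm -m node --login

  # each LUN should then show two paths under dm-multipath
  multipath -ll

That way each path stays on its own subnet and dm-multipath does the failover and load balancing rather than the bond.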

Fernando

On 24/06/2016 22:12, Colin Coe wrote:
<div dir="ltr">Hi all
<div><br>
</div>
<div>We run four RHEV datacenters, two PROD, one DEV and one
TEST/Training. They are all working OK but I'd like a
definitive answer on how I should be configuring the
networking side as I'm pretty sure we're getting sub-optimal
networking performance.</div>

All datacenters are housed in HP C7000 blade enclosures. The PROD datacenters use HP 4730 iSCSI SAN clusters; each datacenter has a cluster of two 4730s, configured RAID5 internally with NRAID1. The DEV and TEST datacenters use P4500 iSCSI SANs; each datacenter has a cluster of three P4500s, configured RAID10 internally with NRAID5.

Each HP C7000 has two Flex10/10D interconnect modules configured in a redundant ring so that we can upgrade the interconnects without dropping network connectivity to the infrastructure. We use fat RHEL-H 7.2 hypervisors (HP BL460), each configured with six network interfaces (sketched below):
- eno1 and eno2 form bond0, which carries the rhevm interface
- eno3 and eno4 form bond1, over which all the VM VLANs are trunked using 802.1q
- eno5 and eno6 form bond2, dedicated to iSCSI traffic
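
To make that concrete, here is roughly what the VM-traffic side (bond1) looks like in ifcfg terms. This is only a hand-written sketch; in practice RHEV/VDSM writes these files when the logical networks are attached, and the bonding mode, VLAN ID 100 and bridge name vmnet100 below are illustrative, not our exact values:

  # /etc/sysconfig/network-scripts/ifcfg-bond1  (illustrative)
  DEVICE=bond1
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=802.3ad miimon=100"
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eno3  (eno4 is the same apart from DEVICE)
  DEVICE=eno3
  MASTER=bond1
  SLAVE=yes
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-bond1.100  (one 802.1q tagged VLAN per VM network)
  DEVICE=bond1.100
  VLAN=yes
  BRIDGE=vmnet100
  BOOTPROTO=none
  ONBOOT=yes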

Is this the "correct" way to do this? If not, what should I be doing instead?

Thanks

CC

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users