<div dir="ltr">Hi Dan<div><br></div><div>As this is production, critical infrastructure, a large downtime window is not possible. We have a hardware refresh coming up in about 12 months, so I'll have to wait until then. </div><div><br></div><div>I recall asking this of GSS quite some time ago and not really getting a very helpful answer.</div><div><br></div><div>We use a combination of Cisco C4500-X (core/distribution) and C2960-X (access) switches. The SAN units connect into the C4500-X switches (32 x 10Gbps ports).</div><div><br></div><div>Thanks</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Jun 26, 2016 at 9:47 AM, Dan Yasny <span dir="ltr"><<a href="mailto:dyasny@gmail.com" target="_blank">dyasny@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Fri, Jun 24, 2016 at 11:05 PM, Colin Coe <span dir="ltr"><<a href="mailto:colin.coe@gmail.com" target="_blank">colin.coe@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Dan<div><br></div><div>I should have mentioned that we need to use the same subnet for both iSCSI interfaces, which is why I ended up bonding (mode 1) these. </div></div></blockquote><div><br></div></span><div>This is not best practice. Perhaps you should have asked these questions when planning? Right now, I'd start planning for a large downtime window in order to redo things right. 
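For illustration only, the no-bond layout the linked Red Hat material recommends looks roughly like the sketch below. All interface names, addresses, and the portal IP are invented; real values have to come from your own environment and SAN:

```shell
# Hypothetical sketch of the recommended no-bond iSCSI layout:
# each dedicated NIC gets its own IP on its own subnet, and
# dm-multipath handles path failover instead of a bond.

# /etc/sysconfig/network-scripts/ifcfg-eno5  (path A)
#   DEVICE=eno5
#   BOOTPROTO=static
#   IPADDR=10.10.1.11
#   NETMASK=255.255.255.0
#   MTU=9000        # only if jumbo frames are enabled end-to-end

# /etc/sysconfig/network-scripts/ifcfg-eno6  (path B)
#   DEVICE=eno6
#   BOOTPROTO=static
#   IPADDR=10.10.2.11
#   NETMASK=255.255.255.0
#   MTU=9000

# Bind an open-iscsi iface to each NIC so every LUN is reached
# once per physical path:
iscsiadm -m iface -I iscsi-eno5 --op=new
iscsiadm -m iface -I iscsi-eno5 --op=update -n iface.net_ifacename -v eno5
iscsiadm -m iface -I iscsi-eno6 --op=new
iscsiadm -m iface -I iscsi-eno6 --op=update -n iface.net_ifacename -v eno6

# Discover and log in through both ifaces (portal IP is made up):
iscsiadm -m discovery -t sendtargets -p 10.10.1.100 -I iscsi-eno5 -I iscsi-eno6 -l

# Verify that each LUN shows two active paths under dm-multipath:
multipath -ll
```

This is a host-side configuration sketch, not a RHEV procedure; in RHEV the equivalent is done through the iSCSI Bond dialog and the host network setup, as in the Administration Guide link in this thread.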
</div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Looking at <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing" style="font-size:12.8px" target="_blank">https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing</a>, it doesn't say anything about tying the iSCSI Bond back to the host. In our DEV environment, I removed the bond the iSCSI interfaces were using and created the iSCSI Bond as per this link. What do I do now? Recreate the bond and give it an IP? I don't see where to put an IP for iSCSI against the hosts.</div></div></blockquote><div><br></div></span><div>I don't have a setup in front of me to provide instructions, but you did mention you're using RHEV. Why not just call support? They can remote in and help you, or you can send some screenshots...</div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>Lastly, we're not using jumbo frames as we're a critical infrastructure organisation and I fear possible side effects.</div></div></blockquote><div><br></div></span><div>You have a dedicated iSCSI network, so I don't see the problem with setting it up the correct way, unless your switches have a single MTU setting for all ports, like the Cisco 2960s. 
There's a lot of performance to gain there, depending on the kind of IO your VMs are generating.</div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>Thanks</div></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Jun 25, 2016 at 10:30 AM, Dan Yasny <span dir="ltr"><<a href="mailto:dyasny@gmail.com" target="_blank">dyasny@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">Two things off the top of my head after skimming the given details:<div>1. iSCSI will work better without the bond. It already uses multipath, so all you need is to separate the portal IPs/subnets and provide separate IPs/subnets to the dedicated iSCSI NICs, as recommended here: <a href="https://access.redhat.com/solutions/131153" target="_blank">https://access.redhat.com/solutions/131153</a>, and also be sure to follow this: <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing" target="_blank">https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing</a></div><div>2. You haven't mentioned anything about jumbo frames; are you using those? 
If not, it is a very good idea to start.</div><div><br></div><div>And 3: since this is RHEV, you might get much more help from the official support than from this list.</div><div><br></div><div>Hope this helps</div><div>Dan</div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div>On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe <span dir="ltr"><<a href="mailto:colin.coe@gmail.com" target="_blank">colin.coe@gmail.com</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div><div><div dir="ltr">Hi all<div><br></div><div>We run four RHEV datacenters, two PROD, one DEV and one TEST/Training. They are all working OK, but I'd like a definitive answer on how I should be configuring the networking side, as I'm pretty sure we're getting sub-optimal networking performance.</div><div><br></div><div>All datacenters are housed in HP C7000 Blade enclosures. The PROD datacenters use HP 4730 iSCSI SAN clusters; each datacenter has a cluster of two 4730s. These are configured RAID5 internally with NRAID1. The DEV and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a cluster of three P4500s configured with RAID10 internally and NRAID5.</div><div><br></div><div>The HP C7000s each have two Flex10/10D interconnect modules configured in a redundant ring so that we can upgrade the interconnects without dropping network connectivity to the infrastructure. We use fat RHEL-H 7.2 hypervisors (HP BL460) and these are all configured with six network interfaces:</div><div>- eno1 and eno2 are bond0, which is the rhevm interface</div><div>- eno3 and eno4 are bond1, and all the VM VLANs are trunked over this bond using 802.1q</div><div>- eno5 and eno6 are bond2, dedicated to iSCSI traffic</div><div><br></div><div>Is this the "correct" way to do this? 
If not, what should I be doing instead?</div><div><br></div><div>Thanks</div><span><font color="#888888"><div><br></div><div>CC</div></font></span></div>
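As a small aside on the jumbo-frames question raised elsewhere in this thread: if an MTU of 9000 is ever enabled on the iSCSI network, the end-to-end path can be verified with a don't-fragment ping whose payload exactly fills a frame. The payload size is simply the MTU minus the IPv4 and ICMP header overhead; this is generic arithmetic, not specific to any vendor's gear:

```python
# Largest ICMP echo payload that fits in a single frame at a given
# MTU: subtract the IPv4 header (20 bytes) and ICMP header (8 bytes).
IPV4_HEADER = 20
ICMP_HEADER = 8

def ping_payload(mtu: int) -> int:
    """Payload size for 'ping -M do -s <N>' that fills one frame."""
    return mtu - IPV4_HEADER - ICMP_HEADER

print(ping_payload(1500))  # 1472, the familiar standard-MTU figure
print(ping_payload(9000))  # 8972 for jumbo frames
```

So on a Linux host, `ping -M do -s 8972 <portal-ip>` succeeds only if every hop between host and SAN really supports MTU 9000; anything in the path still at 1500 makes the don't-fragment ping fail.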
<br></div></div>_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
<br></blockquote></div><br></div>
</blockquote></div><br></div>
</div></div></blockquote></span></div><br></div></div>
</blockquote></div><br></div>