oVirt/RHEV and HP Blades and HP iSCSI SAN

Hi all We run four RHEV datacenters, two PROD, one DEV and one TEST/Training. They are all working OK but I'd like a definitive answer on how I should be configuring the networking side as I'm pretty sure we're getting sub-optimal networking performance. All datacenters are housed in HP C7000 Blade enclosures. The PROD datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster of two 4730s. These are configured RAID5 internally with NRAID1. The DEV and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a cluster of three P4500s configured with RAID10 internally and NRAID5. The HP C7000 each have two Flex10/10D interconnect modules configured in a redundant ring so that we can upgrade the interconnects without dropping network connectivity to the infrastructure. We use fat RHEL-H 7.2 hypervisors (HP BL460) and these are all configured with six network interfaces: - eno1 and eno2 are bond0 which is the rhevm interface - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond using 802.1q - eno5 and eno6 are bond2 and dedicated to iSCSI traffic Is this the "correct" way to do this? If not, what should I be doing instead? Thanks CC

Two things off the top of my head after skimming the given details: 1. iSCSI will work better without the bond. It already uses multipath, so all you need is to separate the portal IPs/subnets and provide separate IPs/subnets to the iSCSI dedicated NICs, as is the recommended way here: https://access.redhat.com/solutions/131153 and also be sure to follow this: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat... 2. You haven't mentioned anything about jumbo frames, are you using those? If not, it is a very good idea to start. And 3: since this is RHEV, you might get much more help from the official support than from this list. Hope this helps Dan On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe <colin.coe@gmail.com> wrote:
Hi all
We run four RHEV datacenters, two PROD, one DEV and one TEST/Training. They are all working OK but I'd like a definitive answer on how I should be configuring the networking side as I'm pretty sure we're getting sub-optimal networking performance.
All datacenters are housed in HP C7000 Blade enclosures. The PROD datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster of two 4730s. These are configured RAID5 internally with NRAID1. The DEV and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a cluster of three P4500s configured with RAID10 internally and NRAID5.
The HP C7000 each have two Flex10/10D interconnect modules configured in a redundant ring so that we can upgrade the interconnects without dropping network connectivity to the infrastructure. We use fat RHEL-H 7.2 hypervisors (HP BL460) and these are all configured with six network interfaces: - eno1 and eno2 are bond0 which is the rhevm interface - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond using 802.1q - eno5 and eno6 are bond2 and dedicated to iSCSI traffic
Is this the "correct" way to do this? If not, what should I be doing instead?
Thanks
CC
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi Dan I should have mentioned that we need to use the same subnet for both iSCSI interfaces which is why I ended up bonding (mode 1) these. Looking at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat..., it doesn't say anything about tying the iSCSI Bond back to the host. In our DEV environment I removed the bond the iSCSI interfaces were using and created the iSCSI Bond as per this link. What do I do now? Recreate the bond and give it an IP? I don't see where to put an IP for iSCSI against the hosts? Lastly, not using jumbo frames as where a critical infrastructure organisation and I fear possible side effects. Thanks On Sat, Jun 25, 2016 at 10:30 AM, Dan Yasny <dyasny@gmail.com> wrote:
Two things off the top of my head after skimming the given details: 1. iSCSI will work better without the bond. It already uses multipath, so all you need is to separate the portal IPs/subnets and provide separate IPs/subnets to the iSCSI dedicated NICs, as is the recommended way here: https://access.redhat.com/solutions/131153 and also be sure to follow this: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat... 2. You haven't mentioned anything about jumbo frames, are you using those? If not, it is a very good idea to start.
And 3: since this is RHEV, you might get much more help from the official support than from this list.
Hope this helps Dan
On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe <colin.coe@gmail.com> wrote:
Hi all
We run four RHEV datacenters, two PROD, one DEV and one TEST/Training. They are all working OK but I'd like a definitive answer on how I should be configuring the networking side as I'm pretty sure we're getting sub-optimal networking performance.
All datacenters are housed in HP C7000 Blade enclosures. The PROD datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster of two 4730s. These are configured RAID5 internally with NRAID1. The DEV and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a cluster of three P4500s configured with RAID10 internally and NRAID5.
The HP C7000 each have two Flex10/10D interconnect modules configured in a redundant ring so that we can upgrade the interconnects without dropping network connectivity to the infrastructure. We use fat RHEL-H 7.2 hypervisors (HP BL460) and these are all configured with six network interfaces: - eno1 and eno2 are bond0 which is the rhevm interface - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond using 802.1q - eno5 and eno6 are bond2 and dedicated to iSCSI traffic
Is this the "correct" way to do this? If not, what should I be doing instead?
Thanks
CC
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On Fri, Jun 24, 2016 at 11:05 PM, Colin Coe <colin.coe@gmail.com> wrote:
Hi Dan
I should have mentioned that we need to use the same subnet for both iSCSI interfaces which is why I ended up bonding (mode 1) these.
This is not best practice. Perhaps you should have asked these questions when planning? Right now, I'd start planning for a large downtime window in order to redo things right.
Looking at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat..., it doesn't say anything about tying the iSCSI Bond back to the host. In our DEV environment I removed the bond the iSCSI interfaces were using and created the iSCSI Bond as per this link. What do I do now? Recreate the bond and give it an IP? I don't see where to put an IP for iSCSI against the hosts?
I don't have a setup in front of me to provide instructions, but you did mention you're using RHEV, why not just call support, they can just remote in and help you, or send some screenshots...
Lastly, not using jumbo frames as where a critical infrastructure organisation and I fear possible side effects.
You have an iSCSI dedicated network, I don't see the problem setting up a dedicated network the correct way, unless your switches have a single MTU setting for all ports, like the cisco 2960's. There's a lot of performance to gain there, depending on the kind of IO your VMs of generating.
Thanks
On Sat, Jun 25, 2016 at 10:30 AM, Dan Yasny <dyasny@gmail.com> wrote:
Two things off the top of my head after skimming the given details: 1. iSCSI will work better without the bond. It already uses multipath, so all you need is to separate the portal IPs/subnets and provide separate IPs/subnets to the iSCSI dedicated NICs, as is the recommended way here: https://access.redhat.com/solutions/131153 and also be sure to follow this: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat... 2. You haven't mentioned anything about jumbo frames, are you using those? If not, it is a very good idea to start.
And 3: since this is RHEV, you might get much more help from the official support than from this list.
Hope this helps Dan
On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe <colin.coe@gmail.com> wrote:
Hi all
We run four RHEV datacenters, two PROD, one DEV and one TEST/Training. They are all working OK but I'd like a definitive answer on how I should be configuring the networking side as I'm pretty sure we're getting sub-optimal networking performance.
All datacenters are housed in HP C7000 Blade enclosures. The PROD datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster of two 4730s. These are configured RAID5 internally with NRAID1. The DEV and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a cluster of three P4500s configured with RAID10 internally and NRAID5.
The HP C7000 each have two Flex10/10D interconnect modules configured in a redundant ring so that we can upgrade the interconnects without dropping network connectivity to the infrastructure. We use fat RHEL-H 7.2 hypervisors (HP BL460) and these are all configured with six network interfaces: - eno1 and eno2 are bond0 which is the rhevm interface - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond using 802.1q - eno5 and eno6 are bond2 and dedicated to iSCSI traffic
Is this the "correct" way to do this? If not, what should I be doing instead?
Thanks
CC
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

HI Dan As this is production, critical infrastructure large downtime is not possible. We have a hardware refresh coming up in about 12 months so I'll have to wait until then. I recall asking this of GSS quite some time ago and not really getting too helpful an answer.... We use a combination of Cisco C4500-X (core/distribution) and C2960-X (access) switches. The SAN units connect into the C4500-X switches (32 x 10Gbps ports). Thanks On Sun, Jun 26, 2016 at 9:47 AM, Dan Yasny <dyasny@gmail.com> wrote:
On Fri, Jun 24, 2016 at 11:05 PM, Colin Coe <colin.coe@gmail.com> wrote:
Hi Dan
I should have mentioned that we need to use the same subnet for both iSCSI interfaces which is why I ended up bonding (mode 1) these.
This is not best practice. Perhaps you should have asked these questions when planning? Right now, I'd start planning for a large downtime window in order to redo things right.
Looking at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat..., it doesn't say anything about tying the iSCSI Bond back to the host. In our DEV environment I removed the bond the iSCSI interfaces were using and created the iSCSI Bond as per this link. What do I do now? Recreate the bond and give it an IP? I don't see where to put an IP for iSCSI against the hosts?
I don't have a setup in front of me to provide instructions, but you did mention you're using RHEV, why not just call support, they can just remote in and help you, or send some screenshots...
Lastly, not using jumbo frames as where a critical infrastructure organisation and I fear possible side effects.
You have an iSCSI dedicated network, I don't see the problem setting up a dedicated network the correct way, unless your switches have a single MTU setting for all ports, like the cisco 2960's. There's a lot of performance to gain there, depending on the kind of IO your VMs of generating.
Thanks
On Sat, Jun 25, 2016 at 10:30 AM, Dan Yasny <dyasny@gmail.com> wrote:
Two things off the top of my head after skimming the given details: 1. iSCSI will work better without the bond. It already uses multipath, so all you need is to separate the portal IPs/subnets and provide separate IPs/subnets to the iSCSI dedicated NICs, as is the recommended way here: https://access.redhat.com/solutions/131153 and also be sure to follow this: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat... 2. You haven't mentioned anything about jumbo frames, are you using those? If not, it is a very good idea to start.
And 3: since this is RHEV, you might get much more help from the official support than from this list.
Hope this helps Dan
On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe <colin.coe@gmail.com> wrote:
Hi all
We run four RHEV datacenters, two PROD, one DEV and one TEST/Training. They are all working OK but I'd like a definitive answer on how I should be configuring the networking side as I'm pretty sure we're getting sub-optimal networking performance.
All datacenters are housed in HP C7000 Blade enclosures. The PROD datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster of two 4730s. These are configured RAID5 internally with NRAID1. The DEV and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a cluster of three P4500s configured with RAID10 internally and NRAID5.
The HP C7000 each have two Flex10/10D interconnect modules configured in a redundant ring so that we can upgrade the interconnects without dropping network connectivity to the infrastructure. We use fat RHEL-H 7.2 hypervisors (HP BL460) and these are all configured with six network interfaces: - eno1 and eno2 are bond0 which is the rhevm interface - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond using 802.1q - eno5 and eno6 are bond2 and dedicated to iSCSI traffic
Is this the "correct" way to do this? If not, what should I be doing instead?
Thanks
CC
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

This is a multi-part message in MIME format. --------------0B32486AFDADAE2F4CB49F09 Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit Hello Colin, I know well all the equipment you have in your hands as I used to work with these during a long time. Great stuff I can say. All seems Ok from what you describe, except the iSCSI network which should not be a bond, but two independent vlans (and subnets) using iSCSI multipath. Bond works, but it's not the recommended setup for these scenarios. Fernando On 24/06/2016 22:12, Colin Coe wrote:
Hi all
We run four RHEV datacenters, two PROD, one DEV and one TEST/Training. They are all working OK but I'd like a definitive answer on how I should be configuring the networking side as I'm pretty sure we're getting sub-optimal networking performance.
All datacenters are housed in HP C7000 Blade enclosures. The PROD datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster of two 4730s. These are configured RAID5 internally with NRAID1. The DEV and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a cluster of three P4500s configured with RAID10 internally and NRAID5.
The HP C7000 each have two Flex10/10D interconnect modules configured in a redundant ring so that we can upgrade the interconnects without dropping network connectivity to the infrastructure. We use fat RHEL-H 7.2 hypervisors (HP BL460) and these are all configured with six network interfaces: - eno1 and eno2 are bond0 which is the rhevm interface - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond using 802.1q - eno5 and eno6 are bond2 and dedicated to iSCSI traffic
Is this the "correct" way to do this? If not, what should I be doing instead?
Thanks
CC
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
--------------0B32486AFDADAE2F4CB49F09 Content-Type: text/html; charset=windows-1252 Content-Transfer-Encoding: 8bit <html> <head> <meta content="text/html; charset=windows-1252" http-equiv="Content-Type"> </head> <body bgcolor="#FFFFFF" text="#000000"> <p>Hello Colin,</p> <p>I know well all the equipment you have in your hands as I used to work with these during a long time. Great stuff I can say.</p> <p>All seems Ok from what you describe, except the iSCSI network which should not be a bond, but two independent vlans (and subnets) using iSCSI multipath. Bond works, but it's not the recommended setup for these scenarios.</p> <p>Fernando<br> </p> <div class="moz-cite-prefix">On 24/06/2016 22:12, Colin Coe wrote:<br> </div> <blockquote cite="mid:CANvHAxT8jgJVLQnbMuX==pYtRo88ut-a7TBf47-1Jgv4uJ1yiQ@mail.gmail.com" type="cite"> <div dir="ltr">Hi all <div><br> </div> <div>We run four RHEV datacenters, two PROD, one DEV and one TEST/Training. They are all working OK but I'd like a definitive answer on how I should be configuring the networking side as I'm pretty sure we're getting sub-optimal networking performance.</div> <div><br> </div> <div>All datacenters are housed in HP C7000 Blade enclosures. The PROD datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster of two 4730s. These are configured RAID5 internally with NRAID1. The DEV and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a cluster of three P4500s configured with RAID10 internally and NRAID5.</div> <div><br> </div> <div>The HP C7000 each have two Flex10/10D interconnect modules configured in a redundant ring so that we can upgrade the interconnects without dropping network connectivity to the infrastructure. We use fat RHEL-H 7.2 hypervisors (HP BL460) and these are all configured with six network interfaces:</div> <div>- eno1 and eno2 are bond0 which is the rhevm interface</div> <div>- eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond using 802.1q</div> <div>- eno5 and eno6 are bond2 and dedicated to iSCSI traffic</div> <div><br> </div> <div>Is this the "correct" way to do this? If not, what should I be doing instead?</div> <div><br> </div> <div>Thanks</div> <div><br> </div> <div>CC</div> </div> <br> <fieldset class="mimeAttachmentHeader"></fieldset> <br> <pre wrap="">_______________________________________________ Users mailing list <a class="moz-txt-link-abbreviated" href="mailto:Users@ovirt.org">Users@ovirt.org</a> <a class="moz-txt-link-freetext" href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a> </pre> </blockquote> <br> </body> </html> --------------0B32486AFDADAE2F4CB49F09--

Hi Fernando The network is pretty much cast in stone now. I even if I could change it, I'd be relucant todo so as the firewall/router has 1Gb interfaces but the iSCSI SANs and blade servers are all 10Gb. Having these in different subnets will create a 1Gb bottle neck. Thanks On Sat, Jun 25, 2016 at 12:45 PM, Fernando Frediani < fernando.frediani@upx.com.br> wrote:
Hello Colin,
I know well all the equipment you have in your hands as I used to work with these during a long time. Great stuff I can say.
All seems Ok from what you describe, except the iSCSI network which should not be a bond, but two independent vlans (and subnets) using iSCSI multipath. Bond works, but it's not the recommended setup for these scenarios.
Fernando On 24/06/2016 22:12, Colin Coe wrote:
Hi all
We run four RHEV datacenters, two PROD, one DEV and one TEST/Training. They are all working OK but I'd like a definitive answer on how I should be configuring the networking side as I'm pretty sure we're getting sub-optimal networking performance.
All datacenters are housed in HP C7000 Blade enclosures. The PROD datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster of two 4730s. These are configured RAID5 internally with NRAID1. The DEV and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a cluster of three P4500s configured with RAID10 internally and NRAID5.
The HP C7000 each have two Flex10/10D interconnect modules configured in a redundant ring so that we can upgrade the interconnects without dropping network connectivity to the infrastructure. We use fat RHEL-H 7.2 hypervisors (HP BL460) and these are all configured with six network interfaces: - eno1 and eno2 are bond0 which is the rhevm interface - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond using 802.1q - eno5 and eno6 are bond2 and dedicated to iSCSI traffic
Is this the "correct" way to do this? If not, what should I be doing instead?
Thanks
CC
_______________________________________________ Users mailing listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
participants (3)
-
Colin Coe
-
Dan Yasny
-
Fernando Frediani