WS2012 Hyper-V Networking On HP ProLiant Blades Using Just 2 FlexFabric Virtual Connects

On another recent outing, I got to play with some Gen8 HP blade servers.  I was asked to come up with a networking design in which (please bear in mind that I am not a h/w guy):

  • The blades would have a dual-port 10 Gbps mezzanine card that appeared to be doing FCoE
  • There were 2 Flex Fabric virtual connects in the blade chassis
  • They wanted to build a WS2012 Hyper-V cluster using Fibre Channel storage

I came up with the following design:

The 2 FCoE (I’m guessing that’s what they were) adapters were each given a static 4 Gbps slice of the bandwidth from each Virtual Connect (2 * 4 Gbps), which would match 4 Gbps Fibre Channel (FC).  MPIO was deployed to “team” the FC HBAs.
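
For reference, getting MPIO going on WS2012 is only a few commands.  This is just a minimal sketch, assuming the in-box Microsoft DSM rather than HP’s own DSM:

    # Install the MPIO feature (ServerManager module)
    Install-WindowsFeature -Name Multipath-IO

    # Claim all storage device paths with the Microsoft DSM:
    # -r = reboot when done, -i = install/claim, -a "" = all storage devices
    mpclaim.exe -r -i -a ""

    # Optionally make round-robin the default load-balancing policy
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR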

One Ethernet NIC was presented from each Virtual Connect to each blade (2 per blade), with each NIC getting 6 Gbps.  WS2012 NIC teaming was used to team these NICs, and then we deployed a converged networking design in WS2012, using virtual NICs and QoS to dynamically carve up the bandwidth of the virtual switch (attached to the NIC team).
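
For the curious, the converged piece is only a handful of PowerShell cmdlets.  The sketch below is illustrative only — the team, switch, and vNIC names and the QoS weights are placeholders, not the values from this deployment:

    # Team the two 6 Gbps NICs presented by the Virtual Connects
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # Create a virtual switch on the team with weight-based QoS
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Management OS virtual NICs for each traffic class
    Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

    # Dynamically carve up bandwidth with minimum weights (placeholder values);
    # the default flow weight covers the VM traffic itself
    Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
    Set-VMSwitch "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 50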

In testing, Live Migration ran at the full 6 Gbps, moving a VM with 35 GB RAM via TCP/IP Live Migration in 1 minute and 8 seconds.
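
That number passes a quick sanity check: 35 GB of RAM is roughly 280 gigabits, and 280 Gb ÷ 6 Gbps is about 47 seconds of raw copy time, so 68 seconds is about what you’d expect once Live Migration’s iterative re-copying of dirtied memory pages is factored in.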

For WS2012 R2, I’d rather have 2 * 10 GbE for the 2 cluster & backup networks and 2 * 1 or 10 GbE for the management and VM network.  If the VC allowed it (I didn’t have the time to check), I might have tried the below.  This would reduce the demands on the NIC team (actual VM traffic is usually light, but an assessment is required to determine that) and allow an additional 2 non-teamed NICs:

Leaving the 2 new NICs (running at 4 Gbps) non-teamed keeps open the option of using SMB 3.0 storage (without RDMA/SMB Direct) on a Scale-Out File Server.  However, the big plus of SMB 3.0 Multichannel would be that I would now have a potential 8 Gbps to use for Live Migration via SMB 3.0!  But this is assuming that I could carve up the networking like this via Virtual Connects … and I don’t know if that is actually possible.
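
If the carve-up did work, the WS2012 R2 side would be trivial.  A sketch, assuming hypothetical interface aliases (SMB1/SMB2) and a hypothetical file server name — neither is from the actual build:

    # WS2012 R2: use the SMB transport for Live Migration so SMB Multichannel
    # can stripe the copy across both non-teamed 4 Gbps NICs
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

    # Pin SMB traffic to the two dedicated NICs for a given SOFS target
    New-SmbMultichannelConstraint -ServerName "FileServer1" `
        -InterfaceAlias "SMB1","SMB2"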

4 thoughts on “WS2012 Hyper-V Networking On HP ProLiant Blades Using Just 2 FlexFabric Virtual Connects”

  1. Yeah, it is possible, depending on the blade server. You can do that even with a half-height server like the BL460c and 2 FlexFabric modules.

  2. With this config you can have the OS see up to 4 independent NICs (called FlexNICs) per 10 Gbit port, one of which can be iSCSI-accelerated or FCoE, so you can do the second scenario.
    Better still, with the latest Virtual Connect firmware (4.01, from June 2013) you can have dynamic bandwidth assignment (you can set minimum and maximum bandwidth per FlexNIC), so you can have up to 20 Gbps Live Migration AND up to 20 Gbps management & VM traffic and up to 20 Gbps FCoE (not all at once, of course).
    Another trick is to optimize the Virtual Connect LAN for Live Migration for east-west traffic, so the Live Migration traffic remains within the Virtual Connect and does not go to the core (while the management and VM traffic is usually better configured for north-south traffic).
    The problem is that all the configs are applied at server startup, and that is annoying.

  3. Hi Aidan,

    I like the sound of what you were able to achieve, although I’m not seeing either of the two graphics you referenced.
    Can you please check that the links are valid?

    thanks,

    Craig

  4. Hello Aidan,

    Images are not visible. Could you please share those images or look into this?

    Thanks
    Arun
