2012.05.24

Every time Microsoft gave us a new version of Hyper-V (including W2008 R2 SP1) we got more features, bringing the solution closer in functionality to the competition.  With the current W2008 R2 SP1 release, I reckon we have a solution that is superior to most vSphere deployments (think of the features that are actually licensed or used).  Every objection, one after the next, was knocked down: Live Migration, CSV, Dynamic Memory, and so on.  The last objection was NIC teaming … VMware had it but Microsoft didn’t have a supported solution.

True, MSFT hasn’t had NIC teaming, and there’s a KB article which says they don’t support it.  NIC teaming is something that the likes of HP, Dell, Intel and Broadcom provided using their own software.  If you had a problem, MSFT might ask you to remove it.  And guess what: just about every networking issue I’ve heard of on Hyper-V was driver or NIC teaming related.

As a result, I’ve always recommended against NIC teaming using OEM software.

We want NIC teaming!  That was the cry … every time, every event, every month.  And the usual response from Microsoft was “we heard you but we don’t talk about futures”.  Then Build came along in 2011, and they announced that NIC teaming would be included in W2012 and fully supported for Hyper-V and Failover Clustering.


NIC teaming gives us LBFO (load balancing and failover).  In other words, we can aggregate the bandwidth of NICs and have automatic failover between NICs.  If I had 2 * 10 GbE NICs then I could team them to have a single pipe of 20 Gbps while both NICs are working and connected.  For failover, we typically connect the two NICs to ports on different access switches.  If one switch fails, its NIC becomes disconnected, but the other NIC stays connected and the team stays up and running, leaving the dependent services available to the network and their clients.

Here’s a few facts about W2012 NIC teaming:

  • We can connect up to 32 NICs in a single team.  That’s a lot of bandwidth!
  • NICs in a single team can be different models from the same manufacturer or even NICs from different manufacturers.  Seeing as drivers can be troublesome, maybe you want to mix Intel and Broadcom NICs in a team for extreme uptime.  Then a dodgy driver has a lesser chance of bringing down your services.
  • There are multiple teaming modes: Generic/Static Teaming requires the switches to be configured for the team and isn’t dynamic.  LACP is self-discovering and enables dynamic expansion and reduction of the NICs in the team.  Switch Independent requires no switch configuration at all – the switches have no knowledge of the team, so the team members can even connect to different switches.
  • There are two traffic distribution modes for the NIC team.  With Hyper-V Port mode, a VM’s traffic is limited to a single NIC.  On lightly loaded hosts, this might not distribute the network load across the team.  Apparently it can work well on heavily loaded hosts with VMQ enabled.  Address hashing uses a hashing algorithm to spread the load across NICs.  There is 4-tuple hashing (great distribution), but it doesn’t work with “hidden” protocols such as IPsec and falls back to 2-tuple hashing.

NIC teaming is easy to set up.  You can use Server Manager (under Local Server) to create a team.  This GUI is similar to what I’ve seen from OEMs in the past.


You can also use PowerShell cmdlets such as New-NetLbfoTeam and Set-VMNetworkAdapter.
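As a minimal sketch (the team and adapter names below are placeholders – substitute the names that Get-NetAdapter reports on your host), creating a switch-independent team with Hyper-V Port load distribution looks something like this:

```powershell
# List physical adapters to find the member names (names below are examples)
Get-NetAdapter

# Create a two-member team; -TeamingMode can be SwitchIndependent,
# Static or Lacp, and -LoadBalancingAlgorithm can be HyperVPort,
# TransportPorts, IPAddresses or MacAddresses
New-NetLbfoTeam -Name "HostTeam" `
    -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent `
    -LoadBalancingAlgorithm HyperVPort

# Confirm the team status and its members
Get-NetLbfoTeam -Name "HostTeam"
Get-NetLbfoTeamMember -Team "HostTeam"
```

The same cmdlet is where you pick the teaming mode and distribution mode discussed above, so the GUI and PowerShell paths produce the same kind of team.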

One of the cool things about a NIC team is that, just like with the OEM versions, you can create virtual networks/connections on a team.  Each of those connections has an IP stack, its own policies, and VLAN binding.
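For example, an extra team interface bound to a VLAN can be added with Add-NetLbfoTeamNic (a sketch – the team name and VLAN ID are placeholders):

```powershell
# Add an extra team interface on the existing team, tagged with VLAN 10;
# it appears to Windows as another NIC with its own IP stack
Add-NetLbfoTeamNic -Team "HostTeam" -VlanID 10 -Name "HostTeam - VLAN 10"

# List all interfaces on the team, including the default one
Get-NetLbfoTeamNic -Team "HostTeam"
```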


In the Hyper-V world, we can use NIC teams to do LBFO for important connections.  We can also use them for creating converged fabrics.  For example, I can take a 2U server with 2 * 10 GbE connections and use that team for all traffic.  I will need some more control … but that’s another blog post.
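To sketch the idea (the switch/adapter names are placeholders and the QoS weights are examples, not recommendations): you bind a Hyper-V external switch to the team, then create management OS virtual NICs with minimum bandwidth weights so each traffic type gets a guaranteed share of the pipe:

```powershell
# Create an external virtual switch on the team, using weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "HostTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add virtual NICs in the management OS for host traffic
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Reserve a share of the converged pipe for each traffic type (example weights)
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
```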

    10 comments so far

    1. I did this with 4 NICs. I tried every method of teaming (switch independent/static and LACP). When I transfer “random files” through SMB I get full speed (350-380MB/s). But everything that involves Hyper-V (replication/moving VMs) only gets a max of 1GBit/s (115MB/s). Also, if I make the team a Hyper-V switch with management, the whole server goes down to 1GBit/s (it still identifies the team as 4GBit/s). Do you know if it’s by design?

      (it’s the same, whatever method I use)
      (both servers have an E6545 CPU, 48GB RAM, 4x300GB 10,000rpm disks in RAID 10 with 512MB RAM)

      • Thanks for your response Aidan!

        Are you saying that even if I run LACP with Address Hash on my team, Hyper-V Port is still used for Hyper-V specific things? And it behaves as LACP with Hyper-V Port, even though it’s Address Hash that is configured?

        Just tried running all the guest VMs on their own NIC (non-teamed), and my team is no longer a switch in Hyper-V. The team I’m using (4Gbit/s) now runs only replication, relocation and management. Now I get 4Gbit/s through SMB and still only 1Gbit/s during replication and relocation (everything related to Hyper-V).

        Still expected behavior?

        • Don’t believe so. If you have SMB going natively through a correctly configured team (also look at Switch Independent/Dependent versus your number of switches) then multichannel should kick in if the recipient side is also capable.

        • Hank,
          Have a look at section 3.4 of http://www.microsoft.com/en-us/download/details.aspx?id=30160. If your team is now switch dependent (LACP) and using port hashing, it should be spreading the load across all the NICs in the team, esp. if doing SMB 3.0 transfers.
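          For what it’s worth, one way to check whether SMB Multichannel is actually in play during a transfer (on WS2012 or later) is:

          ```powershell
          # Run on the SMB client while a copy is in progress; multiple
          # connections per server indicate multichannel is active
          Get-SmbMultichannelConnection
          Get-SmbClientNetworkInterface
          ```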

    2. Are there “best practice” settings for teaming for specific cluster networks such as live migration and cluster communications?

    3. I’m looking for a quad Intel NIC card (a NIC with 4 network ports) fully compatible with Server 2008 R2 & Server 2012.

      This NIC needs to be fully supportive of and certified to be used with Server 2008 R2 and Server 2012 Hyper-V.

      We will put this in our existing server which is going to be upgraded to Server 2012 in two months.

      The problem I am having when I do a Google search for such an item is that most of the NICs are older and are supported only for Server 2003 or Server 2008.


      Please provide me with the exact model numbers for such NICs so that I can then further research them and then make my purchase.

      • You can search the Hardware Compatibility List as well as I can.

    4. The problem with upgrading from 2008 to 2012 with a server that already has an Intel team set up is that 2012 requires not only the unteaming of the NICs but also the uninstallation of the Intel software. This is practically impossible to do. The closest instructions we found were on the Intel website, and they were convoluted and confusing; after spending 4 hours just on this, we are planning to reformat the hard drive and start from scratch.

      We can’t even do a boot installation for 2012 if it senses that there is already a Windows OS on the server.

      Good luck upgrading.

    5. Good explanation of the teaming capability in Windows 2012. What I do not understand is how it works with teaming using the Broadcom NIC config utility. Do you configure teaming in Windows Server 2012 only, in the Broadcom utility as well, or definitely not in both places? Do you know?

      • Use the supported WS2012 NIC teaming. Get rid of the unsupported 3rd party NIC teaming. Simple.
