2013.01.14

Windows Server 2012 NIC Teaming Part 1 – Back To Basics

Windows Server 2012 NIC Teaming Part 2 – What’s What?

Windows Server 2012 NIC Teaming Part 3 – Switch Connection Modes

Windows Server 2012 NIC Teaming Part 4 – Load Distribution

Windows Server 2012 NIC Teaming Part 5 – Configuration Matrix

Up to now in this series of posts, I’ve been focusing on scenarios where you want to do NIC teaming using physical NICs on the host (configured in the management OS).  But what if you wanted to do NIC teaming in the virtual machine?  You can do this if you’re using:

  • Windows Server 2012 Hyper-V
  • Windows Server 2012 as the guest OS

Why the hell would you want to create NIC teams in the guest OS?  Look at the following image.  The virtual switch is connected to a NIC team.  If any physical NIC, or the physical network appliance it connects to, fails, then the virtual machines stay online.  That gives us LBFO (load balancing and failover) for the single virtual NICs.

But this architecture is not always applicable.  What if you decide to implement Single Root I/O Virtualization (SR-IOV)?  With this architecture, virtual machine network traffic bypasses the management OS network stack, and that means it bypasses the NIC team!  Virtual machines connect directly to Virtual Functions (VFs) on the physical NIC.  The host won’t give you LBFO then.  So what do you do?

The illustration below shows the NIC teaming architecture for SR-IOV.  There is no NIC team in the host.  In my example, each physical NIC is connected to a different physical access switch; this gives network path fault tolerance.  A virtual switch is created for each physical NIC (note: no NIC team in the management OS).  The virtual machine is created with two virtual NICs.  The guest OS (WS2012) is installed in the virtual machine, and WS2012 NIC teaming is configured in the guest OS.
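The host side of this architecture can be sketched in PowerShell.  This is a minimal sketch, not a definitive build: the NIC names ("NIC1", "NIC2"), switch names, and VM name are placeholder assumptions, and it assumes both physical NICs are SR-IOV capable.

```powershell
# Create one external virtual switch per physical NIC, with SR-IOV enabled.
# SR-IOV (-EnableIov) can only be enabled when the switch is created.
New-VMSwitch -Name "SRIOV-Switch1" -NetAdapterName "NIC1" -EnableIov $true -AllowManagementOS $false
New-VMSwitch -Name "SRIOV-Switch2" -NetAdapterName "NIC2" -EnableIov $true -AllowManagementOS $false

# Give the VM two virtual NICs, one connected to each virtual switch.
Add-VMNetworkAdapter -VMName "VM01" -Name "TeamNIC1" -SwitchName "SRIOV-Switch1"
Add-VMNetworkAdapter -VMName "VM01" -Name "TeamNIC2" -SwitchName "SRIOV-Switch2"

# Set an IOV weight (1-100) so each virtual NIC will use a
# Virtual Function on the physical NIC when one is available.
Set-VMNetworkAdapter -VMName "VM01" -Name "TeamNIC1" -IovWeight 100
Set-VMNetworkAdapter -VMName "VM01" -Name "TeamNIC2" -IovWeight 100
```

Note the deliberate absence of any New-NetLbfoTeam call on the host: the teaming happens later, inside the guest.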

Note: I said that SR-IOV enabled NICs route traffic through a virtual switch.  That is true, most of the time.  The virtual switch is mapped to a Physical Function (PF) on the NIC, the virtual NIC is connected to the virtual switch, and this creates an association with the physical NIC.  Traditional virtual networking (through the virtual switch) is used during Live Migration, and it is maintained if the destination host does not support SR-IOV.  And that’s why the below diagram is drawn as it is.

[Image: NIC teaming architecture for SR-IOV]

The benefit here is you get all the scalability and performance of SR-IOV, and you get LBFO.  The problem is that you’re now having to do LBFO in every VM that will require it (and we know they all will), instead of creating the NIC team once in the management OS.  This proves tricky in clouds, where IT has nothing to do with the inside of a VM, because clouds feature self-service.  That means guest OS administration is the responsibility of the delegated admin (a “user”: a developer, tester, application administrator, branch office admin, and so on).  These are often the people you least want touching complex infrastructure.

My advice when it comes to mixing SR-IOV and NIC teaming in a self-service cloud: be very careful.  I would not mix them.  I would use the legacy architecture, without SR-IOV, in a self-service cloud.  I would only use SR-IOV on the hosts that require it, i.e. the ones that will run virtual machines that IT manages.

There are some support statements from Microsoft for guest OS teaming:

Hyper-V Switches

To be supported, the virtual NICs in the guest OS NIC team must be connected to different virtual switches, as in the above diagram.  There is nothing to stop you putting all the virtual NICs on the same virtual switch, but it is not supported.

The virtual switches must be external virtual switches.  The team will be offline if connected to internal or private virtual switches.

Virtual NICs

Although you can have 12 virtual NICs in a virtual machine (by mixing Legacy and Synthetic types), a guest OS NIC team supports just 2 virtual NICs.  There is nothing to stop you creating larger NIC teams, but they will not be supported.

Do not configure VLAN IDs for virtual NICs that will be teamed. 

Note: to do guest OS NIC teaming, NIC teaming must be enabled in the advanced properties of each virtual NIC in the VM’s settings.
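The same setting can be flipped from the host with PowerShell.  The checkbox in the virtual NIC’s advanced properties corresponds to the AllowTeaming setting of Set-VMNetworkAdapter; the VM and adapter names below are placeholder assumptions from the earlier sketch.

```powershell
# Allow each of the VM's two virtual NICs to be a guest OS NIC team member.
Set-VMNetworkAdapter -VMName "VM01" -Name "TeamNIC1" -AllowTeaming On
Set-VMNetworkAdapter -VMName "VM01" -Name "TeamNIC2" -AllowTeaming On
```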

The Guest OS NIC Team

The only valid configuration for a NIC team created inside of a VM is:

  • Switch Independent: The team is connected to two Hyper-V external virtual switches.  They are not stacked, and they do not support static or LACP teaming.
  • Address Hashing: The team will use one of the three address hashing methods to distribute network traffic across the team members (the virtual NICs).

Helpfully, the GUI (LBFOADMIN.EXE) greys out these two settings when you create a NIC team in a guest OS.  Just keep these settings in mind if you try to create a guest OS NIC team using New-NetLBFOTeam.
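For the PowerShell route, a minimal sketch of the guest-side command follows.  It runs inside the WS2012 guest OS; the team name and connection names ("Ethernet", "Ethernet 2") are assumptions, so check yours with Get-NetAdapter first.  TransportPorts is one of the address hashing modes (the others are IPAddresses and MacAddresses).

```powershell
# Create the guest OS NIC team from the two virtual NICs.
# Switch Independent + Address Hashing is the only supported combination
# inside a VM, so both are stated explicitly here.
New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
```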

This information has been brought to you by the Windows Server 2012 Hyper-V Installation and Configuration Guide (available for pre-order on Amazon), where you’ll find lots of PowerShell like in this script:

[Image: PowerShell script]


Technorati Tags: Windows Server 2012,Hyper-V,Networking

8 comments so far

  1. Hi Aidan,

    I have a question though. What if there is more than one uplink switch? Assume:
    NIC1 NIC2 : Active/Active or Active/Passive
    | |
    SW1 SW2 : Blade Switch
    | |
    BB1—- BB2 : Backbone

    What if one of the BBs fails? The server will keep sending traffic unsuccessfully.
    I found a solution from Broadcom called “Live Link”, where the NIC teaming management keeps pinging along the link path and a NIC takes over based on the ping state.
    I wonder if Microsoft Teaming has a workaround solution for it?

    Regards,
    Ibrahim

    • There is more to network path fault tolerance. In your example, SW1 should connect to both BB1 and BB2, and this means that if BB1 fails, both NIC1 and NIC2 stay connected.

  2. Great article.

    Just to be sure: if I also wanted a NIC team in the management OS for Live Migration and such, I would have to configure 2 other physical NICs?

    Regards,

    Morten

    • Depends. See converged fabrics/networks.

  3. Hello
    I have a Dell R720 with 4 Broadcom NICs. I created an LACP team using all the NICs with a Dell 5548 switch. When I copy files from another server with the same configuration, I get close to 2 Gbps. However, when I configure a virtual NIC with Hyper-V using the teamed NIC, I can only get 1 Gbps.
    Can you tell me what I might be missing?

    • I don’t know the configuration of your Dell NIC teaming. It sounds like it might require something like Dynamic load distribution in WS2012 R2.

  4. Hi guys, I found this article http://technet.microsoft.com/en-us/library/jj735302.aspx and we tried to set up the model for 2 NICs with a team at http://technet.microsoft.com/en-us/library/jj735302.aspx#bkmk_2, but something goes wrong: we can’t bind the network adapter (with the specified VLAN working) to the new VMs we created. What do you think is happening here?
    regards,

    • I have no idea what you just asked. Rephrase it with shorter sentences and use specific WinServ terminology.
