Windows Server 2012 NIC Teaming and Multichannel

Notes from TechEd NA 2012 WSV314:

Terminology

  • It is a Team, not NIC bonding, etc.
  • A team is made of Team Members
  • Team Interfaces are the virtual NICs that can connect to a team and have IP stacks, etc.  You can call them tNICs to differentiate them from vNICs in the Hyper-V world.

Team Connection Modes

Most people don’t know which teaming mode they are selecting when they use OEM products.  MSFT is clear about what teaming does under the covers.  Connection mode = how do you connect to the switch?

  • Switch Independent can be used where the switch doesn’t need to know anything about the team.
  • Switch dependent teaming is when the switch does need to know something about the team. The switch decides where to send the inbound traffic.

There are 2 switch dependent modes:

  • LACP (Link Aggregation Control Protocol, IEEE 802.1ax) is where the host and switch agree on which NICs make up the team.
  • Static Teaming is where you manually configure the team membership on the switch.
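
The connection mode maps to the -TeamingMode parameter of New-NetLbfoTeam.  A minimal sketch of all three modes – the team and NIC names ("Team1", "NIC1", "NIC2") are just examples:

    # Switch independent – the switch knows nothing about the team
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

    # Switch dependent – LACP (IEEE 802.1ax) negotiated with the switch
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode LACP

    # Switch dependent – static teaming, configured manually on the switch
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Static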

Load Distribution Modes

You also need to know how you will spread traffic across the members of the team.

1) Address Hash comes in 3 flavours:

  • 4-tuple (the default): hashes the source/destination IP addresses and TCP/UDP ports.
  • 2-tuple: if the ports aren’t available (encrypted traffic such as IPsec) then it falls back to 2-tuple, hashing the IP addresses only.
  • MAC address hash: if it is not IP traffic, then the MAC addresses are hashed.

2) We also have Hyper-V Port, where it hashes the port number on the Hyper-V switch that the traffic is coming from.  Normally this equates to per-VM traffic.  There is no distribution of a single VM’s traffic: it maps a VM to a single team member.  If a VM needs more pipe than a single NIC can provide, this mode can’t give it more.  That shouldn’t be a problem, because we are consolidating after all.

Maybe create a team in the VM?  Make sure the vNICs are on different Hyper-V Switches. 
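
The load distribution mode maps to the -LoadBalancingAlgorithm parameter of New-NetLbfoTeam / Set-NetLbfoTeam.  A rough sketch, again with example names, showing the address hash flavours and Hyper-V Port:

    # Address hash, 4-tuple (TransportPorts is the default)
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -LoadBalancingAlgorithm TransportPorts

    # Address hash, 2-tuple (IP addresses only)
    Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm IPAddresses

    # MAC address hash
    Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm MacAddresses

    # Hyper-V Port – one team member per Hyper-V switch port (normally per VM)
    Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort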

SR-IOV

Remember that SR-IOV bypasses the host networking stack, so it can’t be teamed at the host level – the VM talks to the physical NIC’s virtual function directly.  You can, however, team two SR-IOV enabled vNICs in the guest OS for LBFO.

Switch Independent – Address Hash

Outbound traffic in Address Hashing will spread across NICs. All inbound traffic is targeted at a single inbound MAC address for routing purposes, and therefore only uses 1 NIC.  Best used when:

  • Switch diversity is a concern
  • Active/Standby mode
  • Heavy outbound but light inbound workloads

Switch Independent – Hyper-V Port

All traffic from each VM is sent out on that VM’s physical NIC or team member.  Inbound traffic also comes in on the same team member.  So we can maximise NIC bandwidth.  It also allows for maximum use of VMQs for better virtual networking performance.

Best for:

  • Number of VMs well exceeds number of team members
  • You’re OK with a VM being restricted to the bandwidth of a single team member

Switch Dependent – Address Hash

Sends on all active members by using one of the hashing methods.  Receives on all ports – the switch distributes inbound traffic.  No association between inbound and outbound team members.  Best used for:

  • Native teaming for maximum performance and switch diversity is not required.
  • Teaming under the Hyper-V switch when a VM needs to exceed the bandwidth limit of a single team member.  Note that this is not as efficient with VMQ, because we can’t predict which team member a VM’s inbound traffic will arrive on.

Best performance for both inbound and outbound.

Switch Dependent – Hyper-V Port

Sends on all active members using the hashed Hyper-V switch port – 1 team member per VM.  Inbound traffic is distributed by the switch across all members, so there is no correlation between inbound and outbound team members.  Best used when:

  • The number of VMs on the switch well exceeds the number of team members AND
  • You have a policy that says you must use switch dependent teaming.

When using Hyper-V you will normally want to use Switch Independent & Hyper-V Port mode. 

When using native physical servers you’ll likely want to use Switch Independent & Address Hash, unless you have a policy that can’t tolerate a switch failure.

Team Interfaces

There are different ways of interfacing with the team:

  • Default mode: all traffic from all VLANs is passed through the team
  • VLAN mode: Any traffic that matches a VLAN ID/tag is passed through.  Everything else is dropped.

Inbound traffic passes through to only one team interface.

The only supported configuration for Hyper-V is a team with a single Default mode team interface passing all traffic through to the Hyper-V Switch.  Do all of the VLAN tagging and filtering on the Hyper-V Switch.  You cannot mix other team interfaces with this team – the team must be dedicated to the Hyper-V Switch.  REPEAT: this is the only supported configuration for Hyper-V.

A new team has one team interface by default. 

Any team interfaces created after the initial team creation must be VLAN mode team interfaces (bound to a VLAN ID).  You can delete these team interfaces.

Get-NetAdapter: Get the properties of a team interface

Rename-NetAdapter: rename a team interface
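
As a sketch, adding and working with a VLAN-bound team interface looks something like this (the team name, the VLAN ID, and the "Team1 - VLAN 100" interface name are assumptions for the example):

    # Add a second team interface bound to VLAN 100; the default team interface remains
    Add-NetLbfoTeamNic -Team "Team1" -VlanID 100

    # List the team interfaces on the team
    Get-NetLbfoTeamNic -Team "Team1"

    # Team interfaces also show up as ordinary adapters
    Get-NetAdapter -Name "Team1 - VLAN 100"
    Rename-NetAdapter -Name "Team1 - VLAN 100" -NewName "Backup-VLAN100"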

Team Members

  • Any physical ETHERNET adapter with a Windows Logo (for stability reasons and promiscuous mode for VLAN trunking) can be a team member.
  • Teaming of InfiniBand, Wi-Fi, and WWAN adapters is not supported.
  • Teams made up of teams (nested teams) are not supported.

You can have team members in active or standby mode.
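
For example (member and team names assumed), putting a member into standby looks like this:

    # Put one member into standby; it takes over when an active member fails
    Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Standby

    # Return it to active duty
    Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Active

    # Check member states
    Get-NetLbfoTeamMember -Team "Team1"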

Virtual Teams

Supported if:

  • No more than 2 team members in the guest OS team

Notes:

  • Intended for SR-IOV NICs but will work without SR-IOV.
  • Both vNICs in the team should be connected to different virtual switches on different physical NICs

If you try to team a vNIC that is not connected to an External virtual switch, it will appear to be fine until you actually team it.  Teaming will shut down the vNIC at that point. 

You also have to allow teaming on each vNIC in the VM’s settings (Advanced Features – NIC Teaming).  Do this for each of the VM’s vNICs.  Without this, failover will not succeed. 
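
A minimal sketch of preparing a guest team from the host side, assuming a VM named "VM01", vNICs named "vNIC1"/"vNIC2", and two external switches "External1"/"External2" on different physical NICs:

    # Connect each vNIC to a different external virtual switch
    Add-VMNetworkAdapter -VMName "VM01" -Name "vNIC1" -SwitchName "External1"
    Add-VMNetworkAdapter -VMName "VM01" -Name "vNIC2" -SwitchName "External2"

    # Allow each vNIC to be a member of a team in the guest OS
    Set-VMNetworkAdapter -VMName "VM01" -Name "vNIC1" -AllowTeaming On
    Set-VMNetworkAdapter -VMName "VM01" -Name "vNIC2" -AllowTeaming On

The team itself (a maximum of two members) is then created inside the guest OS with New-NetLbfoTeam as usual.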

PowerShell CMDLETs for Teaming

The UI is actually using POSH under the hood.  You can use the NIC Teaming UI to remotely manage/configure a server, using RSAT for Windows 8.  WARNING: your remote access will need to run over a NIC that you aren’t altering, otherwise you will lose connectivity.
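
Because the teaming cmdlets are CIM-based, they can also be pointed at a remote server.  A sketch, assuming a remote host named "HV01" and example NIC names (the warning above about which NIC carries your remote session still applies):

    # Manage teams on a remote host over a CIM session
    $cim = New-CimSession -ComputerName "HV01"
    Get-NetLbfoTeam -CimSession $cim
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -CimSession $cim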

Supported Networking Features

NIC teaming works with almost everything.

TCP Chimney Offload, RDMA and SR-IOV bypass the stack so obviously they cannot be teamed in the host.

Limits

  • 32 NICs in a team
  • 32 teams
  • 32 team interfaces in a team

That’s a lot of quad port NICs.  Good luck with that! ;)

SMB Multichannel

An alternative to a team in an SMB 3.0 scenario.  It can use multiple NICs with the same connectivity, and it can use multiple cores via RSS to run simultaneous streams over a single NIC (RSS) or over many NICs (teamed or not teamed, and also with RSS if available).  Basically, it leverages more bandwidth to get faster SMB 3.0 throughput.

Without it, a 10 GbE NIC would only be partly used by SMB – a single CPU core trying to transmit.  RSS makes the transfer multi-threaded across cores, and therefore the data transfer can use many connections.

Remember – you cannot team RDMA NICs, so another case for “using” SMB Multichannel is to get an LBFO effect without a team … and I say “using” loosely, because SMB 3.0 turns it on automatically if multiple paths are available between the client and the server.
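
There is nothing to build; at most you verify it is on or switch it off.  A minimal sketch using the standard SMB cmdlets (run the client lines on the client and the server lines on the server):

    # Check whether Multichannel is enabled (it is on by default)
    Get-SmbClientConfiguration | Select-Object EnableMultiChannel
    Get-SmbServerConfiguration | Select-Object EnableMultiChannel

    # Turn it off if you really must
    Set-SmbClientConfiguration -EnableMultiChannel $false
    Set-SmbServerConfiguration -EnableMultiChannel $false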

SMB 3.0 is NUMA aware.

Multichannel will only use NICs of the same speed/type.  You won’t see traffic spread over a 10 GbE and a 1 GbE NIC, for example, or over RDMA-enabled and non-RDMA NICs. 

In tests, the throughput on RSS-enabled 10 GbE NICs (1, 2, 3, and 4 NICs) seemed to grow at a predictable, near-linear rate.

SMB 3.0 uses a shortest queue first algorithm for load balancing – basic but efficient.

SMB Multichannel and Teaming

Teaming allows for faster failover.  MSFT recommends teaming where applicable.  Address Hash mode with Multichannel can be a nice solution.  Multichannel will detect a team and create multiple connections over the team.

RDMA

If RDMA is possible on both client and server then SMB 3.0 switches over to SMB Direct.  Network monitoring will see the negotiation, and then … “silence” for the data transmission.  Multichannel is supported across single or multiple RDMA NICs – but no NIC teaming, remember!

Won’t Work With Multichannel

  • Single non-RSS capable NIC
  • Different type/speed NICs, e.g. 10 GbE RDMA favoured over 10 GbE non-RDMA NIC
  • Wireless NICs can be failed over from, but won’t be used in Multichannel

Supported Configurations

Note that Multichannel over a team of NICs is favoured over Multichannel over the same NICs when they are not in a team.  You get the added benefits of teaming (the teaming modes, and fast failover detection).  This applies whether the NICs are RSS capable or not, and the team also benefits non-SMB 3.0 traffic.

Troubleshooting SMB Multichannel

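A few SMB cmdlets show what Multichannel sees and is actually using; a minimal sketch:

    # What NICs does each side think are usable (speed, RSS capable, RDMA capable)?
    Get-SmbClientNetworkInterface
    Get-SmbServerNetworkInterface

    # Which Multichannel connections are currently in use?
    Get-SmbMultichannelConnection

    # Force a rescan after changing NICs
    Update-SmbMultichannelConnection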

Plenty to think about there, folks!  Where does it apply in Hyper-V?

  • NIC teaming obviously applies.
  • Multichannel applies in the cluster: redirected IO over the cluster communications network
  • Storing VMs on SMB 3.0 file shares

13 thoughts on “Windows Server 2012 NIC Teaming and Multichannel”

  1. Hello Aidan,

    Do you maybe know why when creating additional Team Interfaces for a Team(switch independent/address hash), the Team Interfaces have the same MAC address?
    Thanks
    Zarko

  2. I followed your directions. Nic teaming was enabled. The server had internet connectivity but no way to access the server using remote desktop. The Nic was not available via DNS. What about the “fine art” of nic teaming? So what if the server enables nic teaming but no one can connect to the server because Nic Teaming is enabled?

  3. SMB-MC combined with LBFO-Team (Address Hash) only applies to 10Gb NICs?  It’s written everywhere that combining the two will yield combined bandwidth, but I can’t see how… Address hash receive comes down on one NIC, so max bandwidth is 1 NIC, even if the team has 6, right?  Yes, SMB-MC sees the team and creates more connections, but that doesn’t help if you want to get 6Gb with a team of 6 x 1 Gb NICs, or am I doing something wrong?  Without the team I do get 6Gb max across from node1 to node2… (talking 2012 R2, tried Address Hash, Hyper-V Port and Dynamic, always get 1 NIC max throughput)

    1. According to Jose Barreto, SMB Multichannel + NIC Team gives you the fastest failover. I suspect there is combined bandwidth using Address Hashing or Dynamic load balancing (NIC to NIC x 2). But remember: this is only applicable if you do not have RDMA.

  4. Do you know if it’s possible to apply bandwidth limitation to sub interfaces ?
    Is it possible to have multiple VLAN tunneled into the same sub interface ?
    If an interface is added to a TEAM carrying the VLAN 100 does it mean that that VLAN is removed from the ‘default’ TEAM INTERFACE ?

    Is it still true that using the TEAM interface bound to hyper-v virtual switch to add a sub interface is not supported ? ( I tested and it works however )

    thanks

    1. I have no idea what you mean by sub interface. Try using the MSFT terminology: Team NIC, Team, Team Interface.

  5. Hello,

    I could use any help possible on how to configure multichannel between 2 Win8.1 PCs. I have 2 PCs connected and I often exchange several TB of data between them. Each PC is running Windows 8.1 x 64 enterprise.

    I have 2 onboard realtek cards and 1 add on Intel CT card.

    I would like to have 2 nics directly attached between the 2 PCs with no gateway for the transfers and 1 nic just for internet on each with a default gateway. No switch in between the nics.

    Any info would be helpful with details. I found many articles about what you need – e.g. a NIC with RSS receive and that it is enabled by default – but no actual implementation details for Windows 8.1. There’s plenty for Windows Server 2012 on how you get better performance and Hyper-V improvements, but I need a spoon-feed, actual how-to-configure-it article if possible.

    Thanks

    I have the 2 nics that are directly attached between the 2 PCs (4 nics total) configured for 192.168.101.1, 2, 3 and 4 – no gateway – now how do I get them to bond/multichannel. I imagine there is a next step.

    1. Multichannel just works. If it finds more than 2 paths between client and server, it uses them. The only configuration is to turn it off (it’s on by default) or to restrict which NICs are used with New-SmbMultichannelConstraint.

  6. Hello Aidan,

    Thanks for the article, it is the most extensive and best explained I have seen…

    Q: what is wrong with teaming when you do not get any internet connectivity? Since mine is a standard HP server with two NICs and no special config, I was wondering… and without teaming it works seamlessly.
