Some Different Converged Fabric Architectures For Windows Server 2012 Hyper-V

Converged fabrics give us options; there’s no one right way to implement them, and a browse around TechNet will show you that.  Options are good.  Maybe you don’t like options, in which case you pick an architecture, script the deployment, and reuse that script for every host configuration.  The benefit of that approach is extreme standardisation, and it removes most of the human element where mistakes happen.

Sample Configuration 1 – Standalone Host Using All The NICs

[Diagram: Sample Configuration 1]

Right now I’m thinking to myself, “how many people looked at the picture, thought that this stuff is only for big companies, and didn’t bother reading this text?”  They’d be missing out on something important, including the small business with just a couple of VMs.

In this example, a small company is installing a single host or a few non-clustered hosts.  Or it could be a hosting company installing dozens or hundreds of non-clustered hosts.  The server comes with either 4 * 1 GbE NICs or 2 * 10 GbE NICs.  All the NICs are teamed.  A single virtual switch is created and bound to the team, and the VMs talk through it.  Then a single virtual NIC is created in the management OS for managing and connecting to the host.

The benefit is that all functions of the host and VMs go through a single LBFO team.  I can script the entire setup by adding all NICs into the team; there’s no figuring out which NIC is which.  Combined with QoS, I also get link aggregation, meaning lots of pipe, even with 4 * 1 GbE NICs.
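If you want to script it, here’s a minimal PowerShell sketch of this configuration.  The adapter, team, and switch names are placeholders of my own choosing, and the bandwidth weight is just an example, so adjust to taste.

```powershell
# Minimal sketch of Sample Configuration 1 (names and weights are placeholders)

# Team every NIC in the host into a single LBFO team
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Create one virtual switch on the team; weight-based QoS lets us carve up the pipe later
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -AllowManagementOS $false -MinimumBandwidthMode Weight

# A single management OS virtual NIC for managing and connecting to the host
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
```

Every host gets exactly the same script, which is where the standardisation comes from.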

Sample Configuration 2 – Clustered Host With SAS/Fibre Channel

[Diagram: Sample Configuration 2]

In this example, I have two additional virtual NICs in the management OS, giving me cluster communications (and CSV) and Live Migration networks.  All three management OS virtual NICs, and the VM traffic, are probably isolated on separate VLANs through VLAN ID binding and trunking on the physical switch ports of the converged fabric.

The benefit of this example is that I’ve been able to switch to 10 GbE using the two on-board NICs that come in the new DL380 and R720.  I don’t need 8 NICs (4 functions * 2 teamed NICs each) for these connections like I would have needed in W2008 R2.  I get access to a big pipe with far fewer switch ports and NICs, with QoS guaranteeing quality of service with burst capability.
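Building on the previous sketch, the extra management OS virtual NICs might be added something like this.  The VLAN IDs and weights are only examples; as I keep saying, we’re still waiting on final guidance.

```powershell
# Sketch of the extra converged virtual NICs in Sample Configuration 2
# (VLAN IDs and bandwidth weights are illustrative only)
foreach ($vNic in @(
        @{ Name = "Cluster";       Vlan = 11; Weight = 10 },
        @{ Name = "LiveMigration"; Vlan = 12; Weight = 30 })) {

    Add-VMNetworkAdapter -ManagementOS -Name $vNic.Name -SwitchName "ConvergedSwitch"

    # Isolate each function on its own VLAN; trunk these IDs on the physical switch ports
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $vNic.Name `
        -Access -VlanId $vNic.Vlan

    # Weight-based QoS guarantees a minimum share of the 10 GbE pipe but still allows bursting
    Set-VMNetworkAdapter -ManagementOS -Name $vNic.Name -MinimumBandwidthWeight $vNic.Weight
}
```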

Sample Configuration 3 – Clustered Host with Physically Isolated iSCSI

[Diagram: Sample Configuration 3]

The one major rule we have with iSCSI NICs is to never use NIC teaming; we use MPIO for pairs of iSCSI NICs.  But what if we want to converge the iSCSI fabric as well?  We’re still in Release Candidate days, so there are no right/wrong answers, best practices, or support statements yet.  We just don’t know.  In my demos, I’ve had a single virtual NIC for iSCSI without using DCB.  If I wanted to be a bit more conservative, I could use the above configuration, which takes the previous configuration and adds a pair of physically isolated NICs for iSCSI.
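For the physically isolated pair, the iSCSI side might look roughly like the following.  It’s only a sketch: the IP addresses and target IQN are invented, and you should check what your SAN vendor actually supports before doing anything like this.

```powershell
# Rough sketch of the physically isolated iSCSI pair in Sample Configuration 3
# (addresses and the target IQN are placeholders)

# MPIO, not NIC teaming, handles path redundancy for the two iSCSI NICs
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# One session per dedicated iSCSI NIC, with multipath enabled
$paths = @(
    @{ Portal = "10.0.1.50"; Initiator = "10.0.1.11" },
    @{ Portal = "10.0.2.50"; Initiator = "10.0.2.11" })
foreach ($path in $paths) {
    New-IscsiTargetPortal -TargetPortalAddress $path.Portal `
        -InitiatorPortalAddress $path.Initiator
    Connect-IscsiTarget -NodeAddress "iqn.2012-01.com.example:target1" `
        -TargetPortalAddress $path.Portal -InitiatorPortalAddress $path.Initiator `
        -IsMultipathEnabled $true -IsPersistent $true
}
```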

Sample Configuration 4 – Clustered Host with SMB 3.0 and Physically Isolated Virtual Switch

[Diagram: Sample Configuration 4]

The above is one that was presented at the Build conference last September.  The left machine is an SMB 3.0 file server for storing the VMs’ files.  The virtual switch is physically isolated, using a pair of teamed NICs.  Another NIC team in the host has virtual NICs directly connected to it for the management OS and cluster functions.

A benefit of this is that RSS can be employed on the management OS NIC team to give us SMB 3.0 Multichannel: multiple SMB data streams over multiple RSS-capable NICs.  The virtual switch team can use the Hyper-V Port load distribution mode, and DVMQ can be enabled to optimise VM networking, assuming the NICs support it.  Note that DVMQ and RSS should not be used on the same NICs; that’s why the loads are physically isolated here.
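If you wanted to script that split, a rough sketch might look like this.  The physical adapter names are placeholders, and you’d obviously check what your NICs actually support first.

```powershell
# Sketch of the RSS/DVMQ split in Sample Configuration 4 (adapter names are placeholders)

# Management/SMB team members: RSS on, VMQ off, so SMB Multichannel can spread across CPUs
foreach ($nic in "MGMT-NIC1","MGMT-NIC2") {
    Enable-NetAdapterRss  -Name $nic
    Disable-NetAdapterVmq -Name $nic
}

# Virtual switch team members: VMQ on, RSS off, so VM traffic gets Dynamic VMQ
foreach ($nic in "VM-NIC1","VM-NIC2") {
    Enable-NetAdapterVmq  -Name $nic
    Disable-NetAdapterRss -Name $nic
}
```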

I’m sure if I sat down and thought about it, there would be many more configurations.  Would they be best practice?  Would they be supported?  We’ll find out later on.  But I do know for certain that I can reduce my NIC requirements and increase network path fault tolerance with converged fabrics.

8 thoughts on “Some Different Converged Fabric Architectures For Windows Server 2012 Hyper-V”

  1. Very interesting topic. Some more things to discuss in the entire picture:
    Redundant network connectivity for iSCSI guest clustering
    RDMA/SR-IOV & NIC teaming => gains & losses 🙂

    1. Yeah, that’s why we have choices and options. It’ll be all about balancing acts. If my NICs don’t do RDMA then I don’t care. If my VM workloads don’t justify SR-IOV then I don’t care. If I need RSS then I split out from VMQ NICs, etc. I think we have to wait until best practices are published, and then we can start nailing colours to masts.

  2. I found a post that is almost exactly the same as your Sample Configuration 3 except that they include iSCSI. This is what I would like to try since my hosts have 2 10GbE NICs teamed together. I am nervous about trying this though because the majority of articles describing converged fabric separate the storage network element.

    http://blogs.technet.com/b/meamcs/archive/2012/05/06/converged-fabric-in-windows-server-2012-hyper-v-server-8-beta.aspx

    1. The problem with converging iSCSI is switch performance and SAN manufacturer support. Most major manufacturers only support iSCSI on dedicated switches which cannot be done when the iSCSI NICs are converged.

  3. Great post Aidan
    A small question - on pictures 2 and 3 the Hyper-V extensible switch is connected to two 10 Gb physical NICs. Shouldn’t these two NICs be teamed, and the Hyper-V extensible switch connected to that NIC team, similar to picture 4?

  4. I am still laying out plans for upgrading our environment to Server 2012 with NIC Teams.
    We are also looking at moving away from VLANs to use NVGRE to segregate our client networks.

    Since I understand NVGRE encapsulates and sends the VM traffic to the host IP/vSwitch, how does the NIC team come into play? Would it always use one NIC of the team? Is there a way to get a NIC team to load balance when using NVGRE without using LACP on the switches and teams?
