A WS2012 Hyper-V Converged Fabric Design With Host And Guest iSCSI Connections

A friend recently asked me a question. He had deployed a Windows Server 2012 cluster with converged fabrics, with a limited number of NICs he could install and a limited number of switch ports he could use. His Hyper-V host cluster uses a 10 GbE connected iSCSI SAN. He also wants to run guest clusters that connect to this storage. In the past, I would have said: “you need another pair of NICs on the iSCSI SAN and a virtual network on each to connect the virtual machines”. But now … we have options!

Here’s what I have come up with:

[Diagram: the converged fabric design – the dedicated iSCSI NIC team and virtual switch are on the left]

iSCSI storage typically has these two requirements:

  • Two NICs to connect to the SAN switches
  • Each NIC on a different subnet

In the diagram focus on the iSCSI piece.  That’s the NIC team on the left.

The Physical NICs and Switches

As usual with an iSCSI SAN, there are two dedicated switches for the storage connections. That’s a common (though not universal) support requirement from SAN manufacturers. This is why we don’t have complete convergence to a single NIC team, as you see in most examples.

The host will have 2 iSCSI NICs (10 GbE).  The connected switch ports are trunked, and both of the SAN VLANs (subnets) are available via the trunk.

The NIC Team and Virtual Switch

A NIC team is created. The team is configured with Hyper-V Port load distribution (load balancing), which means a single virtual NIC cannot exceed the bandwidth of a single physical NIC in the team. I prefer LACP teams (teaming mode) because they are dynamic (and require minimal physical switch configuration). However, LACP is a switch dependent mode, and spanning the two SAN switches with one team requires switch stacking. If that’s not your configuration, then you should use Switch Independent (which requires no switch configuration) instead of LACP.
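For reference, here is a minimal PowerShell sketch of the team creation. The team name and the physical NIC names (“iSCSI-pNIC1”, “iSCSI-pNIC2”) are illustrative assumptions, and Switch Independent is shown as the safe default:

    # Create the iSCSI NIC team with Hyper-V Port load distribution.
    # Use -TeamingMode Lacp instead if your SAN switches are stacked and configured for LACP.
    New-NetLbfoTeam -Name "iSCSI-Team" `
        -TeamMembers "iSCSI-pNIC1", "iSCSI-pNIC2" `
        -TeamingMode SwitchIndependent `
        -LoadBalancingAlgorithm HyperVPort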

The resulting team interface will appear in Network Connections (Control Panel). Use this interface to connect a new external virtual switch that will be dedicated to iSCSI traffic. Don’t create the virtual switch until you have decided how you will implement QoS, because the minimum bandwidth mode of a virtual switch can only be set when the switch is created.
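A sketch of the switch creation, assuming Weight-based minimum bandwidth QoS and reusing the illustrative “iSCSI-Team” and “iSCSISwitch” names (AllowManagementOS is disabled because the management OS vNICs are created explicitly in the next section):

    # Create the external virtual switch on the iSCSI team.
    # -MinimumBandwidthMode can only be set here, at creation time.
    New-VMSwitch -Name "iSCSISwitch" `
        -NetAdapterName "iSCSI-Team" `
        -MinimumBandwidthMode Weight `
        -AllowManagementOS $false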

The Management OS (Host)

The host does not have two NICs dedicated to its own iSCSI needs. Instead, it will share the bandwidth of the NIC team with the guests (VMs) running on the host. That sharing will be controlled using Quality of Service (QoS) minimum bandwidth rules (covered later in the post).

The host will need two NICs of some kind, each one on a different iSCSI subnet. To do this (a PowerShell sketch follows below):

  1. Create 2 management OS virtual NICs
  2. Connect them to the iSCSI virtual switch
  3. Bind each management OS virtual NIC to a different iSCSI SAN VLAN ID
  4. Apply the appropriate IPv4/v6 configurations to the iSCSI virtual NICs in the management OS Control Panel
  5. Configure iSCSI/MPIO/DSM as usual in the management OS, using the virtual NICs

Do not configure/use the physical iSCSI NICs! Your iSCSI traffic will originate in the management OS virtual NICs, flow through the virtual switch, then the team, then the physical NICs, and back again.
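A PowerShell sketch of steps 1-4 above. The vNIC names, VLAN IDs (101 and 102) and IP addresses are illustrative assumptions; substitute your own SAN values:

    # Steps 1 & 2: create two management OS vNICs on the iSCSI virtual switch
    Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -SwitchName "iSCSISwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -SwitchName "iSCSISwitch"

    # Step 3: bind each management OS vNIC to a different iSCSI SAN VLAN
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-A" -Access -VlanId 101
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-B" -Access -VlanId 102

    # Step 4: assign IP configuration to the resulting vEthernet adapters
    New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-A)" -IPAddress 10.10.1.21 -PrefixLength 24
    New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-B)" -IPAddress 10.10.2.21 -PrefixLength 24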

The Virtual Machines

Create a pair of virtual NICs in each virtual machine that requires iSCSI connected storage.

Note: Remember that you lose virtualisation features with this type of storage, such as snapshots (yuk anyway!), VSS backup from the host (a very big loss), and Hyper-V Replica.  Consider using virtual storage that you can replicate using Hyper-V Replica.

The process for the virtual NICs in the guest OS of the virtual machine will be identical to the management OS process. Connect each iSCSI virtual NIC in the VM to the iSCSI virtual switch (see the diagram). Configure a VLAN ID for each virtual NIC, connecting one to each iSCSI VLAN (subnet) – this is done in Hyper-V Manager (or PowerShell) and is controlled by the virtualisation administrators (a sketch follows the list below). In the guest OS:

  • Configure the IP stack of the virtual NICs, appropriate to their VLANs
  • Configure iSCSI/MPIO/DSM as required by the SAN manufacturer
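The Hyper-V side of that (adding the vNICs and VLAN IDs to the VM) can be scripted on the host. A sketch, assuming a hypothetical guest cluster node named “GuestClusterNode1” and the illustrative VLAN IDs used earlier:

    # Add two iSCSI vNICs to the VM and connect them to the iSCSI virtual switch
    Add-VMNetworkAdapter -VMName "GuestClusterNode1" -Name "iSCSI-A" -SwitchName "iSCSISwitch"
    Add-VMNetworkAdapter -VMName "GuestClusterNode1" -Name "iSCSI-B" -SwitchName "iSCSISwitch"

    # Place one vNIC on each iSCSI VLAN (subnet)
    Set-VMNetworkAdapterVlan -VMName "GuestClusterNode1" -VMNetworkAdapterName "iSCSI-A" -Access -VlanId 101
    Set-VMNetworkAdapterVlan -VMName "GuestClusterNode1" -VMNetworkAdapterName "iSCSI-B" -Access -VlanId 102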

Now you can present LUNs to the VMs.
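For reference, the guest OS bullets above might look something like this rough sketch. The portal and initiator addresses are hypothetical, and your SAN manufacturer’s documented procedure always takes precedence:

    # Enable MPIO and let the Microsoft DSM claim iSCSI devices
    Install-WindowsFeature Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI

    # Start the iSCSI initiator service and connect via both subnets
    Set-Service MSiSCSI -StartupType Automatic
    Start-Service MSiSCSI
    New-IscsiTargetPortal -TargetPortalAddress 10.10.1.100 -InitiatorPortalAddress 10.10.1.31
    New-IscsiTargetPortal -TargetPortalAddress 10.10.2.100 -InitiatorPortalAddress 10.10.2.31
    Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true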

Quality of Service (QoS)

QoS will reserve a minimum amount of bandwidth on the iSCSI NICs for each connection. You’re using a virtual switch, so you will implement QoS in the virtual switch. Guarantee a certain amount for each of the management OS (host) virtual NICs; this has to be enough for all of the storage requirements of the host (that is, the virtual machines running on that host). You can choose one of two approaches for the VMs:

  • Create an explicit policy for each virtual NIC in each virtual machine – more engineering and maintenance required
  • Create a single default bucket policy on the virtual switch that applies to all connected virtual NICs that don’t have an explicit QoS policy

This virtual switch policy gives the host administrator control, regardless of what a guest OS admin does. Note that you can also apply classification and tagging policies in the guest OS to be applied by the physical network. There’s no point applying rules in the OS Packet Scheduler, because the only traffic on these two NICs should be iSCSI.
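A sketch of both QoS pieces, using illustrative weights (30 for each management OS iSCSI vNIC and 40 as the default bucket for all VM vNICs); tune the weights to your own host-versus-guest storage balance:

    # Guarantee minimum bandwidth for each management OS iSCSI vNIC
    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -MinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -MinimumBandwidthWeight 30

    # Default bucket policy for all VM vNICs that have no explicit QoS policy
    Set-VMSwitch -Name "iSCSISwitch" -DefaultFlowMinimumBandwidthWeight 40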

Note: remember to change the NIC binding order in the host management OS and guest OSs so the iSCSI NICs are at the bottom of the order.

Support?

I checked with the Microsoft PMs because this configuration is nothing like any of the presented or shared designs. This design appears to be OK with Microsoft.

For those of you who are concerned about NIC teaming and MPIO: in this design, MPIO has no visibility of the NIC team that resides underneath the virtual switch, so there is no support issue.

Please remember:

  • Use the latest stable drivers and firmware
  • Apply any shared hotfixes (not just Automatic Updates via WSUS, etc) if they are published
  • Do your own pre-production tests
  • Do a pilot test
  • Your SAN manufacturer will have the last say on support for this design

EDIT1:

If you wanted, you could use a single iSCSI virtual NIC in the management OS and in the guest OS, without MPIO. You still get the path fault tolerance that MPIO would otherwise provide, thanks to the NIC teaming. Cluster validation would give you a warning (not a fail), and the SAN manufacturer might get their knickers in a twist over the lack of dual subnets and MPIO.

And … check with your SAN manufacturer for guidance on the subnets, because not all have the same requirements.

22 thoughts on “A WS2012 Hyper-V Converged Fabric Design With Host And Guest iSCSI Connections”

  1. Wouldn’t it be “even more” supported if you would create two iSCSI virtual switches each bound to a iSCSI physical NIC, and then split bind both the host and the guests to each of those switches, so that there is no chance that teaming and MPIO get in each other’s way?

    1. The posted configuration is supported by Microsoft so there is no “even more supported”. NIC teaming and MPIO -do not- interfere with each other in the posted solution.

  2. Is there an advantage to having vNICs in the Management OS (host) for iSCSI? Could you instead have the host use the physically teamed pNICs (iSCSI NIC Team) and then have the guest VMs’ vNICs map to the iSCSI virtual switch, which would have the box “Allow management OS to share the network adapter” checked/enabled?

    1. By using the sharing option, you are actually creating a virtual NIC, as in my architecture. However, you’ve only created one virtual NIC (which might be OK for your SAN manufacturer’s support policy – or not). You also haven’t considered QoS! QoS is critical in this design, and you have to enable that during the creation of the virtual switch.

      1. Ahh yes, I forgot when you map a virtual network to a host pNIC, it turns the pNIC into a vNIC as well because of Hyper-V. OK, so that means you’re creating an additional vNIC on the host to map to that same virtual switch to support MPIO from the host. I think I got it now.

        Does the host’s GUI for network adapters actually show all the additional parent vNICs you created? Also, can you create the host/parent vNICs from the GUI or only in powershell?

        QoS I have never used so I won’t even touch that one heheh. Thanks as always for the great info and feedback.

        1. > Ahh yes, I forgot when you ….

          No, not really. It doesn’t do anything to the pNIC. It creates a new vNIC in the management OS, and connects it to the virtual switch. That’s how the sharing process is accomplished.

          Yes, the new management OS virtual NIC appears in Networks (Control Panel). PowerShell only.

          If you’re doing this for iSCSI, then you need QoS to ensure that both the host and the VMs get their fair share of the storage bandwidth.

  3. What is the best way to give an SCVMM 2012 SP1 VM access to the storage network? I only need to give this one VM access and not a whole load of guest VMs. Are we still safer to create a new cluster for management?

    1. If there’s only going to be VMM connected to a LUN on the SAN, then this architecture would be overkill. Maybe you’d be best with a *gasp* passthrough disk for the library or using a dedicated machine for VMM. If you’re going the whole System Center suite (and you might as well because you’re buying it all) and you’re big enough, then dedicated host(s) for System Center VMs seems like the way to go. Then this architecture might make sense. There’s a whole lot of if-then-elses depending on the scale and complexity of your design.

      1. Hi Aidan,
        We’ve been recommended this design as a way to leverage the SAN vendor’s SMP integration features for a single VMM VM, like Christian’s example above. There will be no other VM which needs access to the storage. Does this end up having a detrimental effect on the other VMs, as the host itself is now having to access the storage via a vNIC and I swear I read that vNIC implementation for the host partition (unlike for VMs) doesn’t support vRSS. Does this mean the host has ‘worse’ access to the storage than if it was presented via physical NICs?

        1. Management OS vNICs do not have vRSS support and will have limited bandwidth on 10 GbE or faster – between 2.5 and 6 Gbps.

  4. Hi Aidan,
    I may be being a bit pedantic here, but say I have 6 x 1 GbE NICs in a blade and use two for basic iSCSI MPIO. Albeit the bandwidth is greatly reduced, will using the CSV, LM, Host and VM networks across a 4 x 1 GbE NIC team still be supported!?

  5. I read that you shouldn’t use NIC teaming underneath for the simple reason that a NIC failover takes relatively much longer than using MPIO only… I’ve tried both and was able to confirm that. Although in real life it doesn’t really matter that much, since team failover appears quick enough before MPIO staggers with timeouts… Still, using MPIO only is just that “little” bit more resilient to failures, which is why I stuck with 1pNIC->vSw->vNIC(OS)->1vNIC(VM)x2. If Microsoft supports teams underneath, I’m afraid a lot of people will start thinking they can easily aggregate 6-8 NICs in one team underneath, despite the explicit config described here, which seriously makes me wonder if this config would still be MS supported if done, say, with 4 NICs in one team underneath?

  6. Hi Aidan, in the iSCSI case, is sharing the virtual switch with the management OS ‘supported’ for Windows Server 2008 R2? We will end up with a virtual switch that is used for iSCSI by both the management OS and the VMs.

    1. Where is the W2008 R2 installed? If it’s a guest OS in my WS2012 host design, yes. As long as you’re not directly teaming the iSCSI vNICs then it’s good.

  7. Can you provide some more details on your note

    “Note: Remember that you lose virtualisation features with this type of storage, such as snapshots (yuk anyway!), VSS backup from the host (a very big loss), and Hyper-V Replica. Consider using virtual storage that you can replicate using Hyper-V Replica.”

    In particular about VSS backup from the host.

    I can’t find anything that talks about this, but I have a guest cluster losing its CSVs every time a backup is done.

    1. Yes, assuming that you are talking about Hyper-V Network Virtualization (SDN). A lot, because VMs in a virtual network (SDN) are isolated from other vNets and the provider network. And that’s a MUCH bigger subject.

  8. ISCSIa and ISCSIb (the physical NICs) are in ISCSI_TEAM, which is switch independent with Hyper-V Port load balancing. ISCSISwitch is created off that team and is untagged. ISCSI-A and ISCSI-B are untagged and used for the parent. I have a single VLAN for iSCSI (yeah, yeah, I know) that needs to get through, so trunking and the like are unnecessary at this point. GuestISCSI is the vNIC for storage on the guest. I am getting no connectivity on my vNICs on the parent or the guest.

    I’ve tried setting mode to access vlanid 0, 7 (the one I need), trunking 7, and untagging. No combination is getting this working.

    What did I miss here?
