Note: This post was originally written using the Windows Server "8" (aka 2012) Beta.  The PowerShell cmdlets have changed in the Release Candidate and this code has been corrected to suit it.

After the posts of the last few weeks, I thought I’d share a script that I am using to build converged fabric hosts in the lab.  Some notes:

  1. You have installed Windows Server 2012 on the machine.
  2. You are either on the console or using something like iLO/DRAC to get KVM access.
  3. All NICs on the host will be used for the converged fabric.  You can tweak this.
  4. This will not create a virtual NIC in the management OS (parent partition or host OS).
  5. You will make a different copy of the script for each host in the cluster to change the IPs.
  6. You could strip out all but the Host-Parent NIC to create a converged fabric for a standalone host with 2 or 4 * 1 GbE NICs.

And finally … Microsoft has not published best practices yet, and this is still pre-release software.  Please verify that you are following best practices before you use this script.
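The script below binds the virtual switch to a NIC team named “ConvergedNetTeam”, so that team must already exist.  A minimal sketch of creating it with the WS2012 inbox teaming cmdlets (the member NIC names here are placeholders — check yours with Get-NetAdapter):

```powershell
# Create the NIC team that the virtual switch will bind to.
# "Ethernet 1" and "Ethernet 2" are placeholder adapter names.
New-NetLbfoTeam -Name "ConvergedNetTeam" `
    -TeamMembers "Ethernet 1", "Ethernet 2" `
    -TeamingMode SwitchIndependent `
    -LoadBalancingAlgorithm HyperVPort
```

Hyper-V Port load balancing is a common choice when the team will carry a virtual switch, but pick the teaming mode and algorithm that suit your physical switches.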

OK…. here we go.  Watch out for the line breaks if you copy & paste:

Write-Host "Creating virtual switch with QoS enabled"
New-VMSwitch "ConvergedNetSwitch" -MinimumBandwidthMode Weight -NetAdapterName "ConvergedNetTeam" -AllowManagementOS 0

Write-Host "Setting default QoS policy"
Set-VMSwitch "ConvergedNetSwitch" -DefaultFlowMinimumBandwidthWeight 10

Write-Host "Creating virtual NICs for the management OS"
Add-VMNetworkAdapter -ManagementOS -Name "Host-Parent" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-Parent" -MinimumBandwidthWeight 10

Add-VMNetworkAdapter -ManagementOS -Name "Host-Cluster" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-Cluster" -MinimumBandwidthWeight 10

Add-VMNetworkAdapter -ManagementOS -Name "Host-LiveMigration" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-LiveMigration" -MinimumBandwidthWeight 20

Add-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI1" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI1" -MinimumBandwidthWeight 10

#Add-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI2" -SwitchName "ConvergedNetSwitch"
#Set-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI2" -MinimumBandwidthWeight 15

Write-Host "Waiting 30 seconds for virtual devices to initialise"
Start-Sleep -s 30

Write-Host "Configuring IPv4 addresses for the management OS virtual NICs"
#Substitute each <placeholder> with the correct address for this host
New-NetIPAddress -InterfaceAlias "vEthernet (Host-Parent)" -IPAddress <ParentIP> -PrefixLength 24 -DefaultGateway <GatewayIP>
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Host-Parent)" -ServerAddresses <DNSServerIP>

New-NetIPAddress -InterfaceAlias "vEthernet (Host-Cluster)" -IPAddress <ClusterIP> -PrefixLength 24

New-NetIPAddress -InterfaceAlias "vEthernet (Host-LiveMigration)" -IPAddress <LiveMigrationIP> -PrefixLength 24

New-NetIPAddress -InterfaceAlias "vEthernet (Host-iSCSI1)" -IPAddress <iSCSI1IP> -PrefixLength 24

#New-NetIPAddress -InterfaceAlias "vEthernet (Host-iSCSI2)" -IPAddress <iSCSI2IP> -PrefixLength 24

That will set up the following architecture: the physical NICs are teamed into ConvergedNetTeam, the ConvergedNetSwitch virtual switch sits on top of the team, and the Host-Parent, Host-Cluster, Host-LiveMigration, and Host-iSCSI1 virtual NICs connect to that switch in the management OS.

QoS is set up as follows:

  • The default (flows with no explicit weight): 10% minimum
  • Parent: 10%
  • Cluster: 10%
  • Live Migration: 20%
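A quick way to sanity-check those weights after the script has run (a sketch — BandwidthSetting is the property Get-VMNetworkAdapter exposes for the per-vNIC QoS settings):

```powershell
# Confirm the default flow weight on the switch
Get-VMSwitch "ConvergedNetSwitch" | Format-List Name, DefaultFlowMinimumBandwidthWeight

# Confirm the per-vNIC minimum weights in the management OS
Get-VMNetworkAdapter -ManagementOS |
    Select-Object Name, @{Name="MinWeight"; Expression={$_.BandwidthSetting.MinimumBandwidthWeight}}
```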

My lab has a single VLAN network.  In production, you should have VLANs and trunk the physical switch ports.  Then (I believe) you’ll need to add a line for each virtual NIC in the management OS (host) to specify the right VLAN (I’ve not tested this line yet on the RC release of WS2012 – watch out for the -VMNetworkAdapterName parameter):

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Host-Parent" -Trunk -AllowedVlanIdList 101 -NativeVlanId 0

Now you have all the cluster connections you need, with NIC teaming, using perhaps 2 * 10 GbE, 4 * 1 GbE, or even 4 * 10 GbE if you’re lucky.

11 comments so far

  1. No luck on getting the Set-VMNetworkAdapterVLAN command to work. Any luck on your side with RC?

  2. I got it working; you have to specify -NativeVlanId in addition to -AllowedVlanIdList when using -Trunk

    • I’ve been tied up with writing. Deadline for last night …. which I’ll make tonight :)

  3. This works in Windows 8, I assume Server 2012 as well:

    Add-VMNetworkAdapter -ManagementOS -Name Storage -SwitchName "New Virtual Switch"
    Get-VMNetworkAdapter -ManagementOS -Name Storage | Set-VMNetworkAdapterVlan -Access -VlanId 213

    Trunk commands do require the NativeVlanID and AllowedVlanIDList as well.

  4. If I have two 10G Broadcom NICs on my server, can I team the two NICs using WS2012 native NIC teaming and create virtual switches to use for Management, Cluster, VM traffic, and also for iSCSI traffic?
    iSCSI only for VM in-guest clustering.
    Can I create iSCSI virtual NICs on teamed physical NICs? Do Microsoft best practices allow this?

    • I’ve discussed this plenty on my blog. There’s a search option on the top right.

  5. Tried to deploy the described scenario in my lab and… some time later VMM 2012 SP1 crashed all those dreams in one click :) I suppose the converged fabric scenario is not supported by VMM – all I did was clear the “Allow host access using VLAN: 0″ checkbox (Host properties -> Virtual Switches -> ConvergedNetSwitch). Then VMM killed all the vNICs created in PowerShell and left the host on its own without connections.

    • VMM network deployment is incompatible with any networking you deploy directly on a host. It will overwrite it, and likely break VM connectivity. You have to choose one or the other.

  6. Is this for outgoing traffic only, or does this work for incoming too?
    If so, do you have to configure switches to get the most out of this?

    • Outgoing. You can tag packets for the switches to apply QoS, too.

  7. Just wondering which switch independent teaming mode would be used in this scenario, seeing as VMs (usually Hyper-V Port) would also be connected?
