Note: This post was originally written using the Windows Server “8” (aka 2012) Beta. The PowerShell cmdlets have changed in the Release Candidate and this code has been corrected to suit it.
After the posts of the last few weeks, I thought I’d share a script that I am using to build converged fabric hosts in the lab. Some notes:
- You have installed Windows Server 2012 on the machine.
- You are either on the console or using something like iLO/DRAC to get KVM access.
- All NICs on the host will be used for the converged fabric. You can tweak this.
- This will not create a virtual NIC in the management OS (parent partition or host OS).
- You will make a different copy of the script for each host in the cluster to change the IPs.
- You could strip out all but the Host-Parent NIC to create a converged fabric for a standalone host with 2 or 4 * 1 GbE NICs.
And finally: Microsoft has not published best practices yet, and this is still a pre-release version. Please verify that you are following best practices before you use this script.
OK…. here we go. Watch out for the line breaks if you copy & paste:
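The script below assumes a NIC team named ConvergedNetTeam already exists. If you haven’t built it yet, WS2012 native teaming can create it with something like this (the member NIC names are examples; substitute the names shown by Get-NetAdapter on your host):

```
# Hypothetical team creation - adjust member NIC names to suit your hardware
New-NetLbfoTeam -Name "ConvergedNetTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
```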
Write-Host "Creating virtual switch with QoS enabled"
New-VMSwitch "ConvergedNetSwitch" -MinimumBandwidthMode Weight -NetAdapterName "ConvergedNetTeam" -AllowManagementOS $false
Write-Host "Setting default QoS policy"
Set-VMSwitch "ConvergedNetSwitch" -DefaultFlowMinimumBandwidthWeight 10
Write-Host "Creating virtual NICs for the management OS"
Add-VMNetworkAdapter -ManagementOS -Name "Host-Parent" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-Parent" -MinimumBandwidthWeight 10
Add-VMNetworkAdapter -ManagementOS -Name "Host-Cluster" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-Cluster" -MinimumBandwidthWeight 10
Add-VMNetworkAdapter -ManagementOS -Name "Host-LiveMigration" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-LiveMigration" -MinimumBandwidthWeight 20
Add-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI1" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI1" -MinimumBandwidthWeight 10
#Add-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI2" -SwitchName "ConvergedNetSwitch"
#Set-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI2" -MinimumBandwidthWeight 15
Write-Host "Waiting 30 seconds for virtual devices to initialise"
Start-Sleep -s 30
Write-Host "Configuring IPv4 addresses for the management OS virtual NICs"
New-NetIPAddress -InterfaceAlias "vEthernet (Host-Parent)" -IPAddress 192.168.1.51 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Host-Parent)" -ServerAddresses "192.168.1.40"
New-NetIPAddress -InterfaceAlias "vEthernet (Host-Cluster)" -IPAddress 172.16.1.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Host-LiveMigration)" -IPAddress 172.16.2.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Host-iSCSI1)" -IPAddress 10.0.1.55 -PrefixLength 24
#New-NetIPAddress -InterfaceAlias "vEthernet (Host-iSCSI2)" -IPAddress 10.0.1.56 -PrefixLength 24
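Once the script has run, you can sanity-check the results before moving on. These are just the matching Get-* cmdlets; they change nothing:

```
# Confirm the management OS vNICs landed on the converged switch
Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName
# Confirm the IPv4 addressing on each vEthernet interface
Get-NetIPAddress -AddressFamily IPv4 | Format-Table InterfaceAlias, IPAddress, PrefixLength
```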
That will set up the following architecture:
QoS is set up as follows:
- The default (unspecified flows) is 10% minimum
- Parent: 10%
- Cluster: 10%
- Live Migration: 20%
- iSCSI: 10%
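You can read the weights back to confirm they took effect. This sketch assumes the vNIC objects expose a BandwidthSetting property, as they did on my RC build:

```
# Default weight for flows without their own policy
Get-VMSwitch "ConvergedNetSwitch" | Format-List Name, DefaultFlowMinimumBandwidthWeight
# Per-vNIC minimum bandwidth weights
Get-VMNetworkAdapter -ManagementOS | ForEach-Object { "{0}: {1}" -f $_.Name, $_.BandwidthSetting.MinimumBandwidthWeight }
```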
My lab has a single VLAN network. In production, you should have VLANs and trunk the physical switch ports. Then (I believe) you’ll need to add a line for each virtual NIC in the management OS (host) to specify the right VLAN. I’ve not tested this line yet on the RC release of WS2012 (watch out for the -VMNetworkAdapterName parameter):
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Host-Parent" -Trunk -AllowedVlanIdList 101
Now you have all the cluster connections you need, with NIC teaming, using 2 * 10 GbE, 4 * 1 GbE, or maybe even 4 * 10 GbE if you’re lucky.
No luck on getting the Set-VMNetworkAdapterVLAN command to work. Any luck on your side with RC?
I got it working; you have to specify -NativeVlanId in addition to -AllowedVlanIdList when using -Trunk
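Putting that fix together with the earlier example, the working trunk command would look something like this (the VLAN IDs are examples; use your own):

```
# -NativeVlanId is required alongside -AllowedVlanIdList when using -Trunk
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Host-Parent" -Trunk -NativeVlanId 101 -AllowedVlanIdList 101
```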
I’ve been tied up with writing. Deadline for last night …. which I’ll make tonight 🙂
This works in Windows 8, I assume Server 2012 as well:
Add-VMNetworkAdapter -ManagementOS -Name Storage -SwitchName "New Virtual Switch"
Get-VMNetworkAdapter -ManagementOS -Name Storage | Set-VMNetworkAdapterVlan -Access -VlanId 213
Trunk commands do require -NativeVlanId and -AllowedVlanIdList as well.
If I have two 10 GbE Broadcom NICs on my server, can I team the two NICs using WS2012 native NIC teaming and create virtual switches to use for management, cluster, VM traffic, and also for iSCSI traffic?
iSCSI is only for VM in-guest clustering.
Can I create iSCSI virtual NICs on teamed physical NICs? Do Microsoft best practices allow this?
I’ve discussed this plenty on my blog. There’s a search option on the top right.
Tried to deploy the described scenario in my lab and… some time later VMM 2012 SP1 crushed all those dreams in one click 🙂 I suppose the converged fabric scenario is not supported by VMM. All I did was clear the “Allow host access using VLAN: 0” checkbox (Host properties -> Virtual Switches -> ConvergedNetSwitch). Then VMM killed all the vNICs created in PowerShell and left the host on its own, without connections.
VMM network deployment is incompatible with any networking you deploy directly on a host. It will overwrite it, and likely break VM connectivity. You have to choose one or the other.
Is this for outgoing traffic only, or does it work for incoming too?
If so, do you have to configure the switches to get the most out of this?
Outgoing. You can tag packets for the switches to apply QoS, too.
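As an illustration of that tagging, the NetQoS module can mark a traffic class with an 802.1p priority that the physical switches can then act on. This is a sketch; the priority value is an example and your switch config must honour it:

```
# Tag live migration packets with 802.1p priority 5 (example value)
New-NetQosPolicy -Name "Live Migration" -LiveMigration -PriorityValue8021Action 5
```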
Just wondering whether switch-independent teaming would be used in this scenario, seeing as VMs (usually with Hyper-V Port load balancing) would also be connected?
I’ve a team of two physical NICs with three virtual NICs presented to the OS. The associated physical switch ports are trunked. I’ve tried a couple of variations of the Set-VMNetworkAdapterVlan cmdlet, but I am unable to communicate with the virtual NICs. On each host, I can ping the relevant IP addresses for each virtual NIC, but cannot ping beyond the host. Any suggestions welcome.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 151
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Trunk -NativeVlanId 151 -AllowedVlanIdList 151