We continue further down the road of understanding converged fabrics in WS2012 Hyper-V. The following diagram illustrates a possible design goal:
Let's go through the diagram of this clustered Windows Server 2012 Hyper-V host:
- In case you're wondering, this example is using SAS or FC attached storage, so it doesn't require Ethernet NICs for iSCSI. Don't worry, iSCSI fans – I'll come to that topic in another post.
- There are two 10 GbE NICs in a NIC team. We covered that already.
- There is a Hyper-V Extensible Switch that is connected to the NIC team. OK.
- Two VMs are connected to the virtual switch. Nothing unexpected there!
- Huh! The host, or the parent partition, has 3 NICs for cluster communications/CSV, management, and live migration. But … they're connected to the Hyper-V Extensible Switch?!? That's new! They used to require physical NICs.
In Windows Server 2008 a host with this storage would require the following NICs as a minimum:
- Parent (Management)
- VM (for the Virtual Network, prior to the Virtual Switch)
- Cluster Communications/CSV
- Live Migration
Four NICs per host as a minimum … and that's without NIC teaming. Add teaming and you double the NICs! But all that accumulation of NICs wasn't a matter of bandwidth. What we really care about in clustering is quality of service: bandwidth when we need it and low latency. Converged fabrics assume we can guarantee those things. If we have those SLA features available to us (more in later posts), then 2 * 10 GbE physical NICs in each clustered host might be enough, depending on the business and technology requirements of the site.
The number of NICs goes up. The number of switch ports goes up. The cost of wasted rack space goes up. The power bill for all of that goes up. The support cost for your network goes up. In truth, the complexity goes up.
NICs aren't important. Quality communications channels are important.
In this WS2012 converged fabrics design, we can create virtual NICs that attach to the Virtual Switch. That's done by using the Add-VMNetworkAdapter PowerShell cmdlet, for example:
Add-VMNetworkAdapter -ManagementOS -Name "Manage" -SwitchName External1
… where Manage will be the name of the new vNIC and External1 is the name of the Virtual Switch. The -ManagementOS parameter tells the cmdlet that the new vNIC is for the parent partition, i.e. the host OS.
You can then:
- Configure the vNIC using Set-VMNetworkAdapter
- Specify the VLAN for the vNIC using Set-VMNetworkAdapterVlan
- Configure IPv4/IPv6
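As a sketch of those three steps (the vNIC name comes from the example above; the VLAN ID and IP address are my own placeholder values, not a recommendation):

```
# Give the Manage vNIC a minimum bandwidth weight (example value; QoS is a later post's topic)
Set-VMNetworkAdapter -ManagementOS -Name "Manage" -MinimumBandwidthWeight 10

# Bind the Manage vNIC to VLAN 101 in access mode (VLAN ID is an example)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Manage" -Access -VlanId 101

# Assign an IPv4 address; host vNICs appear in the OS as "vEthernet (<name>)"
New-NetIPAddress -InterfaceAlias "vEthernet (Manage)" -IPAddress 192.168.1.21 -PrefixLength 24
```

Note that the bandwidth weight only takes effect if the Virtual Switch was created with weight-based minimum bandwidth mode.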
I think configuring the VLAN binding of these NICs with port trunking (or whatever) would be the right way to go with this. That will further isolate the traffic on the physical network. Please bear in mind that we're still in the beta days and I haven't had a chance to try this architecture yet.
Armed with this knowledge and these cmdlets, we can now create all the NICs we need that connect to our converged physical fabrics. Next we need to look at securing and guaranteeing quality levels of communications.
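Pulling it all together, a sketch of building the diagram's fabric from scratch might look like the following. All of the names (Team1, NIC1/NIC2, External1, the vNIC names) are examples I've chosen for illustration:

```
# Team the two 10 GbE physical NICs (teaming was covered in the earlier post)
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2"

# Create the Virtual Switch on top of the team. -AllowManagementOS $false
# stops the cmdlet creating a default parent vNIC, because we'll add our own.
New-VMSwitch -Name "External1" -NetAdapterName "Team1" -AllowManagementOS $false

# One parent-partition vNIC each for management, cluster/CSV, and live migration
Add-VMNetworkAdapter -ManagementOS -Name "Manage" -SwitchName "External1"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "External1"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "External1"
```

Each vNIC would then get its own VLAN binding and IP configuration as described above.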
This blog post is the property of Aidan Finn (@joe_elway / http://www.aidanfinn.com) and may not be reused in any manner without prior consent of Aidan Finn. You may quote one paragraph from this blog post if you link to the original blog post.