Assuming that you converge all fabrics (including iSCSI, which may require DCB support in the NICs and physical switches), my recent work in the lab has given me another reason to like converged fabrics, beyond using fewer NICs.
If I bind roles (parent, live migration, etc.) to physical NICs, then any host networking configuration script that I write must determine which NIC is which. That is not easy, and it is vulnerable to human cabling error, especially if hardware configurations change.
If, however, I bind all my NICs into a team and then build a converged fabric on that team, I have completely abstracted the physical networks from the logical connections. Virtual management OS NICs and trunking/VLAN bindings mean I don’t care any more … I just need 2 or 4 NICs in my team, connected to my switch.
Now that physical bindings don’t matter, I have simplified my configuration and I can script my deployments and configuration to my heart’s content!
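As a rough illustration of why the scripting gets so much simpler, here is a minimal PowerShell sketch of the pattern: team the NICs, build one converged virtual switch on the team, and carve out management OS virtual NICs with VLAN and QoS settings. The NIC names, team name, VLAN IDs, and bandwidth weights below are all assumptions for the example — substitute your own, and note that this must run elevated on a host with the Hyper-V role installed.

```powershell
# Assumption: the host has (at least) two identically connected NICs.
# Because the team abstracts the physical layer, we don't care which is which.
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# One external virtual switch on the team, with weight-based QoS.
# -AllowManagementOS $false: we'll add our own management OS vNICs next.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -AllowManagementOS $false -MinimumBandwidthMode Weight

# Virtual management OS NICs for each role — the "converged fabric".
Add-VMNetworkAdapter -ManagementOS -Name "Parent"        -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"

# VLAN bindings per role (example VLAN IDs — use your own).
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 102
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster"       -Access -VlanId 103

# Minimum bandwidth weights so no role starves another (example values).
Set-VMNetworkAdapter -ManagementOS -Name "Parent"        -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
```

Notice that nothing in the script references a specific physical port or cabling layout; the only host-specific inputs are the team member names, and even those could be discovered with `Get-NetAdapter` rather than hard-coded.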
The only question that remains … do I really converge my iSCSI connections? More to come …
This blog post is the property of Aidan Finn (@joe_elway / http://www.aidanfinn.com) and may not be reused in any manner without prior consent of Aidan Finn. You may quote one paragraph from this blog post if you link to the original blog post.
- Windows Server 2012 Hyper-V Converged Fabrics & Remote Host Engineering
- Comparing Methods To Implement Converged Fabrics For Windows Server 2012 Hyper-V
- A WS2012 Hyper-V Converged Fabric Design With Host And Guest iSCSI Connections
- PowerShell Script To Create A Converged Fabric For Clustered Windows Server 2012 Hyper-V Host