2013.03.27

I’ve done a lot of posts over the last year on converged fabrics in Windows Server 2012 Hyper-V, not to mention nearly 100 pages on the topic in the new Hyper-V book.  Pretty much all of them centre on using PowerShell to create your converged fabric in the management OS of the host itself.  But doing this is just one of the three ways (that I know of) to create a converged fabric.  This topic has come up several times in conversation and in blog comments over the past month, so I thought I’d explore it a bit.

Using Hyper-V PowerShell in the Management OS

The benefit of implementing converged fabrics in the management OS is that, with a pretty simple script, you can implement one design across an entire data centre no matter what hardware vendor you choose, or whether you have rack servers here and blade servers there.  It’s the same every time, varying only with the physical NIC design.  It also uses technology that’s built into the virtualisation solution, so there is no dependency on additional expensive hardware.  And it’s software defined.  We like software-defined-anything right now because it is flexible.  In theory (and in practice, as you’ll soon see) we can change it from a central point when the need arises.  That’s not the case with hardware defined solutions.

There is a concern for some about dependability.  All this MSFT networking is very new.  Can you build mission critical systems on it?  Some want to take the time to learn it a bit more before deploying it.
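To make that concrete, here’s a minimal sketch of the kind of script I’m talking about.  Every adapter name, VLAN ID, and bandwidth weight below is an example made up for illustration; adjust them to match your own design and physical NICs.

# Team two physical 10 GbE NICs (adapter names are examples)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Create the external virtual switch on the team with weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Carve out management OS virtual NICs for each fabric
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

# Tag the VLANs and assign minimum bandwidth weights (example values)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 101
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 102
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 103
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10

After that it’s just a matter of setting IP addresses on the resulting vEthernet adapters (New-NetIPAddress and friends), and the same script runs unchanged on any host with two teamable NICs.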

Hardware Network Appliances

An older option that’s been used for quite a while is to use hardware networking appliances to create converged fabrics, such as FlexFabric by HP (and others).  In the case of FlexFabric, with a pair of EUR 18K Virtual Connects you can carve up your 2 * 10 GbE blade server NICs into multiple 1 GbE NICs.  The benefit here is that you do the carving once per blade chassis, with up to 8 or 16 blades per chassis.  It’s also a hardware appliance, which means there is no CPU cost to implementing QoS in the management OS (as minor as that might be).  But importantly, there is a support policy from the hardware vendor – assuming that (a) you pay for the support and (b) the hardware is not more than 3 years old.

On the downside, hardware based solutions are very expensive.  That’s an issue when you’re looking at cloud computing and cross-charging, especially for public clouds where every capital expense makes your customer charges less competitive.  You’re also tied to that hardware vendor (thus impacting your future bid pricing) and possibly even to that model of server.  And blades are not the most cost effective way to rack out a data centre – walk into any substantial modern cloud and I bet you’ll see a hell of a lot more 1U and 2U rack servers than anything else!  The solution is hardware defined, and that makes it inflexible.  You set it per rack using the tools provided by the hardware manufacturer.  That’s not necessarily the most cloud integrated solution around.  I’d rather have control of the stack from top-to-bottom.

I’ve never used this approach so I don’t know where the NIC teaming is done or if you have to use the not-Microsoft-supported 3rd party software.  In the end, the networking will probably appear like it did in W2008 R2 Hyper-V.

VMM 2012 SP1 Logical Switch

There is a third option … which is related to a blog comment I got recently.  You can deploy a software defined converged fabric from System Center 2012 Virtual Machine Manager SP1 (VMM 2012 SP1).  Instead of deploying the WS2012 Hyper-V converged fabric from within the management OS, you create and deploy a logical switch from VMM.  You can do this in two ways:

  • As a part of bare metal host build
  • Or deploy it to an existing host … and overwrite the existing networking config on that host

Using VMM gives you all the benefits of software defined converged fabrics, as in the aforementioned PowerShell option.  However, there’s a lot of stuff to create first in VMM: logical networks and network sites, IP pools, port profiles, port classifications, and the logical switch itself (there’s a sketch of the first few pieces below).  But once that’s done, you can deploy that logical switch and the converged fabric design to any host (bare metal or existing) with some mouse clicks from the VMM console.  That gives you top-to-bottom control of the stack from a central point.
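To give a flavour of the VMM side of this (the console wizards do exactly the same job), here’s a hedged sketch of creating one logical network, a network site, and an IP pool using the VMM 2012 SP1 PowerShell module.  The host group, names, subnet, and VLAN ID are all made-up examples.

# Run from a VMM 2012 SP1 PowerShell session
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

# A logical network with one network site (logical network definition)
$logicalNet = New-SCLogicalNetwork -Name "Datacenter"
$subnetVlan = New-SCSubnetVLan -Subnet "10.0.1.0/24" -VLanID 101
$netSite = New-SCLogicalNetworkDefinition -Name "Datacenter - Dublin" -LogicalNetwork $logicalNet -VMHostGroup $hostGroup -SubnetVLan $subnetVlan

# A static IP pool that VMM can assign from when it deploys the logical switch
New-SCStaticIPAddressPool -Name "Datacenter - Dublin Pool" -LogicalNetworkDefinition $netSite -Subnet "10.0.1.0/24" -IPAddressRangeStart "10.0.1.10" -IPAddressRangeEnd "10.0.1.200"

The uplink and virtual port profiles, port classifications, and the logical switch itself are built on top of these, and then the switch is deployed to hosts from the console or as part of a bare metal build.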

Two things to remember here:

  • Not everyone should be a VMM administrator.  That’s why delegation exists.
  • Yes, you can erase the existing networking config on a running host by deploying a logical switch to it.

Choose One or the Other Software Defined Approach

VMM 2012 SP1 does not recognise existing Hyper-V PowerShell deployed converged fabric designs because they aren’t implemented with the VMM logical switch.  This does not mean the host cannot be managed.  You can still create logical networks and IP address pools.  You just lose the central configuration that the logical switch can offer … and you cannot do Network Virtualization in the real world (which requires VMM networking).  My advice: if you are doing Hyper-V software defined converged fabrics then choose one method only:

  • Use PowerShell in the management OS if you want simplicity XOR
  • Use the VMM logical switch to push out the configuration, especially if you want central configuration, Network Virtualization, or to use VMM-managed virtual switch extensions

There will be downtime to switch from the PowerShell method to the VMM one.
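Plan a maintenance window for that.  If you prefer to strip out the management OS configuration yourself before letting VMM push the logical switch, a hedged sketch of the teardown (using the example names from the earlier script) would be:

# Expect to lose network connectivity to the host at this point, so run this
# from a console or out-of-band management session, not over the network
Remove-VMNetworkAdapter -ManagementOS -Name "Management"
Remove-VMNetworkAdapter -ManagementOS -Name "LiveMigration"
Remove-VMNetworkAdapter -ManagementOS -Name "Cluster"
Remove-VMSwitch -Name "ConvergedSwitch" -Force
Remove-NetLbfoTeam -Name "ConvergedTeam"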

What’s the Right Solution?

In the end, you should pick the right choice for you or your customer, be it hardware or software defined.  There is no universal right answer.  Shh, there is … do software defined converged fabrics! ;)

6 comments so far

  1. Hi Aidan,
    Why are standalone servers more cost efficient than enclosures with blades?  Wouldn’t standalone servers significantly increase the price of the network devices (routers, switches, etc.)?  Can you shed some light on this?

    • 2 words: Virtual Connect. In my experience, a blade is cheaper than a rack server, but once you add up the total costs of an enclosure, I’d rather buy 16 servers. Take a trip into the truly massive cloud data centers. You won’t see too many blade chassis there.

  2. In this whole picture, how do you see iSCSI storage?  Should you add it to the “software defined converged fabrics” or still isolate it in a “real hardware” or “hardware converged (like HP)” way?

    • Good question. Best might be to have dedicated physical NICs for iSCSI for the host. But if you need to do converged iSCSI you need to ensure that the SAN vendor will support it (mainly because the switches won’t be dedicated).

  3. Not sure if my comment posted.
    So let me ask this. When using VMM 2012 SP1 to manage your converged networks, do you manage your networks as separate entities entirely?

    For instance, is Logical Network “DMZ” equivalent to a common VLAN across sites?  I.e. DMZ would be a single logical network with one VLAN defined per site.  Same with Prod, Test, Dev, etc.

    Or do you define a logical network that is equivalent to a virtual switch (i.e. vSwitch0) and create all of the VLANs under that?

    I think the former is valid, just looking for some confirmation from another pro :)

    • That is a good question.  I’m a glorified sales person now, rather than living day-to-day with the products/solutions.  Damian Flynn might be a good person to ask.
