Windows Server 2012 Hyper-V Virtual Fibre Channel

You now have the ability to virtualise a fibre channel adapter in WS2012 Hyper-V.  This synthetic fibre channel adapter allows a virtual machine to directly connect to a LUN in a fibre channel SAN.

Benefits

It is one thing to make a virtual machine highly available.  That protects it against hardware failure or host maintenance.  But what about the operating system or software in the VM?  What if they fail or require patching/upgrades?  With a guest cluster, you can move the application workload to another VM.  This requires connectivity to shared storage.  Windows Server 2008 R2 clusters, for example, require SAS, fibre channel, or iSCSI attached shared storage.  SAS is not an option for connecting VMs to shared storage.  iSCSI consumers were OK.  But those who made the huge investment in fibre channel were left out in the cold, sometimes having to implement an iSCSI gateway to their FC storage.  Wouldn’t it be nice to allow them to use the FC HBAs in the host to create guest clusters?

Another example is where we want to provision really large LUNs to a VM.  As I posted a little while ago, VHDX expands out to 64 TB, so you would really need LUNs bigger than 64 TB to justify presenting physical LUNs to a VM and limiting its mobility.  But with the expanded scalability of VMs, big workloads like OLTP can be virtualised on Windows Server 2012 Hyper-V, and they require big disks.

What It Is

Virtual Fibre Channel allows you to virtualise the HBA in a Windows Server 2012 Hyper-V host, give a VM a virtual fibre channel adapter with its own WWNs (two per adapter, to be precise) and connect the VM directly to LUNs in a FC SAN.

Windows Server 2012 Hyper-V Virtual Fibre Channel is not intended or supported to do boot from SAN.

The VM shares bandwidth on the host’s HBA (unless you spend extra on additional HBAs) and crosses the SAN to connect to the controllers in the FC storage solution.

The SAN must support NPIV (N_Port ID Virtualization).  Each VM can have up to 4 virtual HBAs, and each virtual HBA has its own identity on the SAN.

How It Works

You create a virtual SAN on the host (parent partition) for each HBA on the host that will be virtualised for VM connectivity to the SAN.  This is a 1:1 binding between virtual SAN and physical HBA, similar to the old model of virtual network and physical NIC.  You then create virtual HBAs in your VMs and connect them to virtual SANs.
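For example, here is a minimal PowerShell sketch of that process.  The virtual SAN name, VM name, and WWN values are placeholders for illustration, not values from this article:

  # List the host's fibre channel initiator ports (the physical HBA ports)
  Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "Fibre Channel" }

  # Create a virtual SAN bound to one physical HBA port, identified by its WWNN/WWPN
  New-VMSan -Name "vSAN-A" -WorldWideNodeName 20000000C9A1B2C3 -WorldWidePortName 10000000C9A1B2C3

  # Add a virtual fibre channel adapter to a VM and connect it to that virtual SAN
  Add-VMFibreChannelHba -VMName "SQL01" -SanName "vSAN-A"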

And that’s where things can get interesting.  When you get into the FC world, you want fault tolerance with MPIO.  A mistake people will make is to create two virtual HBAs and put them both on the same virtual SAN, and therefore on a single FC path through a single HBA.  If that single cable breaks, or that physical HBA port fails, then the VM’s MPIO is pointless because both virtual HBAs are on the same physical connection.

The correct approach for fault tolerance will be:

  1. 2 or more HBA connections in the host
  2. 1 virtual SAN for each HBA connection in the host.
  3. 1 virtual HBA in each VM for each virtual SAN, with each one connected to a different virtual SAN
  4. MPIO configured in the VM’s guest OS.  In fact, you can (and should) use your storage vendor’s MPIO/DSM software in the VM’s guest OS.

Now you have true SAN path fault tolerance at the physical, host, and virtual levels.
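As a rough PowerShell sketch of that layout (again with placeholder names and WWNs, and assuming a Windows Server 2012 guest for the in-guest step):

  # One virtual SAN per physical HBA port in the host
  New-VMSan -Name "vSAN-A" -WorldWideNodeName 20000000C9A1B2C3 -WorldWidePortName 10000000C9A1B2C3
  New-VMSan -Name "vSAN-B" -WorldWideNodeName 20000000C9D4E5F6 -WorldWidePortName 10000000C9D4E5F6

  # One virtual HBA in the VM per virtual SAN, so each vHBA sits on a different physical path
  Add-VMFibreChannelHba -VMName "SQL01" -SanName "vSAN-A"
  Add-VMFibreChannelHba -VMName "SQL01" -SanName "vSAN-B"

  # Inside the guest OS (Windows Server 2012): install MPIO, then add the storage
  # vendor's MPIO/DSM software according to its own documentation
  Install-WindowsFeature -Name Multipath-IO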

Live Migration

One of the key themes of Hyper-V is “no new features that prevent Live Migration”.  So how does a VM that is connected to a FC SAN move from one host to another without breaking the IO stream from VM to storage?

There’s a little bit of trickery involved here.  Each virtual HBA in your VM must have 2 WWNs (either automatically created or manually defined), not just one.  And here’s why: there is a very brief period during live migration where a VM exists on two hosts.  It is running on HostA and waiting to start on HostB.  The switchover process pauses the VM on HostA and starts it on HostB.  With FC, we need to ensure that the VM is able to connect and process IO the moment it resumes.

So in the example below, the VM is connecting to storage using WWN A.  During live migration, the new instance of the VM on the destination host is set up with WWN B.  When the VM un-pauses on the destination host, it can instantly connect to the LUN and continue IO uninterrupted.  Each subsequent live migration, whether back to the original host or to any other host, causes the VM to alternate between WWN A and WWN B.  That holds true for each virtual HBA in the VM.  You can have up to 64 hosts in your Hyper-V cluster, but each virtual fibre channel adapter will alternate between just 2 WWNs.

Alternating WWN addresses during a live migration

What you need to take from this is that each VM’s LUNs need to be masked or zoned for both WWNs of every virtual HBA in that VM.
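You can read both WWPN sets straight from the virtual HBAs with PowerShell; a small sketch, assuming a VM called SQL01:

  # List WWPN Set A and Set B for every virtual fibre channel adapter in the VM.
  # Both sets must be zoned/masked on the SAN for live migration to work.
  Get-VMFibreChannelHba -VMName "SQL01" |
      Select-Object VMName, SanName, WorldWidePortNameSetA, WorldWidePortNameSetB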

Technical Requirements and Limits

First and foremost, you must have a FC SAN that supports NPIV.  Your host must run Windows Server 2012.  The host must have a FC HBA with a driver that supports Hyper-V and NPIV.  You cannot use virtual fibre channel adapters to boot VMs from the SAN; they are for data LUNs only.  The only supported guest operating systems for virtual fibre channel at this point are Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012.

This is a list of the HBAs that have support built into the Windows Server 2012 Beta:

Vendor    Models
Brocade   BR415 / BR815
Brocade   BR425 / BR825
Brocade   BR804
Brocade   BR1860-1p / BR1860-2p
Emulex    LPe16000 / LPe16002
Emulex    LPe12000 / LPe12002 / LPe12004 / LPe1250
Emulex    LPe11000 / LPe11002 / LPe11004 / LPe1150 / LPe111
QLogic    Qxx25xx Fibre Channel HBAs

Summary

With supported hardware, virtual fibre channel allows supported Windows Server 2012 Hyper-V guests to connect to and use fibre channel SAN LUNs for data.  This enables very scalable storage and in-guest clustering without compromising the uptime and mobility provided by Live Migration.

16 thoughts on “Windows Server 2012 Hyper-V Virtual Fibre Channel”

  1. Hi mate, found this blog to be very helpful and timely as well with Server 2012 coming out shortly; it’s hard to find exact content to help with the testing phase.
    Anyway, I have a similar setup, using QLogic HBA cards, and I created my vports as well. The fibre switch can see the WWNs of the virtual ports I created and I have created zones for them, including the controllers of my SAN, but I cannot see the WWNs on the SAN yet. Also, when going into Hyper-V settings, I notice that when I try to create a vSAN, I get an error stating that the device or driver does not support virtual fibre channel. I can create my virtual ports, so I am not sure why I cannot see the WWNs on the SAN.
    You did say that my SAN should be able to support NPIV. How do I check this, especially on my controllers? The SAN is an IBM DS5100.

      1. Okay, got the SAN bit checked. This is my current dilemma: I have gone into my QLogic card and enabled v-ports on it, and got a virtual WWN as well. I used that number to zone the SAN and the virtual WWN together, and got the SAN to create a host with that particular WWN; all good so far. When I go back to Hyper-V, I still see the error saying that the device or driver does not support virtual fibre channel. What could be wrong now? QLogic is happy (it created vports), the MDS is happy (it created a zone for it with the SAN), and the SAN is happy (it created a host out of that virtual WWN). What am I missing?

        1. Okay, managed to solve this problem. I added the new drivers for Server 2012 and I can now add my virtual SAN interface. However, once I have assigned the new hardware to a guest, it does not show up in the guest. In the storage manager, I do not see the WWN numbers, and since I cannot see those, I do not see any disks that I have already mapped for that machine. I am running Server 2008 R2 with SP1 as the guest.

      1. Something like the Firestreamer virtual tape library might do the trick over the virtual HBA, but this works equally well over iSCSI from within a VM, so no real benefit I’m afraid in this scenario.
        A VTL is worth a look though if you want to virtualise both short and long term backups with DPM.

  2. Hi,
    I tried to set up a WS2012 RTM virtual SAN with an Emulex LPe1150, but got the warning that ‘the device or driver does not support virtual Fibre Channel’.
    The NPIV option is turned on, and according to the Windows Server Catalog the Emulex LPe1150 is not yet supported with WS2012.

    1. Roman

      Did you try upgrading the HBA firmware? I ran into the same issue, and upgrading the firmware on the Emulex HBA to 2.82a4 fixed it for me.

      Regards
      Ujval Vayuvegula

  3. We upgraded the host server to Windows Server 2012 and updated the QLogic driver to the new version supported with Windows Server 2012 in order to use the new Virtual Fibre Channel feature in Hyper-V, but I still receive the following error when we start the VM:
    “NPIV virtual port operation on virtual port (C003FF9C43D30000) failed with an error: The operation is not supported by the fabric. (Virtual machine ID 1778DD91-81CE-431A-A4BA-5B51BF081A54)”

    Is this error related to HBA configuration missing in Windows 2012, or do I still need to connect to a fibre switch and configure a VPort to use this feature? Is a fibre switch required to connect to the SAN storage?

    I’m using QLogic Qxx2562 Fibre Channel HBAs. I downloaded the driver and I’m able to see the SAN storage from Windows 2012, but when I try to start the virtual machine the above error appears.

    1. I’m having this exact same issue.
      This is a great overview, by the way. The only one that I could find that addresses this particular setup.

  4. We are using an Emulex 42D0485 with the latest firmware (2.01a10) and driver (2.72.12.1) and have enabled NPIV, but Hyper-V says the device and driver do not support virtual fibre channel when I try to create a new virtual SAN switch. Any idea? Please help ><

  5. So you stated the following: the host doesn’t require MPIO, only the guest OS/VMs with multiple adapters?
    “The correct approach for fault tolerance will be:

    2 or more HBA connections in the host
    1 virtual SAN for each HBA connection in the host.
    1 virtual HBA in each VM for each virtual SAN, with each one connected to a different virtual SAN
    MPIO configured in the VM’s guest OS. In fact, you can (and should) use your storage vendor’s MPIO/DSM software in the VM’s guest OS.”

  6. Is there any virtual adapter available that lets an Ethernet card act as an FCoE virtual adapter in the physical Hyper-V host, in order to test NPIV virtualisation in Hyper-V using a NIC instead of an FC card?
