2012
07.25

You’ll find much more detailed posts on the topic of creating a continuously available, scalable, transparent failover application file server cluster by Tamer Sherif Mahmoud and Jose Barreto, both of Microsoft.  But I thought I’d do something rough to give you an overview of what’s going on.

Networking

First, let’s deal with the host network configuration.  The diagram below shows 2 nodes in the SOFS cluster, and this could scale up to 8 nodes (think 8 SAN controllers!).  There are 4 NICs:

  • 2 for the LAN, to allow SMB 3.0 clients (Hyper-V or SQL Server) to access the SOFS shares.  Having 2 NICs enables SMB Multichannel over both NICs.  It is best that both NICs are teamed for quicker failover (see the PowerShell sketch below).
  • 2 cluster heartbeat NICs.  Having 2 gives fault tolerance, and also enables SMB Multichannel for CSV redirected I/O.

[Diagram: SOFS cluster host network configuration]
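If you want to script the LAN NIC teaming rather than click through the GUI, here’s a rough PowerShell sketch.  The NIC names LAN1 and LAN2 are my own assumptions; use whatever matches your own diagram:

  # Team the two LAN NICs for quicker failover
  New-NetLbfoTeam -Name "LAN-Team" -TeamMembers "LAN1", "LAN2" -TeamingMode SwitchIndependent

  # Later, on an SMB 3.0 client (e.g. a Hyper-V host), confirm Multichannel is actually in use
  Get-SmbMultichannelConnection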

Storage

A WS2012 cluster supports the following storage:

  • SAS
  • iSCSI
  • Fibre Channel
  • JBOD with SAS Expander/PCI RAID

If you had SAS, iSCSI or Fibre Channel SANs then I’d ask why you’re bothering to create a SOFS for production; you’d only be adding another layer and more management.  Just connect the Hyper-V hosts or SQL servers directly to the SAN using the appropriate HBAs.

However, you might be like me and want to learn this stuff or demo it, and all you have is iSCSI (either a software iSCSI target like the WS2012 iSCSI target, or an HP VSA like mine at work).  In that case, I have a pair of NICs in each of my file server cluster nodes, connected to the iSCSI network, and using MPIO.

[Diagram: file server cluster nodes connected to the iSCSI storage network with MPIO]
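For the curious, here’s roughly how I’d wire up the iSCSI connections with MPIO in PowerShell.  The target portal address and the two initiator addresses below are made up; substitute your own:

  # Install and enable MPIO for iSCSI devices (run on each file server node)
  Install-WindowsFeature Multipath-IO
  Enable-MSDSMAutomaticClaim -BusType iSCSI

  # Make sure the iSCSI initiator service is running
  Set-Service MSiSCSI -StartupType Automatic
  Start-Service MSiSCSI

  # Register the target portal and connect over both iSCSI NICs
  New-IscsiTargetPortal -TargetPortalAddress 10.0.2.10
  Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true -InitiatorPortalAddress 10.0.2.21 -TargetPortalAddress 10.0.2.10
  Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true -InitiatorPortalAddress 10.0.2.22 -TargetPortalAddress 10.0.2.10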

If you do deploy SOFS in the future, I’m guessing (because we don’t know yet; SOFS is so new) that you’ll most likely do it with a CiB (cluster in a box) solution with everything pre-hard-wired in a chassis, using (probably) a wizard to create mirrored storage spaces from the JBOD and configure the cluster/SOFS role/shares.

Note that in my 2-server example, I create three LUNs in the SAN and zone them for the 2 nodes in the SOFS cluster:

  1. Witness disk for quorum (512 MB)
  2. Disk for CSV1
  3. Disk for CSV2

Some have tried to be clever, creating lots of little LUNs on iSCSI to try to simulate JBOD and Storage Spaces.  This is not supported.

Create The Cluster

Prereqs:

  • Windows Server 2012 is installed on both nodes.  Both machines are named and joined to the AD domain.
  • In Network Connections, rename the networks according to role (as in the diagrams).  This makes things easier to track and troubleshoot.
  • All IP addresses are assigned.
  • NIC1 and NIC2 are top of the NIC binding order.  Any iSCSI NICs are bottom of the binding order.
  • Format the disks, ensuring that you label them correctly as CSV1, CSV2, and Witness (matching the labels in your SAN if you are using one).
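If you prefer to format and label the disks with PowerShell, here’s a rough sketch.  The disk numbers are assumptions of mine; check Get-Disk on your node first:

  # Online, initialise and format the witness disk (disk 1 in this example)
  Set-Disk -Number 1 -IsOffline $false
  Initialize-Disk -Number 1 -PartitionStyle GPT
  New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel "Witness" -Confirm:$false

  # Repeat for disks 2 and 3, labelling them CSV1 and CSV2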

Create the cluster:

  1. Enable Failover Clustering in Server Manager
  2. Also add the File Server role service in Server Manager (under File And Storage Services – File Services)
  3. Validate the configuration using the wizard.  Repeat until you remove all issues that fail the test.  Try to resolve any warnings.
  4. Create the cluster using the wizard – do not add the disks at this stage.  Call the cluster something that refers to the cluster, not the SOFS. The cluster is not the SOFS; the cluster will host the SOFS role.
  5. Rename the cluster networks, using the NIC names (which should have already been renamed according to roles).
  6. Add the disk (in Storage in FCM) for the witness disk.  Remember to edit the properties of the disk and rename it from the anonymous default name to Witness in FCM Storage.
  7. Reconfigure the cluster to use the Witness disk for quorum if you have an even number of nodes in the SOFS cluster.
  8. Add CSV1 to the cluster.  In FCM Storage, convert it into a CSV and rename it to CSV1.
  9. Repeat step 8 for CSV2.
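The whole thing can also be done in PowerShell.  A condensed sketch of the steps above, assuming two nodes called FS1 and FS2, a cluster name/IP of FSCluster1 / 192.168.1.50, and the default “Cluster Disk n” resource names (all of those are my own; adjust to suit):

  # Steps 1 and 2: add the features (run on each node)
  Install-WindowsFeature Failover-Clustering, FS-FileServer -IncludeManagementTools

  # Steps 3 and 4: validate, then create the cluster without adding the disks
  Test-Cluster -Node FS1, FS2
  New-Cluster -Name FSCluster1 -Node FS1, FS2 -StaticAddress 192.168.1.50 -NoStorage

  # Steps 6 and 7: add the disks, then use the small witness disk for quorum (even node count)
  Get-ClusterAvailableDisk | Add-ClusterDisk
  Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"

  # Steps 8 and 9: convert the two data disks into CSVs
  Add-ClusterSharedVolume -Name "Cluster Disk 2"
  Add-ClusterSharedVolume -Name "Cluster Disk 3"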

Note: Hyper-V does not support SMB 3.0 loopback.  In other words, the Hyper-V hosts cannot be a file server for their own VMs.

Create the SOFS

  1. In FCM, add a new clustered role.  Choose File Server.
  2. Then choose File Server For Scale-Out Application Data; the other option is the traditional active/passive clustered file server.
  3. You will now create a Client Access Point or CAP.  It requires only a name.  This is the name of your “file server”.  Note that the SOFS uses the IPs of the cluster nodes for SMB 3.0 traffic rather than CAP virtual IP addresses.
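In PowerShell the same thing is a one-liner.  I’m using Demo-SOFS1 as the CAP name here because that’s the name I use later in this post:

  Add-ClusterScaleOutFileServerRole -Name Demo-SOFS1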

That’s it.  You now have an SOFS.  A clone of the SOFS is created across all of the nodes in the cluster, mastered by the owner of the SOFS role in the cluster.  You just need some file shares to store VMs or SQL databases.

Create File Shares

Your file shares will be stored on CSVs, making them active/active across all nodes in the SOFS cluster.  We don’t have best practices yet, but I’m leaning towards 1 share per CSV.  But that might change if I have lots of clusters/servers storing VMs/databases on a single SOFS.  Each share will need permissions appropriate for their clients (the servers storing/using data on the SOFS).

Note: place any Hyper-V hosts into security groups.  For example, if I had a Hyper-V cluster storing VMs on the SOFS, I’d place all nodes in a single security group, e.g. HV-ClusterGroup1.  That’ll make share/folder permissions stuff easier/quicker to manage.
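For example, something like this (the group and host names are made up; run it where the Active Directory PowerShell module is installed):

  Import-Module ActiveDirectory

  # Create the group and add the Hyper-V hosts' computer accounts to it
  New-ADGroup -Name "HV-ClusterGroup1" -GroupScope Global -GroupCategory Security
  Add-ADGroupMember -Identity "HV-ClusterGroup1" -Members (Get-ADComputer HV1), (Get-ADComputer HV2)

Remember that the hosts will need a reboot before the new computer account group membership takes effect.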

  1. Right-click on the SOFS role and click Add Shared Folder
  2. Choose SMB Share – Server Applications as the share profile
  3. Place the first share on CSV1
  4. Name the first share as CSV1
  5. Permit the appropriate servers/administrators to have full control if this share will be used for Hyper-V.  If you’re using it for storing SQL files, then give the SQL service account(s) full control.
  6. Complete the wizard, and repeat for CSV2.
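The same shares can be created in PowerShell.  A sketch, assuming the default CSV mount points, a domain called DEMO, and the HV-ClusterGroup1 group from earlier (all assumptions of mine):

  # Create a folder on each CSV and share it, scoped to the SOFS CAP
  New-Item -Path C:\ClusterStorage\Volume1\CSV1 -ItemType Directory
  New-SmbShare -Name CSV1 -Path C:\ClusterStorage\Volume1\CSV1 -ScopeName Demo-SOFS1 -FullAccess "DEMO\HV-ClusterGroup1", "DEMO\Domain Admins"
  Set-SmbPathAcl -ShareName CSV1    # copy the share permissions down to the NTFS folder

  New-Item -Path C:\ClusterStorage\Volume2\CSV2 -ItemType Directory
  New-SmbShare -Name CSV2 -Path C:\ClusterStorage\Volume2\CSV2 -ScopeName Demo-SOFS1 -FullAccess "DEMO\HV-ClusterGroup1", "DEMO\Domain Admins"
  Set-SmbPathAcl -ShareName CSV2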

You can view/manage the shares via Server Manager under File Server.  If my SOFS CAP was called Demo-SOFS1 then I could browse to \\Demo-SOFS1\CSV1 and \\Demo-SOFS1\CSV2 in Windows Explorer.  If my permissions are correct, then I can start storing VM files there instead of using a SAN, or I could store SQL database/log files there.
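A quick way to double-check from PowerShell (again assuming the Demo-SOFS1 name):

  Get-SmbShare -ScopeName Demo-SOFS1     # shares scoped to the SOFS
  Test-Path \\Demo-SOFS1\CSV1            # run on a client to confirm the UNC path resolves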

As I said, it’s a rough guide, but it’s enough to give you an overview.  Have a read of the posts linked above to see much more detail.  Also check out my notes from the Continuously Available File Server – Under The Hood TechEd session to learn how a SOFS works.

9 comments so far

  1. Your single point of failure here remains your shared storage. Why not cluster 2 servers and mirror the storage in storage spaces? Would that be an option?

    • Only if a third party comes up with a solution, but this might affect your support from Microsoft.

  2. There is no MS-supported solution for mirroring (or DFS-Replication) when using SOFS. But you can build a second Hyper-V Cluster with a second MS-CSV-SOFS and use Hyper-V Replication for such a disaster recovery solution.

    And some thoughts from me:
    SOFS could be for me – no complex SAN, just shared SAS JBOD. Just imagine – HP c7000, up to 16 blades – and you could insert 2x 6120XG switches to connect all blades to your network and 2x 6 Gbps SAS switches to connect to one or more storage enclosures. You could be using one enclosure for SSD, one for 15k SAS, one for 7.2k SATA – I can connect the tape library, too. So now it is up to you: 1x DPM host, 2x SOFS cluster hosts, 13x Hyper-V hosts… or for more IO just reduce the number of Hyper-V hosts and increase the SOFS hosts. And if you need a second c7000 enclosure – some of HP’s shared SAS solutions can be connected to two enclosures.

  3. Hi Aidan,
    I read your article, but I have a requirement; can you please help me out?
    We want to deploy Hyper-V on two servers, and these servers should not use shared storage like a SAN. Each node should have its own local storage, and they would replicate to each other for the Hyper-V VMs. Is that possible? If so, let me know how I can do this.

    Many thanks
    Zaheer

    • Hyper-V Replica

  4. Hello Aidan,
    I have been reading your articles on Hyper-V, Storage Spaces, SMB, SOFS etc. I love the idea of building an affordable SAN as I do work for the small to medium business sector, and as Server 2003 comes to end of life we are pushing on moving our customers to 2012 but also providing better failover and redundancy.
    I am after a base scenario of what sort of kit I would need for a 2 or 3 node cluster all running Server 2012, with an additional 1 or 2 file servers as the SOFS to hold the Hyper-V machines or SQL databases. And also, is it possible to use these file servers for other file shares such as user folders and other data?
    Cost is a huge factor for small business and I would be keen to know your thoughts on a typical budget to build this sort of solution based on having a selection of JBODs and it all configured for failover and clustering.
    Many thanks

    • For just 2 nodes, keep it simple: JBOD + disks + SAS cables + 2 controllers … and the 2 nodes. Attach the JBOD to the Hyper-V nodes. If a small company only needs 3 nodes, then see if the JBOD can support 3 connecting servers.

      For the JBOD: check the HCL. This is the 2012 HCL.

      DO NOT PUT USER SHARES ON A SOFS. Put virtual file servers on the SOFS and put user shares in the virtual file server.

      My lab storage solution will come in around 1/3 the price of the equivalent Dell starter kit SAN, but with twice the storage capacity.

  5. Thanks for the quick response Aidan. That gives me a good idea on where to start and idea of cost.

    One other query would be: if the SOFS wasn’t an option, can the 2 nodes be set up without a SAN, with an iSCSI target created on each to use for CSV?
    I understand this is not a good scenario, but for the smaller business can it be set up that way?

    Thanks again

    • No. The iSCSI target would be on another server. But at that point, you might as well use SMB 3.0 file shares on that other server. And that brings you to a single point of failure without clustering that storage (iSCSI target does not have transparent failover).
