2013.08.27

In the past few months it’s become clear to me that people are confusing Storage Spaces and Scale-Out File Server (SOFS).  They seem to incorrectly think that one requires the other or that the terms are interchangeable.  I want to make this clear:

Storage Spaces and Scale-Out File Server are completely different features and do not require each other.


Storage Spaces

The concept of Storage Spaces is simple: you take a JBOD (just a bunch of disks, with no RAID) and unify the disks into a single block of management called a Storage Pool. From this pool you create Virtual Disks. Each Virtual Disk can be simple (no fault tolerance), mirrored (2-way or 3-way), or parity (like RAID 5 in concept). The type of Virtual Disk fault tolerance dictates how the slabs (chunks) of each Virtual Disk are spread across the physical disks included in the pool. This is similar to how LUNs are created and protected in a SAN. And yes, a Virtual Disk can be spread across two, three, or more JBODs.
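
As a rough illustration (the pool name, virtual disk name, and size below are placeholders, and the friendly name of the storage subsystem varies by Windows Server version), this is roughly how a pool and a mirrored Virtual Disk are created with the in-box Storage PowerShell cmdlets:

    # Disks that are eligible for pooling (no partitions, no hardware RAID).
    $disks = Get-PhysicalDisk -CanPool $true

    # Create the pool on the server's Storage Spaces subsystem.
    $ss = Get-StorageSubSystem -FriendlyName "*Storage*"
    New-StoragePool -FriendlyName "Pool1" `
        -StorageSubSystemFriendlyName $ss.FriendlyName `
        -PhysicalDisks $disks

    # Carve out a 2-way mirrored Virtual Disk; its slabs are spread
    # across the physical disks in the pool.
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" `
        -ResiliencySettingName Mirror -NumberOfDataCopies 2 `
        -ProvisioningType Fixed -Size 2TB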

Note: In WS2012 you only get JBOD tray fault tolerance via 3 JBOD trays.

Storage Spaces can be used as the shared storage of a cluster (note that I did not limit this to a SOFS cluster). For example, 2 or more servers (check with your JBOD vendor for the supported number) are connected to a JBOD tray via SAS cables (2 per server with MPIO) instead of being connected to a SAN. Storage Spaces is managed via the Failover Cluster Manager console. Now you have the shared storage requirement of a cluster, such as a Hyper-V cluster or a cluster running the SOFS role.
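
To sketch that out (disk and volume names are examples only): once a Virtual Disk has been partitioned and formatted with NTFS, it can be added to the cluster and converted into a CSV with the Failover Clustering cmdlets:

    # Add available disks (including Storage Spaces virtual disks) to the
    # cluster, then convert the new cluster disk into a CSV.
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name "Cluster Disk 1"

    # The CSV appears under C:\ClusterStorage\ on every node and can be
    # used by a Hyper-V cluster to store virtual machines.
    Get-ClusterSharedVolume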

Yes, the servers in the cluster can be your Hyper-V hosts in a small environment. No, there is no SMB 3.0 or file shares in that configuration. Stop overthinking things – all you need to do is provide shared storage and convert it into CSVs that are used as normal by Hyper-V. It really is that simple.

Yes, JBOD + Storage Spaces can be used in a SOFS as the shared storage.  In that case, the virtual disks are active on each cluster node, and converted into CSVs.  Shares are created on the CSVs, and application servers access the shares via SMB 3.0.

Scale-Out File Server (SOFS)

The SOFS is actually an active/active role that runs on a cluster. The cluster has shared storage between the cluster nodes. Disks are provisioned on the shared storage, made available to each cluster node, added to the cluster, and converted into CSVs. Shares are then created on the CSVs and made available on every cluster node via the active/active SOFS cluster role.
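
As a hedged sketch (the role name, share name, path, and accounts are placeholders), the role and a share might be set up like this:

    # Add the active/active Scale-Out File Server role to the cluster.
    Add-ClusterScaleOutFileServerRole -Name "SOFS1"

    # Create a continuously available share on a folder inside a CSV
    # (the folder must already exist).
    New-SmbShare -Name "VMs1" -Path "C:\ClusterStorage\Volume1\Shares\VMs1" `
        -FullAccess 'DEMO\Host1$', 'DEMO\Host2$', 'DEMO\Hyper-V Admins' `
        -ContinuouslyAvailable $true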

SOFS is for application servers only. For example, Hyper-V can store the VM files (config, VHD/X, etc.) on the SMB 3.0 file shares. SOFS is not for end-user shares; instead, use virtual file servers that are stored on the SOFS.
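
For example (hypothetical names; the Hyper-V hosts need full control on the share and folder permissions), a virtual file server could be created directly on the SOFS share:

    # Create a VM whose configuration and VHDX live on the SOFS share.
    New-VM -Name "FS01" -MemoryStartupBytes 2GB `
        -Path "\\SOFS1\VMs1" `
        -NewVHDPath "\\SOFS1\VMs1\FS01.vhdx" `
        -NewVHDSizeBytes 100GB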

Nowhere in this description of a SOFS have I mentioned Storage Spaces.  The storage requirement of a SOFS is cluster supported storage.  That includes:

  • SAS SAN
  • iSCSI SAN
  • Fibre Channel SAN
  • FCoE SAN
  • PCI RAID (like the Dell VRTX)
  • … and SAS attached shared JBOD + Storage Spaces

Note that I only mentioned Storage Spaces with the JBOD option. Each of the other storage options for a cluster uses hardware RAID, and therefore Storage Spaces on top of them is unsupported.

Summary

Storage Spaces works with a JBOD to provide a hardware RAID alternative.  Storage Spaces on a shared JBOD can be used as cluster storage.  This could be a small Hyper-V cluster or it could be a cluster running the active/active SOFS role.

A SOFS is an alternative way of presenting active/active storage to application servers. It requires cluster supported storage, which can be a shared JBOD + Storage Spaces.

Comments

  1. Question on Storage Spaces: Can you build Storage spaces on top of existing hardware RAID?

    Eg, you have 12 RAID1 pairs and you build a striped partition on top to make a full RAID10?

    • Not supported. Bad mojo will happen. When you think Storage Spaces, think JBOD (no h/w RAID at all).

  2. So if you’re looking into the “clustered hardware RAID” path (LSI Syncro CS 9286-8e), how do you get storage tiering with SSD/SAS? From what I understand, it’s only when you’re running Storage Spaces (storage pool -> virtual disk) that you get this functionality, and Storage Spaces on top of hardware RAID = not supported. Do you use the RAID controller’s functionality called CacheCade instead?

    • You don’t. Storage Spaces and “PCI RAID” are different techs. You do one or the other.

  3. Thanks for the article. There are so many different levels of abstraction these days that it’s getting tough to keep them all straight!

    Can I make a CSV that consists of three drives, one in each of three separate 2012 R2 servers (for example, 3 servers each with a 2 TB SATA drive to build out a single CSV that is a protected, shared 2 TB disk)? My goal is to have a lab where each of three Hyper-V hosts houses a copy of ALL VMs on a shared CSV, so that I have fault tolerance, while allowing each Hyper-V host to share the VM load.

    I am envisioning this similar to how Exchange 2010 DAGs work – Each server has a copy of each DB which is all replicated and can be mounted on any node with a copy of it. In my analogy, the VMs would be like the DBs in that there will always be a copy replicated to each node, but any VM can run on any host.

    Thanks!
    -Steve

    • No.

  4. Steve Jones: What you are looking for is this: http://hyperv.starwindsoftware.com/native-san-for-hyper-v-free-edition

    The free edition allows a 128GB CSV to span 2 physical servers. You present that CSV to the Hyper-V cluster as a CSV drive and then put your VMs on it. The StarWind software keeps the CSV in sync across the 2 nodes via the network.

    The paid version allows unlimited CSV sizes across up to 3 physical hosts.

    • And will be totally unsupported by Microsoft.

  5. I’m confused about this paragraph: “SOFS is for application servers only. (That I understand.) For example Hyper-V can store the VM files (config, VHD/X, etc) on the SMB 3.0 file shares. SOFS is not for end user shares; instead use virtual file servers that are stored on the SOFS.”

    Can I store the virtual file servers on the File Server Cluster as a Role?

    • ?!?!?!?!?!?!?

      No. Create a virtual machine to be your file server. Store that virtual machine on the SOFS. Simple.

  6. Planning a Storage Spaces solution at the moment for our Hyper-V cluster: 4 nodes, ~30 VMs. Would you recommend a 2-node SOFS or a JBOD connected to all 4 servers using Storage Spaces?

    Worried about the extra complexity/networking and a potential Windows Update failure causing the SOFS to be less reliable than our current iSCSI SAN.

    Would Microsoft support a 4-node cluster running Hyper-V with Storage Spaces in this way?

    • Yup, they’d support it, assuming the JBOD does. I’d recommend having a SOFS as a separate tier; this won’t limit your compute (Hyper-V) scalability.

  7. I was one of those so confused — thanks for writing this!

    Can Storage Spaces combine storage attached to more than one server? (Always looking for ways to reduce hardware count – like avoiding external shared JBODs.) If so, it seems like it could allow RAID 1 across servers to help provide fault tolerance in small Hyper-V clusters. I wonder how SSD cache and dedupe would fit in. I’m probably dreaming, right?

    While this article is very helpful in distinguishing SS from SOFS, I was left vaguely wondering where SOFS provides best value. Sounds like it’s best suited to providing file services fault tolerance for cluster servers — including file servers — in bigger-than-minimum-sized sites. It’s totally out of scope for minimum-sized SMB-style Hyper-V failover clusters. Is that right?

  8. Answering my own question, it appears SS cannot combine physical storage attached to multiple servers. (Interesting — Hyper-V can, sort of.)
    Further, shared JBODs cannot provide fault tolerance without pricey dual-port SAS drives, dual controllers, etc.

    So if we drop back a notch to High Availability, accept lower robustness, and go with Hyper-V Replica Server and non-shared storage (does it make sense to use SS on single servers and add SSD cache?), or even a shared non-fault-tolerant JBOD with boot-from-backup Disaster Recovery…

    • Shop around on the JBODs – check the HCL for the Storage Spaces category to see what’s available. Also look at Cluster-in-a-Box (CiB).

      Clustering = HA. Hyper-V Replica = disaster recovery (DR).

  9. I thought I had it, but the more I read, the more I get confused. Thanks for the awesome site. What is the best scenario for 2 nodes with about 6TB each to host their Hyper-V environment (file shares, domain controllers, etc.)? Of course the customer only has 1 node in and the other coming after the fiscal year….

    • Consultant’s answer: that depends.

      • Good answer. It does depend… one customer went with the 2-node non-shared option, and the other with shared storage.

        Would it make any sense to bust up LUNs on a SAN and then bring them back together with a simple Storage Spaces virtual disk layout to create a SOFS for Hyper-V?

        • No it would not. You do not do Storage Spaces on a SAN to create CSVs. You use the SAN’s RAID technology to create CSVs.

  10. Hello Aidan,

    I’ve been having a lot of fun trying everything out, but I wonder if you could explain something to me.

    When I look in the cluster manager at roles and disks, my CSV is linked to only one member of the cluster.

    I am using an SMB 3.0 SOFS for VMs at the moment, but I never exceed the capacity of a single SOFS member.

    When I look at the performance monitor, only the active server is providing resources.

    When I pull the plug on a server in my two-server cluster, the other takes over, but with quite a lot of missed pings.

    I don’t seem to have an active/active solution but cluster validation reports that everything is fine.

    Is this normal behaviour?

    Regards, Jort

    • Yes. The CSV -ownership- is balanced. If you’re using Storage Spaces then SMB clients will be redirected to the CSV owner after the initial connection to get best throughput. The balancing of CSV ownership ensures the workload is balanced.

  11. Let’s suppose we have 2 servers and 3 DAS trays with JBODs. Half of the disks are SSD, the other half SATA. We also have 3 Windows Server 2012 R2 Hyper-V hosts.
    Is it possible to make two storage spaces (high-speed and low-speed) with file shares and create a 3-node Hyper-V cluster using these storage spaces?
    Will this solution be HA in case of a tray or server fault?

    • Yes, assuming you spread the disks across the JBOD trays and implement 2-way or 3-way mirroring for your virtual disks.
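
      A rough WS2012 R2 sketch (pool and virtual disk names are placeholders, and it assumes Storage Spaces certified JBODs): enclosure-aware mirrored virtual disks keep each data copy in a different tray.

        # Group the poolable disks by media type for the fast and slow pools.
        $ssd = Get-PhysicalDisk -CanPool $true | Where-Object MediaType -eq "SSD"
        $hdd = Get-PhysicalDisk -CanPool $true | Where-Object MediaType -eq "HDD"

        $ss = Get-StorageSubSystem -FriendlyName "*Storage*"
        New-StoragePool -FriendlyName "FastPool" -StorageSubSystemFriendlyName $ss.FriendlyName -PhysicalDisks $ssd
        New-StoragePool -FriendlyName "SlowPool" -StorageSubSystemFriendlyName $ss.FriendlyName -PhysicalDisks $hdd

        # Enclosure awareness places each mirror copy in a different JBOD tray,
        # so a virtual disk can survive the loss of a whole tray.
        New-VirtualDisk -StoragePoolFriendlyName "FastPool" -FriendlyName "FastVD" `
            -ResiliencySettingName Mirror -NumberOfDataCopies 2 `
            -IsEnclosureAware $true -UseMaximumSize
        New-VirtualDisk -StoragePoolFriendlyName "SlowPool" -FriendlyName "SlowVD" `
            -ResiliencySettingName Mirror -NumberOfDataCopies 2 `
            -IsEnclosureAware $true -UseMaximumSize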
