Storage Spaces Inside a Virtual Machine Is Not Supported

I’m hooked on Storage Spaces, the Windows Server 2012 feature that lets us aggregate non-RAID disks and create (optionally) thinly provisioned, fault-tolerant volumes, just as you’ve been doing on a modern SAN (but Storage Spaces is more flexible, if not as feature-rich).

It appears that some people like this feature so much that they’ve started to implement it inside virtual machines:

THIS IS NOT A SUPPORTED CONFIGURATION

Sure, you might see presenters like me do this in demos. I make it clear: I only do it because I don’t have the hardware to do Storage Spaces at the physical layer. Storage Spaces was designed to be built from physical disks … and then you store your virtual machines on a Storage Spaces virtual disk.
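For anyone doing it the supported way, here is a minimal PowerShell sketch (pool, disk, and label names are purely illustrative, as is the 2 TB size) of pooling the host’s physical disks and carving out a mirrored virtual disk to hold the VMs:

    # On the physical Hyper-V host: gather the disks that are eligible for pooling
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "VMPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

    # Create a mirrored, thinly provisioned virtual disk in the pool
    New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMStore" -ResiliencySettingName Mirror -ProvisioningType Thin -Size 2TB

    # Initialise, partition and format it; this volume is where the VM files live
    Get-VirtualDisk -FriendlyName "VMStore" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMStore"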

Why are people implementing Storage Spaces in a production VM?  My primary guess is that they want to aggregate virtual hard disks to create a larger volume.  VHD format files can only expand up to 2040 GB.  OK … that’s the wrong way to go about it!  The correct solution would be one of the following:

  • Deploy Windows Server 2012 Hyper-V and use VHDX files. They scale up to 64 TB – which is also the maximum size of a VSS snapshot, BTW.
  • If you’re stuck on vSphere (2 TB VMDK) or pre-WS2012 Hyper-V (2040 GB VHD), then (I hate saying this …) use a physical disk of some kind until you can upgrade to a scalable hypervisor like WS2012 Hyper-V and convert to the more flexible VHDX (a conversion sketch follows this list).
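And if you are already on WS2012 Hyper-V, converting an existing VHD is a one-liner. A rough sketch with made-up paths; the disk must be out of use (VM shut down or the disk detached) before converting:

    # Convert a legacy 2040 GB-limited VHD to VHDX (take the disk out of use first)
    Convert-VHD -Path 'D:\VMs\FS01\Data.vhd' -DestinationPath 'D:\VMs\FS01\Data.vhdx'
    # Re-attach the new VHDX to the VM and, once you are happy, delete the old VHD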

A second possible excuse is: “I want to create volumes inside a VM”. Anyone who has spent any time owning a virtualised platform will laugh at this person. There is a simple rule in our business: 1 volume = 1 virtual hard disk. It gives us complete flexibility over volume management at both the physical (placement) and virtual (resizing) layers. If you need an E: volume, hot-add a VHDX to the SCSI controller. If you need an F: volume, hot-add another VHDX to the SCSI controller. If you need to expand the G: volume, expand the G: VHDX and then expand the G: volume, as the sketch below shows.
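As a rough sketch (the VM name, paths and sizes are made up), both jobs are one or two cmdlets from the host, plus an in-guest partition resize:

    # Hot-add a new data disk (the future E: volume) to the running VM's SCSI controller
    New-VHD -Path 'D:\VMs\FS01\Data-E.vhdx' -SizeBytes 200GB -Dynamic
    Add-VMHardDiskDrive -VMName 'FS01' -ControllerType SCSI -Path 'D:\VMs\FS01\Data-E.vhdx'

    # Grow an existing VHDX (offline on WS2012; WS2012 R2 can resize a SCSI-attached VHDX online)
    Resize-VHD -Path 'D:\VMs\FS01\Data-G.vhdx' -SizeBytes 500GB

    # Then, inside the guest, extend the G: partition into the new space
    Resize-Partition -DriveLetter G -Size (Get-PartitionSupportedSize -DriveLetter G).SizeMax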

The other reason I expect to hear via comments is “we’re scared of virtual hard disk corruption so we want to RAID the disks in some way using Storage Spaces”.  Where to start?

  • I have never personally witnessed a corrupt virtual hard disk. When I have heard of such things, it’s because people did stupid things with snapshots or differencing disks, and they deserve what follows.
  • The VHDX format has built-in protection for corruption that can be caused by power loss.
  • DOING STORAGE SPACES INSIDE A VM IS NOT SUPPORTED!  It’s no one’s fault, other than yours, when it misbehaves or breaks.

Please, just start using VHDX format virtual hard disks ASAP.

32 thoughts on “Storage Spaces Inside a Virtual Machine Is Not Supported”

  1. Straightforward and to the point. From reading this post and the one on Hyper-V vs vSphere comparisons, I get the impression that Mr. Finn does not suffer fools well. 🙂

    1. Mr. Finn is wa-ay too used to people trying to find ways to do bad things when the support statement is quite clear 🙂

  2. Hmm, I’m surprised, Aidan; all the reviews and tests I’ve read pointed to Storage Spaces being pretty useless for everything but the smallest shops. What’s so great about it? It’s painfully slow performance-wise and doesn’t even redistribute data when you add disks…

    Is this a “drinking the Kool-Aid” post? Or did I overlook why Storage Spaces would be used by any enterprise?

    1. You’ve misunderstood the point of Storage Spaces and, probably, the post as well.

      A) Please re-read the post to see which use of Storage Spaces is not supported
      B) Storage Spaces is good for small businesses because it is much cheaper than a traditional SAN
      C) Storage Spaces scales out, gives great fault tolerance across JBODs and is cheaper than SAN, making it great for public cloud
      D) I think 1.2 million IOPS would be attractive to any humongous corporation for their OLTP workloads

  3. Thanks for the article. Can you explain why Storage Spaces appears to be not only supported, but recommended within Azure? (Aside from the obvious “you can’t get bigger than 1 TB disks in Azure”.)
    They’re even created automatically within some of the SQL 2014 virtual servers that you can deploy in Azure…

    http://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/Performance%20Guidance%20for%20SQL%20Server%20in%20Windows%20Azure%20Virtual%20Machines.docx

        1. No idea, to be honest. I -rarely- log into a VM anymore. The closest I’ve gotten to the guest OS in production in over a year has been to do disk formats via Run Command or Serial Console Access. I’m all at the infrastructure/platform layer these days.

  4. Hi Aidan, can you point me at the official support statement from Microsoft for this? My google-fu fails me obviously. I was interested in using storage spaces within a 2012 R2 VM purely for the tiering features; good old hardware RAID card being fine for providing redundancy.

    Thanks!

  5. I would like to see Storage Spaces inside a VM to build a location-redundant, synchronous mirror inside a VM (and, of course, replicate the VM to the other location). The locations are connected via 20 Gb Ethernet, so apart from some latency due to distance, performance should be okay for a file share (exposed by the VM) that is geo-redundant. Do you know anything about future support?

    1. I really doubt it. To be honest, you should be building the fault tolerance into the fabric: the storage, and replication of either the VM or the storage.

      1. Thanks – but there is no synchronous replication with Hyper-V or SOFS …. Third party, probably, but it would be nice out of the box.

  6. I am interested in this to aggregate pass-through disks to increase bandwidth. With two LUNs, one on each storage controller of the SAN, I could then stripe the I/O and maximize throughput with resiliency still being provided by the underlying storage. With a single server this is easily accomplished by a dynamic striped volume. Unfortunately Failover Clustering will not utilize dynamic disks for storage. So Storage Spaces is the only way to accomplish this in a Failover Cluster, virtual or physical.

    In my experience “not supported” does not necessarily mean it will not work. Usually it means that it has not been tested or that if there are problems Tech Support will not help you solve them. Do you have any further information on the caveats of doing this anyway? Possibly what the likely symptoms would be “when it misbehaves or breaks”?

  7. Hi Aidan

    Great blog as always; it makes me laugh how brutally you put some of these guys down, but it’s for their own good!

    Maybe a few of the guys here who aren’t taking your warnings about being fired seriously need to look at the Storage Replica feature you mentioned recently in the next release of Windows Server.

    I was hoping something like this would become available soon; I’m already looking to test it on a few of our platforms, and it looks amazing.

    1. No. vSphere has fallen way behind in the private/hosted cloud arena, and in hybrid solutions too. Microsoft’s whole solution leaves everyone else behind.

  8. Do you know the reason it is not supported, and what are the concerns with Storage Spaces within a VM? Generation 2 VMs on Hyper-V present SCSI devices, just like VMs inside VMware vSphere do. They both honour the SCSI response codes. So what is the risk or issue specifically with Storage Spaces? Whether it’s supported or not, and whether it’s a good idea or not, are sometimes irrelevant if it meets a business requirement and the risks can be mitigated. It seems odd that it’s supported and encouraged within Azure, which runs on Hyper-V, and yet not supported in a VM running on Hyper-V or vSphere. Given that customers with premier support agreements are entitled to commercially reasonable support, without explicit instructions against using it I think they would expect MS to help them if something went wrong.

  9. To support Aidan,

    I have tried this configuration on a home test network. I was “forced” to, as my physical machine runs only Server 2008 R2, so in order to test Storage Spaces, I ran them in 2012 R2 file server VMs with pass-through physical disks. …they failed.

    Everything worked fine for a few months, then the network became totally unstable, with multiple different VMs crashing. Error logs showed multiple lsi_sas errors – event 11 (like 100 in a week) – with eventual VM crashes, occasionally bringing down other VMs that were not using Storage Spaces but were on the same controller.

    Don’t do this in a production environment. As Aidan says, it’s foolish. Wish I had read this post sooner. I never saw anything official about it, but I have no doubt it is not a supported configuration.

  10. Good article, and many brutal answers from Aidan. I saw, a few months ago, his opinion about people who don’t like to read docs and understand simple things, or let’s say don’t want to think, at least.
    Right now, studying for 70-410, I was stuck at the objective: Storage Pools. I don’t have an environment and disks to test and learn with. I was trying to get it to work until I got to Aidan’s article.
    It’s not supported and that’s it… That sentence was stuck in my mind, along with my problem of how to study that objective. I can’t just read about it and go to the exam. And I found a solution: I installed OpenFiler, created a bunch of LUNs and shared them as iSCSI disks to the file server, and voilà: I can see the Primordial storage pool with a bunch of physical disks.
    But again: my solution is just for studying Storage Spaces and everything related. Many things will not be available, as it’s not supported and nobody does it like this, but for study purposes… it will work (roughly as sketched below).
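    Roughly, the lab wiring looks something like this in PowerShell (the target address and names here are just placeholders, and the OpenFiler iSCSI targets are assumed to be configured already):

      # In the study VM: start the iSCSI initiator and connect to the OpenFiler target
      Start-Service MSiSCSI
      New-IscsiTargetPortal -TargetPortalAddress '192.168.1.50'
      Get-IscsiTarget | ForEach-Object { Connect-IscsiTarget -NodeAddress $_.NodeAddress }

      # The connected LUNs now show up as poolable disks, which is good enough for exam practice
      $labDisks = Get-PhysicalDisk -CanPool $true
      New-StoragePool -FriendlyName 'LabPool' -StorageSubSystemFriendlyName 'Storage Spaces*' -PhysicalDisks $labDisks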

  11. Hi Aidan, is there another way to aggregate space into one huge volume?
    I mean, by creating a virtual machine with several VHDXs in different places and using Storage Spaces to aggregate all of the VHDXs inside the VM?

    1. Striping. But I’m wondering if the status of Storage Spaces in a VM might have changed. It’s supported in Azure VMs.

  12. Wondering if there’s anything new on this? After surfing for hours I can’t find anything really clear about how to create a large (i.e. 16 TB) storage space to be used by VMs. Really simply: should I create a storage space out of a bunch of physical drives on the host and pass it through? That works, but I thought we weren’t supposed to have services other than Hyper-V on the host (WS2012 R2). Or should I pass the drives through, either bare or as VHDXs, and then use Storage Spaces inside the VM? I think a basic example of this, and the pros/cons for basic home use (i.e. on one machine without Hyper-V clustering, etc.), would be very useful.

  13. Just to put this issue to rest and help my fellow MVP, here is the official statement from Microsoft that people have been asking for above.

    Microsoft FAQ for Storage Spaces
    http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx

    You can use commodity drives attached via Serial-Attached SCSI (SAS), Serial ATA (SATA), or USB. Storage layers that abstract the physical disks are not compatible with Storage Spaces. This includes VHDs and pass-through disks in a virtual machine, and storage subsystems that layer a RAID implementation on top of the physical disks.

  14. Aidan, I’m trying to implement this solution in a virtual lab running in VMware Workstation 12 Player, but when I try to create the storage pool in the cluster console, the disks get stuck in the “Starting” operational state and nothing happens.
    The storage pool is created in an error state and I have to stop the cluster service. Then I have to set the recently created storage pool back to read/write mode in order to recover control of the disks.
    Do you have any ideas about what I could be doing wrong?
