I previously wrote about a new feature in Windows Server 2012 R2 Storage Spaces called Write-Back Cache (WBC) and how it improved write performance from a Hyper-V host.  What I didn’t show you was how WBC improves performance where it counts: how does WBC improve the write performance of services running inside of a virtual machine?

So, I set up a virtual machine.  It has 3 virtual hard disks:

  • Disk.vhdx: This contains the guest OS (WS2012 R2 Preview) and is stored on SOFS2, a virtual Scale-Out File Server (SOFS) that is isolated from my tests.  This is the C: drive in the VM.
  • Disk1.vhdx: This is on SCSI controller 0, location 0, and is placed on \\SOFS1\CSV1.  The share is stored on a tiered storage space (50 GB SSD + 150 GB HDD) with 1 column and a write cache of 5 GB.  This is the D: drive in the VM.
  • Disk2.vhdx: This is on SCSI controller 0, location 1, and is placed on \\SOFS1\CSV2.  The share is stored on a non-tiered storage space (200 GB HDD) with 4 columns.  There is no write cache.  This is the E: drive in the VM.
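For reference, a tiered virtual disk with a write-back cache like the one backing Disk1.vhdx can be provisioned with the Storage cmdlets.  This is a minimal sketch, not the exact commands from my lab – the pool, tier, and disk friendly names (and the Simple resiliency setting) are assumptions:

```powershell
# Define SSD and HDD tiers in an existing storage pool (names are hypothetical)
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Create a tiered virtual disk with a 5 GB write-back cache, matching the
# 50 GB SSD + 150 GB HDD, 1-column layout described above
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk1" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 50GB, 150GB `
    -ResiliencySettingName Simple -NumberOfColumns 1 -WriteCacheSize 5GB
```

Note that -WriteCacheSize can only be set when the virtual disk is created; it cannot be changed afterwards.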

I set up SQLIO in the VM, with a test file on each of D: (Disk1.vhdx – WBC on the underlying volume) and E: (Disk2.vhdx – no WBC on the underlying volume).  Once again, I ran SQLIO against each test file, one at a time, with random 64 KB writes for 30 seconds – I copied/pasted the scripts from the previous test.  The results were impressive:
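The exact scripts aren’t reproduced here, but a run of this shape would look something like the following – the test file name and the thread/outstanding-I/O counts (-t/-o) are assumptions, not my lab’s values:

```batch
:: 30 seconds of random, unbuffered 64 KB writes against the D: test file
:: -kW = write, -frandom = random I/O, -b64 = 64 KB blocks, -s30 = 30 seconds
:: -LS = capture latency stats, -BN = no buffering
sqlio -kW -frandom -b64 -s30 -t4 -o8 -LS -BN D:\testfile.dat
```

The same command pointed at E:\testfile.dat gives the comparison run against the non-WBC volume.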


Interestingly, these are better numbers than from the host itself!  The extra layer of virtualization is adding performance in my lab!

Once again, Write-Back Cache has rocked, making the write performance 6.27 times faster.  A few points on this:

  • The VM’s performance with the VHDX on the WBC-enabled volume was slightly better than the host’s raw performance with the same physical disk.
  • The VM’s performance with the VHDX on the WBC-disabled volume was nearly twice as good as the host’s raw performance with the same physical disk.  That’s why we see a WBC improvement of 6 times instead of 11 times. This was a write job, so it wasn’t CSV Cache.  I suspect sector size (physical versus logical) might be what caused this.

I decided to tweak the scripts to get simultaneous testing of both VHDX files/shares/Storage Spaces virtual disks, and fired up Performance Monitor to view/compare the IOPS of each VHDX file.  The red bar is the optimised D: drive with higher write operations/second, and the green bar is the slower E: drive.


They say a picture paints a thousand words.  Let’s paint 2,000 words; here’s the same test but over the length of a 60 second run.  Once again, red is the optimised D: drive and green is the E: drive.


Look what just 5 GB of SSD (yes, expensive enterprise-class SSD) can do for your write performance!  That’s going to greatly benefit services when they have brief spikes in write activity – I don’t need countless spinning HDDs to build up IOPS for those once-an-hour or once-a-day spikes, gobbling up capacity and power.  A few space/power-efficient SSDs with Storage Spaces Write-Back Cache will do a much more efficient job.

15 comments so far

  1. What do you recommend for SSDs in the lab?

    • Whatever your storage device supports.

  2. Okay, what SSDs do you have in your lab setup?

    • A pair of STECs. Disk details are in the linked article (above).

  3. Thank you. :)

  4. So, would you say that for virtualizing SQL servers, particularly in a tight-budget test lab, using Scale-Out File Servers is the way to go for storing your SQL Server’s VHDs?

    • It depends on the purpose of the test lab.

      • The idea would be a general System Center test lab, all 2012 R2, focusing on SCCM, SCOM and VMM.

        • Do you need HA storage? Sounds to me like you’ve confused Storage Spaces and SOFS. See previous posts.

  5. Hi,
    Thanks for the article :)
    How does Windows decide where to put the WBCache? And is the WBCache “striped” across multiple SSDs?
    I have a 2-way mirror with 3 columns, and I’d like to use WBCache. Right now it’s only 6 HDDs, but how many SSDs do I need to add to use the cache (not tiering)? 1? 2? 3? 6?
    If it works with 1, does it take advantage of 2 or more to get even higher performance?

    • Windows manages it once you set the size. I haven’t seen details yet on the number requirement. I suspect one, but I have not asked about supportability or tested it.

  6. Have you come across any “best-practice” or “best-performance” settings for WBC? I know the default WBC size is 1 GB per disk, but have you raised it at all and seen better metrics?

    • There probably are extreme cases. My lab isn’t big enough to push that far. The guidance from Microsoft is that 1 GB should be good enough for almost everyone; it is for absorbing unusual spikes, not for sustained activity.
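      For the curious, the write-back cache size of an existing virtual disk can be checked with the Storage cmdlets (WriteCacheSize is a real property of the virtual disk objects; no disk names are assumed here):

      ```powershell
      # Inspect the write-back cache size of existing Storage Spaces virtual disks
      Get-VirtualDisk | Select-Object FriendlyName, WriteCacheSize

      # The size is fixed at creation time, e.g. via:
      # New-VirtualDisk ... -WriteCacheSize 5GB
      ```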

  7. Can you run numbers for a vdisk of just HDD + WBC? I’m thinking of creating a lab with 22 HDD and 2 SSD for WBC.

    • You need at least 4 SSDs.
