The Effects Of WS2012 R2 Storage Spaces Write-Back Cache On A Hyper-V VM

I previously wrote about a new feature in Windows Server 2012 R2 Storage Spaces called Write-Back Cache (WBC) and how it improved write performance from a Hyper-V host.  What I didn’t show you was how WBC improves performance where it counts: what does it do for the write performance of services running inside a virtual machine?

So, I set up a virtual machine.  It has 3 virtual hard disks:

  • Disk.vhdx: The guest OS (WS2012 R2 Preview), stored on SOFS2 – a virtual Scale-Out File Server (SOFS) that is isolated from my tests.  This is the C: drive in the VM.
  • Disk1.vhdx: Attached at SCSI 0 0 and placed on \\SOFS1\CSV1.  The share is stored on a tiered storage space (50 GB SSD + 150 GB HDD) with 1 column and a write cache of 5 GB.  This is the D: drive in the VM.
  • Disk2.vhdx: Attached at SCSI 0 1 and placed on \\SOFS1\CSV2.  The share is stored on a non-tiered storage space (200 GB HDD) with 4 columns.  There is no write cache.  This is the E: drive in the VM.  (Both spaces are sketched in PowerShell below.)
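
Here, roughly, is how those two spaces could be created in PowerShell; the pool and tier names below are my assumptions, not the lab’s actual configuration:

    # Tiered space backing \\SOFS1\CSV1: 50 GB SSD tier + 150 GB HDD tier,
    # 1 column, 5 GB write-back cache. Pool/tier names are hypothetical.
    $ssd = Get-StorageTier -FriendlyName 'SSDTier'
    $hdd = Get-StorageTier -FriendlyName 'HDDTier'
    New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'CSV1' `
        -StorageTiers $ssd, $hdd -StorageTierSizes 50GB, 150GB `
        -NumberOfColumns 1 -ResiliencySettingName Simple -WriteCacheSize 5GB

    # Non-tiered HDD-only space backing \\SOFS1\CSV2: 4 columns, no write cache.
    New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'CSV2' `
        -Size 200GB -NumberOfColumns 4 -ResiliencySettingName Simple `
        -WriteCacheSize 0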

I set up SQLIO in the VM, with a test file on each of D: (Disk1.vhdx – WBC on the underlying volume) and E: (Disk2.vhdx – no WBC on the underlying volume).  Once again, I ran SQLIO against each test file, one at a time, with random 64 KB writes for 30 seconds – I copied/pasted the scripts from the previous test.  The results were impressive:

[Image: SQLIO results – write performance with WBC (D:) versus without WBC (E:)]
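
The exact scripts are in the earlier post; an SQLIO invocation of roughly this shape matches the test described, with the thread count and outstanding I/O depth below being my assumptions rather than the original values:

    # Random 64 KB unbuffered writes for 30 seconds against each test file.
    # -kW = write, -frandom = random I/O, -b64 = 64 KB blocks, -s30 = 30 s,
    # -BN = no buffering; -t4/-o8 (threads/outstanding I/Os) are assumed.
    .\sqlio.exe -kW -frandom -b64 -s30 -t4 -o8 -BN D:\testfile.dat
    .\sqlio.exe -kW -frandom -b64 -s30 -t4 -o8 -BN E:\testfile.dat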

Interestingly, these are better numbers than from the host itself!  The extra layer of virtualization is adding performance in my lab!

Once again, Write-Back Cache has rocked, making the write performance 6.27 times faster.  A few points on this:

  • The VM’s performance with the VHDX on the WBC-enabled volume was slightly better than the host’s raw performance with the same physical disk.
  • The VM’s performance with the VHDX on the WBC-disabled volume was nearly twice as good as the host’s raw performance with the same physical disk.  That’s why we see a WBC improvement of 6 times instead of 11 times.  This was a write job, so it wasn’t CSV Cache (which caches reads only).  I suspect sector size (physical versus logical) might be the cause – the quick check below would confirm it.
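
One way to test that hypothesis is to compare the sector sizes reported by the VHDX and by the underlying physical disks; the VHDX path here is hypothetical:

    # Compare sector sizes: the VHDX versus the physical disks beneath it.
    Get-VHD -Path 'C:\ClusterStorage\CSV1\Disk1.vhdx' |
        Select-Object Path, LogicalSectorSize, PhysicalSectorSize
    Get-PhysicalDisk |
        Select-Object FriendlyName, LogicalSectorSize, PhysicalSectorSize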

I decided to tweak the scripts to test both VHDX files/shares/Storage Spaces virtual disks simultaneously, and fired up Performance Monitor to compare the IOPS of each VHDX file.  The red bar is the optimised D: drive with the higher write operations/second, and the green bar is the slower E: drive.

[Image: Performance Monitor – write operations/second per VHDX (red = D:, green = E:)]
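
If you want to capture the same data without Performance Monitor, the host-side counters can be sampled in PowerShell; a sketch, where the instance names will match your own VHDX paths:

    # Sample per-VHDX write IOPS on the Hyper-V host, once a second for 60 s.
    Get-Counter -Counter '\Hyper-V Virtual Storage Device(*)\Write Operations/Sec' `
        -SampleInterval 1 -MaxSamples 60 |
        ForEach-Object {
            $_.CounterSamples | Select-Object InstanceName, CookedValue
        }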

They say a picture paints a thousand words.  Let’s paint 2,000 words; here’s the same test over the length of a 60-second run.  Once again, red is the optimised D: drive and green is the E: drive.

[Image: 60-second Performance Monitor run – write operations/second per VHDX (red = D:, green = E:)]

Look what just 5 GB of SSD (yes, expensive enterprise-class SSD) can do for your write performance!  That’s going to greatly benefit services that have brief spikes in write activity – I don’t need countless spinning HDDs building up IOPS for those once-an-hour/day spikes, gobbling up capacity and power.  A few space/power-efficient SSDs with Storage Spaces Write-Back Cache will do a much more efficient job.

17 thoughts on “The Effects Of WS2012 R2 Storage Spaces Write-Back Cache On A Hyper-V VM”

  1. So, would you say that for virtualizing SQL servers, particularly in a tight-budget test lab, using Scale-Out File Servers is the way to go for storing your SQL Server’s VHDs?

  2. Hi,
    Thanks for the article 🙂
    How does Windows decide where to put the WBCache? And is the WBCache “striped” across multiple SSDs?
    I have a 2-way mirror with 3 columns, and I’d like to use WBCache. Right now it’s only 6 HDDs, but how many SSDs do I need to add to use the cache (not tiering)? 1? 2? 3? 6?
    If it works with 1, does it take advantage of 2 or more to get even higher performance?

    1. Windows manages it once you set the size. I haven’t seen details yet on the number requirement. I suspect one, but I have not asked about supportability or tested it.
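
    (For anyone checking an existing space: the configured cache size can be read per virtual disk – a one-liner sketch:)

        # The configured write-back cache size is a property of each virtual disk.
        Get-VirtualDisk | Select-Object FriendlyName, WriteCacheSize, NumberOfColumns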

  3. Have you come across any “best practice” or “best performance” settings for WBC? I know the default WBC size is 1 GB per disk, but have you raised it at all and seen better metrics?

    1. There probably are extreme cases. My lab isn’t big enough to push that far. The guidance from Microsoft is that 1 GB should be good enough for almost everyone; it is there to absorb unusual spikes, not sustained activity.

  4. Can you run numbers for a vdisk of just HDDs + WBC? I’m thinking of creating a lab with 22 HDDs and 2 SSDs for WBC.

  5. Hello Aidan,
    Been reading your blog for about 3 years now and you’ve inspired me to deploy Storage Spaces solutions at several of my clients. This was the most relevant blog posting of yours I could find pertaining to my question. Here it goes:

    I have two Dell R630s with Dell 12 Gb HBAs connected to a Dell MD1420. I have set up tiered Storage Spaces and a virtual disk with a 2 GB WBC, clustered with CSVs, yadda yadda. I have 4 SSDs and 8 HDDs in the tiered pool. It is mirrored with 2 columns. I get read speeds that definitely show multipathing is working (~22 Gbps), which is impressive. However, write speeds obviously show that they are not multipathed – they never go above 12 Gbps. I’m wondering if it is because writes to the WBC are NOT multipathed. I’ve also applied the LB MPIO policy and the registry keys according to the article, which I tracked down via your Petri posting:
    https://support.microsoft.com/en-us/kb/2921836

    Real question: is something wrong or is this expected behavior? I would expect my writes to be multipathed just like reads. Any help would be great.

    1. Hi Ryan, you’ll have to deal with Dell on this. I know that their h/w has some “interesting” characteristics.
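
    (For readers chasing the same symptom: the LB policy Ryan mentions can be set globally with the MPIO module – a sketch of that one configuration step only, not a diagnosis of the write-path behaviour:)

        # Set the global MSDSM load-balance policy to Least Blocks (LB) and verify.
        # Assumes the Multipath-IO feature is installed.
        Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LB
        Get-MSDSMGlobalDefaultLoadBalancePolicy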
