I previously wrote about a new feature in Windows Server 2012 R2 Storage Spaces called Write-Back Cache (WBC) and how it improved write performance from a Hyper-V host. What I didn't show you was how WBC improves performance where it counts: how does WBC improve the write performance of services running inside of a virtual machine?
So, I set up a virtual machine with three virtual hard disks:
- Disk.vhdx: Contains the guest OS (WS2012 R2 Preview) and is stored on SOFS2, a virtual Scale-Out File Server (SOFS) that is isolated from my tests. This is the C: drive in the VM.
- Disk1.vhdx: Attached to SCSI 0 0 and placed on \\SOFS1\CSV1. The share is stored on a tiered storage space (50 GB SSD + 150 GB HDD) with 1 column and a write cache of 5 GB. This is the D: drive in the VM.
- Disk2.vhdx: Attached to SCSI 0 1 and placed on \\SOFS1\CSV2. The share is stored on a non-tiered storage space (200 GB HDD) with 4 columns and no write cache. This is the E: drive in the VM.
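For reference, the two Storage Spaces virtual disks described above could be created along these lines. This is a hedged sketch, not the exact commands I ran: the pool, tier, and disk names are illustrative, and the Simple resiliency setting is an assumption.

```powershell
# Assumes a storage pool called "Pool1" containing both SSD and HDD physical disks.
$ssd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSDTier -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDDTier -MediaType HDD

# Tiered space backing CSV1: 50 GB SSD + 150 GB HDD, 1 column, 5 GB write-back cache.
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName TieredSpace1 `
    -StorageTiers $ssd, $hdd -StorageTierSizes 50GB, 150GB `
    -NumberOfColumns 1 -ResiliencySettingName Simple -WriteCacheSize 5GB

# Non-tiered space backing CSV2: 200 GB of HDD, 4 columns, no write cache.
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName PlainSpace1 `
    -Size 200GB -NumberOfColumns 4 -ResiliencySettingName Simple -WriteCacheSize 0
```

The `-WriteCacheSize` parameter is what enables (or disables, at 0) the WBC on each virtual disk.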
I set up SQLIO in the VM, with a test file on each of D: (Disk1.vhdx – WBC on the underlying volume) and E: (Disk2.vhdx – no WBC on the underlying volume). Once again, I ran SQLIO against each test file, one at a time, performing random 64 KB writes for 30 seconds – I copied/pasted the scripts from the previous test. The results were impressive:
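A test run of this shape would look something like the following sketch. The write type, block size, randomness, and duration come from the description above; the thread count and outstanding I/O depth are my assumptions, not values taken from my actual scripts.

```powershell
# -kW = write test, -frandom = random I/O, -b64 = 64 KB I/Os, -s30 = run for 30 seconds
# -t4 (threads) and -o16 (outstanding I/Os) are assumed values; -LS captures latency stats,
# -BN disables buffering so the test hits the storage rather than guest cache.
.\sqlio.exe -kW -frandom -b64 -s30 -t4 -o16 -LS -BN D:\testfile.dat
.\sqlio.exe -kW -frandom -b64 -s30 -t4 -o16 -LS -BN E:\testfile.dat
```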
Interestingly, these are better numbers than from the host itself! The extra layer of virtualization is adding performance in my lab!
Once again, Write-Back Cache has rocked, making the write performance 6.27 times faster. A few points on this:
- The VM’s performance with the VHDX on the WBC-enabled volume was slightly better than the host’s raw performance with the same physical disk.
- The VM’s performance with the VHDX on the WBC-disabled volume was nearly twice as good as the host’s raw performance with the same physical disk. That’s why we see a WBC improvement of 6 times instead of 11 times. This was a write job, so it wasn’t CSV Cache (which only caches reads). I suspect sector size (physical versus logical) might be what caused this.
I decided to tweak the scripts to run simultaneous tests of both VHDX files/shares/Storage Spaces virtual disks, and fired up Performance Monitor to view/compare the IOPS of each VHDX file. The red bar is the optimised D: drive with higher write operations/second, and the green bar is the slower E: drive.
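Running the two tests simultaneously can be done by launching both SQLIO processes without waiting on either; a minimal sketch, again with assumed thread/queue values:

```powershell
# Fire off both tests at once so Performance Monitor can compare the two VHDX files
# side by side (e.g. the host's "Hyper-V Virtual Storage Device" write counters).
Start-Process .\sqlio.exe -ArgumentList '-kW','-frandom','-b64','-s60','-t4','-o16','D:\testfile.dat'
Start-Process .\sqlio.exe -ArgumentList '-kW','-frandom','-b64','-s60','-t4','-o16','E:\testfile.dat'
```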
They say a picture paints a thousand words, so let’s paint 2,000; here’s the same test over the length of a 60-second run. Once again, red is the optimised D: drive and green is the E: drive.
Look what just 5 GB of SSD (yes, expensive enterprise-class SSD) can do for your write performance! That’s going to greatly benefit services that have brief spikes in write activity – I don’t need countless spinning HDDs to build up IOPS for those once-an-hour or once-a-day spikes, gobbling up capacity and power. A few space/power-efficient SSDs with Storage Spaces Write-Back Cache will do a much more efficient job.