The Virtualisation Smackdown – Hyper-V VHDX Scales Out to 64 TB – Yes, I said 64 Terabytes!

I was gobsmacked when I learned this week that the new Windows 8/Windows Server 2012 (WS2012) format for virtual disks, VHDX, would have a maximum size of 64 TB.  64 TB!  Damn, I was impressed with the Build announcement that it would go out to 16 TB.  Even then, it was dwarfing the paltry 2040 GB that vSphere 5.0 VMDK can do.  Wow, Hyper-V has vSphere smacked down on storage scalability; isn’t that a shocker!?!?!

Back to the serious side of things … what does this mean?  One of the big reasons that people have implemented virtualisation (28.19% in the Great Big Hyper-V Survey of 2011) is flexibility.  What makes that possible is that virtual machines are normally just files, unbound to the hardware they reside on, unlike legacy physical OS installations and data storage.  A limiting factor on that has been the scalability of virtual disks.  Both VHD (pre-Windows 8 Hyper-V) and VMDK (all current versions of vSphere) are limited to 2040 GB.  The alternative is Raw Device Mapping (vSphere) or Passthrough disk (Hyper-V).

I hate this type of storage.  It’s bound to hardware because it’s just a raw LUN presented to a VM, so it creates a hardware boundary that limits mobility and flexibility, and precludes other things that we can do such as Hyper-V Replica, snapshots, VSS backups of running VMs, etc.  Way too often I see people using Passthrough for “performance” reasons (usually with no assessment done, based on pure guesswork) without realising that even VHD has great performance (and I cannot wait for VHDX performance results to be published publicly).  The only real reason to use Passthrough disk, in my opinion, has been to scale a VM’s LUN beyond 2040 GB.

That changes with Windows Server 2012 Hyper-V.  I am thinking that Passthrough disk will become one of those things that is theoretical to 99.999999% of us.  It’ll be that exam question that no one can answer because they never do it.  Think about it: a 64 TB virtual disk that performs nearly as well as the physical disk it sits on.  Wow!

Question: So which virtualisation platform isn’t scalable or enterprise ready? *sniggers* I cannot wait to see the excuses that the competition come up with next.

There are other benefits to the VHDX format:

  • “Larger block sizes for dynamic and differencing disks, which allows these disks to attune to the needs of the workload.
  • A 4-KB logical sector virtual disk that allows for increased performance when used by applications and workloads that are designed for 4-KB sectors.
  • The ability to store custom metadata about the file that the user might want to record, such as operating system version or patches applied.
  • Efficiency in representing data (also known as “trim”), which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires physical disks directly attached to a virtual machine or SCSI disks, and trim-compatible hardware.)”
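To put those format options in concrete terms, here is a sketch of how you might create such a disk with the `New-VHD` PowerShell cmdlet in Windows Server 2012. The path and sizes below are examples only, not recommendations — tune them for your own workload:

```powershell
# Sketch: create a dynamic VHDX using the new format's headline features -
# the 64 TB maximum size and a 4 KB logical sector size.
# "D:\VHDs\BigData.vhdx" is an example path, not a convention.
New-VHD -Path "D:\VHDs\BigData.vhdx" `
        -SizeBytes 64TB `
        -Dynamic `
        -LogicalSectorSizeBytes 4096
```

Because the disk is dynamic, the 64 TB is a ceiling rather than an up-front allocation; the VHDX file grows on the physical storage as data is written inside the VM.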

There are other things I’d love to share about VHDX, but I’m not sure of their NDA status at the moment, so I’ll be sitting on those facts until later.  Being an MVP ain’t easy :)

4 thoughts on “The Virtualisation Smackdown – Hyper-V VHDX Scales Out to 64 TB – Yes, I said 64 Terabytes!”

  1. One of the primary reasons we have to do passthrough is to implement SAN-based, application-aware snapshots. For example, if you want to snapshot your SQL or Exchange database, the only way to do it is via passthrough. Hopefully this will change in the future because, like you mention, it greatly reduces VM mobility.

  2. Regarding 64TB disks, just because you CAN do something doesn’t mean that you should. Storage systems have been capable of creating 16TB LUNs and file systems for many years, and now many of them support even larger capacities. Outside of some specific use-cases, very rarely do you hear any vendor actually recommend that you do it. The primary concern is that a dirty shutdown can cause a consistency check at either the host layer or storage layer of a 64TB volume. How long will that take – nobody knows. Probably not a good idea for your most critical business data.

    And there are several reasons you should use a RDM or pass-thru disk besides simply size. Application-consistent snapshots at the hardware layer being a primary one.

    1. I’ve heard the consistency check and snapshot arguments before.

      Regarding the consistency check: Windows 8 now includes an online scan-and-repair process. A disk does not have to be offline to be scanned or fixed.

      As for the snapshot thing: By using a VHDX you can VSS snap (backup) an entire VM for the same result, using a fully Hyper-V compliant backup solution such as DPM or many other partner products. This has the benefit of being able to leverage SAN snapshot functionality for speed, depending on h/w VSS provider support, and being able to “snap” a VM with support from the guest application vendor, while maintaining consistency of VSS-aware applications. Going with physical LUNs ties down the physical location of the data and the VM, thus removing the flexibility of the VM. Going with VHDX gives you scale, snapshot (via backup), and retains mobility and flexibility.

      1. That is exactly right! Could not have said this better. Until you have your storage virtualised, you are really only halfway there, because you are missing way too many features that virtualisation brings.

        And there are absolutely no issues with application consistency or any application-specific logic requirements… I am with one of those “partner backup vendors”, and we do application-aware, image-level backup of VMs with VHD disks. Works perfectly!
