What’s The Maximum Number Of Hyper-V VMs You Can Put In Cluster Shared Volume?

“What’s the rule of thumb on the number of VMs you should put in a CSV?”  That’s a question I am asked on a regular basis.  We need to dig into this.

When you have a cluster of virtualisation hosts using shared storage systems, you need some sort of orchestration to say which host should access what folders and files.  That’s particularly important during Live Migration and failover.  Without orchestration you’d have chaos, locks, failed VMs, and corruption.

One virtualisation cluster file system out there does its orchestration in the file system itself.  That, in theory, places limits on how that file system can scale out.

Microsoft took a different approach.  Each Cluster Shared Volume (CSV) has an orchestrator, known as the CSV coordinator, which is automatically created and made fault tolerant; it is a highly available function that runs on one of the clustered hosts.  By not relying on the file system for this orchestration, Microsoft believes it has a more scalable and better performing option.
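
If you want to see which node is currently coordinating each CSV, or move that role around, the Failover Clustering PowerShell module will show you.  A minimal sketch, run on a cluster node (the CSV and host names below are just examples):

    # The OwnerNode of a CSV is the node currently running its coordinator
    Import-Module FailoverClusters
    Get-ClusterSharedVolume | Select-Object Name, OwnerNode, State

    # Move coordination of one CSV to another clustered host (example names)
    Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node "Host2"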

How scalable?  A few years ago, EMC (I believe it was EMC, the owner of VMware, but my memory could be failing me) stood on a stage at a Microsoft conference and proclaimed that they couldn’t find a scalability limit for CSV in terms of performance on their storage platform.  In other words, you could have a monstrous CSV and place lots and lots of 64 TB VHDX files on there (GPT volumes grow up to 16 EB).
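
To put that in context, creating one of those 64 TB dynamic VHDX files is a one-liner with the Hyper-V PowerShell module; the path below is only an example of a typical CSV mount point:

    # 64 TB is the VHDX maximum; a dynamic disk only consumes physical space as data is written
    New-VHD -Path "C:\ClusterStorage\Volume1\BigData.vhdx" -SizeBytes 64TB -Dynamic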

OK; back to the question at hand: how many VMs should I place on a CSV?  I have to give you the consultant’s answer: it depends.  The fact is that there is no right answer.  This isn’t VMware, where there are prescribed limits and you should create lots and lots of “little” VMFS volumes.

First, I’d say you should read my paper on CSV and backup.  Keep in mind that the paper was written for Windows Server 2008 R2; Windows Server 2012 doesn’t use redirected I/O mode when backing up VMs from a CSV.  In that document I talk about a process I put together for CSV design and VM placement.

Next, have a look at Fast Track, Microsoft’s cloud architecture.  In there they have a CSV design where OS, page file, sequential files, and non-sequential files are split into VHDs on different CSVs.  To me, this complicates things greatly.  I prefer simplicity.  Plus I can’t imagine the complexity of the deployment automation for this design.

An alternative is to look at a rule of thumb that many are using: 1 CSV for every host in the cluster (or every active site in a multi-site cluster).  Beware here: you don’t want to run out of SCSI-3 reservations (every SAN has an unadvertised limit) because you’ve added too many CSVs on your SAN (read the above paper to learn more).
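
If you want to see how your cluster compares to that rule of thumb, a quick count of hosts versus CSVs will tell you.  A rough sketch, run on any node of the cluster:

    # Compare the number of clustered hosts to the number of CSVs
    $hostCount = (Get-ClusterNode).Count
    $csvCount  = (Get-ClusterSharedVolume).Count
    "Hosts: $hostCount, CSVs: $csvCount"
    if ($csvCount -lt $hostCount) {
        "Fewer CSVs than hosts - the 1-per-host rule of thumb suggests adding more"
    }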

My advice: keep it simple.  Don’t overthink things.  Remember, Hyper-V is not VMware and VMware is not Hyper-V.  They might both be enterprise virtualisation platforms, but we do things differently on each because they work differently.

4 thoughts on “What’s The Maximum Number Of Hyper-V VMs You Can Put In Cluster Shared Volume?”

  1. Hi, I will be creating an 8-node cluster with about 7.1 TB worth of VMs. We currently only have 1 CSV volume that is shared amongst 13 hosts. What is your recommendation if I have 8 nodes with 7 TB of VHDs and about 100 VMs?
    Would you recommend 8 separate CSV volumes, or what would be my best option here?

  2. This environment will ONLY be used for 10 Hyper-V hosts. The workload will be for a non-production environment with a mixed workload and no heavy hitters. I want to keep pools/spaces as simple as possible. I am thinking only 1 pool (72 disks). Are you saying the rule of thumb would be 10 3 TB CSVs for the following configuration? What would be the harm (performance, maintenance) in going lower, say 1 or 2 CSVs? There is not a lot of guidance around the number of CSVs.

    10 Windows Server 2012 R2 Hyper-V hosts (latest gen hardware)
    2 Windows Server 2012 R2 SOFS nodes (latest gen hardware)
    3 Dataon DNS-1640 enclosures with 6 x 400 GB SSDs and 18 x 1.2 TB 10K SAS disks per enclosure (~30 TB usable) – plan to use tiering
    12G SAS cards from each node to each enclosure (6 cards total)
    Chelsio 10G RDMA NICs for HV and SOFS nodes
    2 new dedicated Cisco 10 GbE switches

    Let’s call it this: http://www.dataonstorage.com/images/PDF/Solutions/DandD/DataON_DandD_MX-3240-T3_Windows_Server_2012_R2_Storage_Spaces.pdf

    Thanks

    1. You’ve mixed up two scenarios: Hyper-V + SOFS and Hyper-V + SAN. In your scenario, the CSVs are connected to by the 2 SOFS nodes. Therefore, best practice is at least 1 CSV per SOFS node. This is to deal with mirroring/redirected I/O, and to get the best performance via CSV balancing and SMB redirection.
