Lots of out-loud thinking here…
If you put a gun to my head right now and asked me to pick a hardware virtualization solution for VDI then I honestly wouldn’t pick Hyper-V. I probably would go with VMware. Don’t get me wrong; I still prefer Hyper-V/System Center for server virtual machines. So why VMware for VDI?
- I can still manage it using Virtual Machine Manager (VMM 2008 R2 can manage ESX hosts via vCenter).
- It does have advanced memory management features.
The latter is important because I feel that:
- Memory is a big expense for host servers and there’s a big difference between PC memory cost and data centre memory cost.
- Memory is usually the bottleneck on low end virtualisation.
Windows Server 2008 R2 Service Pack 1 will change my mind when it RTMs, thanks to Dynamic Memory. What will my decision-making process be then? We do have options: you could push out VMware (free ESXi) hosts now and switch to Hyper-V then if you have to.
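To put rough numbers on why memory management matters so much for VDI, here's a back-of-the-envelope sketch. The per-VM figures and headroom percentage are illustrative assumptions on my part, not measurements:

```python
# Rough host-memory sizing for a VDI pool: static allocation per VM vs the
# kind of overcommit that Dynamic Memory / ESX memory management allows.
# All figures below are illustrative assumptions, not measured values.
vms = 100                     # desktops in the pool (assumed)
static_gb_per_vm = 2.0        # fixed allocation per desktop VM (assumed)
avg_working_set_gb = 1.2      # average memory the VMs actually use (assumed)
headroom = 1.15               # 15% buffer on top of the working set (assumed)

static_total = vms * static_gb_per_vm
dynamic_total = vms * avg_working_set_gb * headroom

print(f"Static allocation:  {static_total:.0f} GB of host RAM")
print(f"Dynamic/overcommit: {dynamic_total:.0f} GB of host RAM")
print(f"Saving:             {1 - dynamic_total / static_total:.0%}")
```

With these made-up numbers the dynamic approach needs roughly a third less host RAM, and host (data centre) RAM is exactly where the money goes.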
Will I want to make the VDI virtual machines highly available?
Some organizations will want to keep their desktop environment up and running, despite any scheduled or emergency maintenance. This will obviously cost more money because it requires some form of shared storage. Thin provisioning and deduplication will help reduce the costs here. But maybe a software solution like that from DataCore is an option?
Clustering will also be able to balance workloads, thanks to OpsMgr and VMM (PRO tips).
Standalone hosts will use cheaper internal disk and won’t require redundant hosts.
Will I have a dedicated VDI Cluster?
My thinking is that VDI should be isolated from server virtualisation. This will increase hardware costs slightly, but maybe I can reduce this by using more economical hardware. Let's face it, VDI virtual machines won't have the same requirements as SQL VMs.
What sort of disk will my VDI machines be placed on?
OK, let me start an argument here. Let's start with RAID: I'm going RAID5. My VDI machines will generate next to no disk writes, because data storage will be on file servers using file shares and redirected folders. RAID5 is probably 40% cheaper than RAID10.
However, if I am dynamically deploying new VMs very frequently (for business reasons) then RAID10 is probably required. It'll probably make new VM deployment up to 75% faster.
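The cost and performance trade-off above can be sketched with simple RAID arithmetic. The disk size and capacity target are assumed figures for illustration; the write-penalty values are the standard ones for each RAID level:

```python
# Illustrative comparison of RAID5 vs RAID10 for a VDI volume.
# Disk size and usable-capacity target are assumptions, not from the post.
import math

def disks_needed(usable_tb, disk_tb, raid):
    """Number of disks needed to reach a usable capacity at a RAID level."""
    data_disks = math.ceil(usable_tb / disk_tb)
    if raid == "RAID5":
        return data_disks + 1      # one disk's worth of capacity lost to parity
    if raid == "RAID10":
        return data_disks * 2      # every disk is mirrored
    raise ValueError(raid)

WRITE_PENALTY = {"RAID5": 4, "RAID10": 2}  # back-end IOs per front-end write

usable, disk = 4, 1  # 4 TB usable, built from 1 TB disks (assumed)
r5 = disks_needed(usable, disk, "RAID5")    # 5 disks
r10 = disks_needed(usable, disk, "RAID10")  # 8 disks
print(f"RAID5: {r5} disks, RAID10: {r10} disks")
print(f"RAID5 uses {1 - r5 / r10:.0%} fewer disks")
print(f"Write penalty: RAID5 x{WRITE_PENALTY['RAID5']}, "
      f"RAID10 x{WRITE_PENALTY['RAID10']}")
```

At this size RAID5 needs roughly 40% fewer spindles, but every write costs four back-end IOs instead of two, which is why a heavy VM-deployment workload tips the balance back towards RAID10.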
What type of disk? I think SATA will do the trick. It’s big and cheap. I’m not so sure that I really would need 15K disk speeds. Remember, the data is being stored on a file server. I’m willing to change my mind on this one, though.
The host operating system & edition?
OK: if the Hyper-V host servers are part of the server virtual machine cluster then I go with Windows Server 2008 R2 Datacenter Edition, purely because I have to (for server VM Live Migration).
However, I prefer having a dedicated VDI cluster. Here’s the tricky bit. I don’t like Server Core (no GUI) because it’s a nightmare for hardware management and troubleshooting. If I had to push a clustered host out now for VDI then I would use Windows Server 2008 Enterprise Edition. That will give me a GUI, Failover Clustering, and Live Migration.
If I had time, then I would prepare an environment where I could deploy Hyper-V Server 2008 R2 from something like WDS or MDT. That would allow me to treat a clustered host as a commodity. If the OS breaks: five minutes of troubleshooting, followed by a rebuild with no questions asked (use VMM maintenance mode to flush VMs off if necessary).
Standalone hosts are trickier. You cannot turn them into a commodity because of all the VMs on them; there's a big time investment there, so they lose points for this. That might force me into troubleshooting an OS (parent partition) issue if it happens (to be honest, I cannot think of one that I've had in two years of running Hyper-V). That means a GUI. If my host has 32GB or less of RAM then I choose W2008 R2 Standard Edition; otherwise I go with W2008 R2 Enterprise Edition.
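Pulling the edition choices together, here's a small helper that simply encodes the rules laid out above (shared cluster → Datacenter, dedicated VDI cluster → Enterprise, standalone → Standard up to 32GB RAM). It's a summary of this post's reasoning, not official licensing guidance:

```python
# Encodes the host-OS edition rules from this post (W2008 R2 era).
# Not licensing advice - just the decision logic described above.
def host_edition(clustered: bool, shares_server_vm_cluster: bool,
                 ram_gb: int) -> str:
    if clustered and shares_server_vm_cluster:
        # Part of the server VM cluster: Datacenter, purely because I have to.
        return "Windows Server 2008 R2 Datacenter"
    if clustered:
        # Dedicated VDI cluster: GUI + Failover Clustering + Live Migration.
        return "Windows Server 2008 R2 Enterprise"
    # Standalone host: Standard Edition tops out at 32GB of RAM.
    if ram_gb <= 32:
        return "Windows Server 2008 R2 Standard"
    return "Windows Server 2008 R2 Enterprise"

print(host_edition(clustered=False, shares_server_vm_cluster=False, ram_gb=24))
```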
I warned you that I was thinking out loud. It's not all that structured, but it might help you ask some questions if you're thinking about what to do for VDI hosts.
This blog post is the property of Aidan Finn (@joe_elway / http://www.aidanfinn.com) and may not be reused in any manner without prior consent of Aidan Finn. You may quote one paragraph from this blog post if you link to the original blog post.