2011.03.15

I joined the tail end of a webcast about private cloud computing, to be greeted by a demonstration of the Microsoft Assessment and Planning Toolkit (MAP) in a virtualisation conversion scenario.  That got me thinking, raised some questions, and brought back some memories.

Way back when I started working in hosting/virtualisation (it was VMware 3.x, BTW), I started a thread on a forum with a question.  It was something about storage sizing or planning, but I forget exactly what.  A VMware consultant (and a respected expert) responded by saying that I should have done an assessment of the existing environment before designing anything.

And there’s the problem.  In a hosting environment, you have zero idea of what your sales people are going to sell, what your customers are going to do with their VMs, and what the application loads are going to be.  And that’s because the sales people and customers have no idea of those variables either.  You start out with a small cluster of hosts/storage, and a deployment/management system, and you grow the host/storage capacity as required.  There is nothing to assess or convert.  You build capacity, and the business consumes it as it requires it, usually without any input from you. 

And after designing/deploying my first private cloud (as small as it is, for our internal usage) I’ve realised how similar the private cloud experience is to the hosting (public cloud, or think VPS) experience.  I’ve built host/storage capacity, I’ve given BI consultants/developers the ability to deploy their own VMs, and I have no idea what they will install, what they will use them for, or what the loads will be on CPU, storage, or network.  They will deploy VMs into the private cloud as they need them, they are empowered to install software as they require, and they’ll test/develop as they see fit, consuming resources in an unpredictable manner.  I have nothing to assess or convert.  MAP, or any other assessment tool for that matter, is useless to me.

So there I was, watching a webcast where MAP was presented, maybe for 5-10 minutes, at the end of a session on private cloud computing.  One of the recommended actions was to get assessing.  LOL; in a true private cloud, the manager of that cloud hasn’t a clue what’s to come.

And here’s a scary bit: you cannot plan for application-supported CPU ratios.  Workloads like SharePoint (1:1) and SQL Server (2:1) have vCPU:pCPU (virtual CPU:physical core) ratios that are recommended/supported (search on TechNet or see Mastering Hyper-V Deployment).
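
To show what those ratios mean for capacity, here is a minimal sketch in Python; the 16-core host is an invented example for illustration, and the ratios are the ones mentioned above.

```python
# Rough illustration of how vCPU:pCPU ratios cap what one host can run.
# The 16-core host is a made-up example; the ratios are the ones mentioned above.

PHYSICAL_CORES = 16  # e.g. a 2-socket, 8-core-per-socket host

RATIOS = {
    "SharePoint": 1,  # 1:1 vCPU:pCPU
    "SQL Server": 2,  # 2:1 vCPU:pCPU
}

for workload, ratio in RATIOS.items():
    max_vcpus = PHYSICAL_CORES * ratio
    print(f"{workload}: at most {max_vcpus} vCPUs on a {PHYSICAL_CORES}-core host")
```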

So what do you do, if you have nothing to assess?  How do you size your hosts and storage?  That is a very tough question and I think the answer will be different for everyone.  Here’s something to start with and you can modify it for yourself.


  1. Try to figure out how big your infrastructure might get in the medium/long term.  That will define how far your storage will need to be able to scale out.
  2. Size your hosts.  Take purchase cost, operating costs (rack space, power, network, etc.), licensing, and Hyper-V host sizing (384 VMs max per host, 1,000 VMs max per cluster, 12:1 vCPU:pCPU ratio) into account.  Find the sweet spot between many small hosts and fewer gigantic hosts (see the rough sketch after this list).
  3. Try to figure out the sweet spot for SQL licensing.  Are you going per-CPU on the host (maybe requiring a dedicated SQL VM Hyper-V cluster), per CPU in the VM, or server/CAL?  Remember, if your “users” can install SQL for themselves then you lose a lot of control and may have to license per CPU on every host.
  4. Buy new models of equipment that are early in their availability windows.  It might not be a requirement to have 100% identical hardware across a Hyper-V cluster, but it sure doesn’t hurt when it comes to standardisation for support and performance.  Buying last year’s model (e.g. HP G6) because it’s a little cheaper than this year’s (e.g. HP G7) is foolish; that G6 will probably only be manufactured for 18 months before stocks disappear, and you probably bought it at the tail end of its life.
  5. Start with something small (a bit of storage with 2-3 hosts) to meet immediate demand and leave capacity for growth.  You can add hosts, disks, and disk trays as required.  This is why I recommended buying the latest models: the new hosts or disk trays you add later can be identical to the equipment you already bought – well, you’ve increased the odds of that, to be honest.
  6. Smaller environments might be ok with 1 Gbps networking.  Larger environments may need to consider fault tolerant 10 Gbps networking, allowing for later demand.
  7. You may find yourself revisiting step 1 when you’ve gone through the cycle because some new fact pops up that alters your decision making process.

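For step 2, a back-of-the-envelope model like the one below can help compare candidate host sizes.  It is a minimal sketch: the 384 VMs/host and 12:1 figures come from the list above, while the prices, operating costs, and typical VM profile are invented placeholders that you would replace with your own numbers.

```python
# Back-of-the-envelope cost-per-VM comparison for a few candidate host sizes.
# All prices, running costs, and VM profiles below are invented placeholders.

VCPU_RATIO = 12          # 12:1 vCPU:pCPU, as per the list above
MAX_VMS_PER_HOST = 384   # Hyper-V host maximum, as per the list above
AVG_VCPUS_PER_VM = 2     # assumption about the typical VM
AVG_RAM_PER_VM_GB = 4    # assumption about the typical VM

# (name, cores, RAM GB, purchase cost, yearly running cost) - all made up
candidate_hosts = [
    ("Small host", 12, 96, 6000, 1500),
    ("Medium host", 24, 192, 11000, 2200),
    ("Large host", 48, 384, 24000, 3500),
]

for name, cores, ram_gb, capex, opex_per_year in candidate_hosts:
    by_cpu = (cores * VCPU_RATIO) // AVG_VCPUS_PER_VM
    by_ram = ram_gb // AVG_RAM_PER_VM_GB
    vms = min(by_cpu, by_ram, MAX_VMS_PER_HOST)
    three_year_cost = capex + 3 * opex_per_year
    print(f"{name}: ~{vms} VMs, ~{three_year_cost / vms:.0f} per VM over 3 years")
```
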
To be honest, you aren’t sizing; you’re providing access to elastic capacity that the business can (and will) consume.  It’s like building a baseball field in Iowa.  You build it, and they will come.  And then you need to build another field, and another, and another.  The difference is that in baseball you know there are 9 active players per team.  You’ve no idea whether your users will deploy 10 lightly used VMs with 10 GB RAM each or 100 heavily used VMs with 1 GB RAM each on a host.

I worked in hosting with virtualisation for 3 years.  The not knowing wrecks your head.  The only way I really got to grips with things was to have in-depth monitoring.  System Center Operations Manager gave me that.  Using PRO Tips for VMM integration, I also got dynamic load balancing.  Now I at least knew how things behaved, and I also had a trigger for buying new hardware.
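
That trigger for buying new hardware can be as crude as a utilisation threshold with failover headroom built in.  Here is a minimal sketch of the idea; the host count, RAM figures, and threshold are assumptions, and in reality the utilisation numbers would come from Operations Manager reporting rather than a script like this.

```python
# Toy capacity trigger: flag when the cluster can no longer lose a host
# and still stay under a target utilisation. All figures are placeholders.

HOSTS = 4                     # hosts currently in the cluster
HOST_RAM_GB = 192             # RAM per host (assumed identical hosts)
TARGET_UTILISATION = 0.80     # don't run hotter than this after a host failure

def needs_more_capacity(ram_in_use_gb: float) -> bool:
    # Capacity that remains if one host fails (N+1 headroom)
    usable_after_failure = (HOSTS - 1) * HOST_RAM_GB * TARGET_UTILISATION
    return ram_in_use_gb > usable_after_failure

print(needs_more_capacity(400.0))  # False - plenty of headroom left
print(needs_more_capacity(500.0))  # True - time to order another host
```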

Finally comes the bit that really will vex the IT Pro: cross-charging.  How the hell do you cross-charge for this stuff?  Using third-party solutions, you can measure things like CPU usage, memory usage, and storage usage, and bill for them.  Those are all very messy things to cost – you’d need a team of accountants for that.  SCVMM SSP 2.0 gives a simple cross-charging system based on the GB of RAM/storage that are reserved or used, as well as a charge per template deployed (licensing).  Figuring out the cost of a GB of RAM/storage and the cost of a license is easy.
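
The arithmetic of that simple reserved-GB model is easy to show.  The sketch below uses invented rates and an invented example VM; they are not SSP 2.0 defaults, just placeholders to illustrate the calculation.

```python
# Simple chargeback in the style of reserved RAM/storage plus a template fee.
# The rates and the example VM are invented, purely to show the arithmetic.

RATE_PER_GB_RAM = 10.0        # per GB of reserved RAM per month (placeholder)
RATE_PER_GB_STORAGE = 0.50    # per GB of reserved storage per month (placeholder)
TEMPLATE_FEES = {"Windows Server": 25.0, "Windows + SQL Standard": 150.0}

def monthly_charge(ram_gb: int, storage_gb: int, template: str) -> float:
    return (ram_gb * RATE_PER_GB_RAM
            + storage_gb * RATE_PER_GB_STORAGE
            + TEMPLATE_FEES.get(template, 0.0))

# e.g. a 4 GB RAM, 60 GB disk VM deployed from the SQL template
print(monthly_charge(4, 60, "Windows + SQL Standard"))  # 40 + 30 + 150 = 220.0
```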

However, figuring out the cost of installed software (like SharePoint) is not; who’s to say whether the user joins the VM to your directory, or whether a ConfigMgr agent (or whatever) ever gets to audit it?  Sometimes you just gotta trust that they’re honest and that their business unit takes care of things.

EDIT:

I want to send you over to a post on Working Hard in IT.  There you will read a completely valid argument about the need to plan and size.  I 100% agree with it … when there’s something to measure and convert.  So please do read that post if you are doing a traditional virtualisation deployment to convert your infrastructure.  If you read Mastering Hyper-V Deployment, you’ll see how much I stress that stuff too.  And it scares me that there are consultants who refuse to assess, often using the wet finger in the wind approach to design/sizing.
