… and I bet if you don’t do one you end up on the TechNet Forums or contacting someone like me for help. Also known as the blog post where I laugh openly at those who assume things about virtualisation.
Last week, I did a tour of 4 cities in Ireland talking to Microsoft partners about how to improve their deployments of Hyper-V. One subject kept coming up, over and over: the Assessment … or to put it more accurately, the fact that an assessment is rarely done in a virtualisation project.
There is a reason why I dedicated an entire chapter of Mastering Hyper-V Deployment to the subject of the assessment. I can guarantee it wasn’t just to fill up 20-40 pages.
The assessment accomplishes a critical discovery & measurement step at the start of a virtualisation project (Hyper-V, XenServer, or vSphere):
- Discovery of Servers: find out what servers are on the network. I have been on even mid-sized client sites where servers had been forgotten about. In fact, I’ve been on one site (not recently, admittedly) where they had some sort of appliance on the network that the client’s admins were afraid to remove or mess with, because anyone who knew what it did had long since retired. Quite simply, you need to find out what machines are out there and what applications are running on them.
- Application Virtualisation Support Statements: I bet hardly anyone even considers this. I bet the most common thought process is – “sure, it’s only Windows or Linux, and it’s got to be the same in a VM”. If you assume something then you should assume that you are wrong, and I don’t care how experienced or expert you consider yourself or your employees to be. If you fall back on “we know the requirements of our environment” then you are assuming, and you are wrong. Server products and those who publish them have support statements. Domain controllers, SQL Server, SharePoint, Exchange Server, Oracle, Lotus Notes, and so on, all have support statements for virtualisation. They impact whether a product can be virtualised, what virtualisation software it can run on (see Oracle), what features of the virtualisation product it can use, how you should build a virtual machine running that application, and so on. The Hyper-V product group might support something in production, but does the application vendor also support it? You’ll only have yourself to blame if you assume.
- Measurement: “Measure twice and cut once”. That’s the best lesson I learned in woodwork class in school. There are things to understand here. Some people assume (there’s that word again) that there is a “standard” virtualisation build. Pfft! I’m tired of answering the “what’s a good spec for a small/mid business?” question. You need what you need. If your apps’ cumulative processor requirement is 8 quad core CPUs then that’s what’s required. There is no magic compression. The saving with virtualisation is that you run many app workloads on fewer CPUs and server chassis. If an app requires 50% of a quad core CPU in rack-server form then it needs that capacity in VM form. The only way to find out what is required is to take the list of servers from the discovery steps above and measure resource utilisation. Only with this information can you correctly size and design any virtualisation environment.
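The sizing arithmetic behind that measurement step can be sketched in a few lines. To be clear, everything below is illustrative: the server names, the measured figures, the host spec, and the 75% headroom policy are all invented assumptions, not a recommendation — your numbers come from your own measurements (e.g. from the MAP Toolkit).

```python
import math

# Hypothetical measured peak requirements per server, gathered by an
# assessment tool. "cpu_cores" is core-equivalents of a reference core.
workloads = [
    {"name": "sql01", "cpu_cores": 2.0, "ram_gb": 16},
    {"name": "web01", "cpu_cores": 0.5, "ram_gb": 4},
    {"name": "dc01",  "cpu_cores": 0.5, "ram_gb": 4},
    {"name": "app01", "cpu_cores": 1.5, "ram_gb": 8},
]

# Assumed host spec and headroom policy -- adjust to your hardware.
HOST_CORES = 8      # e.g. dual quad-core host
HOST_RAM_GB = 64
HEADROOM = 0.75     # plan to consume at most 75% of each host

def hosts_needed(workloads, host_cores, host_ram_gb, headroom):
    # Sum the measured demand, then size against the usable (headroom-
    # adjusted) capacity of one host for each resource.
    total_cores = sum(w["cpu_cores"] for w in workloads)
    total_ram = sum(w["ram_gb"] for w in workloads)
    by_cpu = math.ceil(total_cores / (host_cores * headroom))
    by_ram = math.ceil(total_ram / (host_ram_gb * headroom))
    # Whichever resource is the bottleneck dictates the host count.
    return max(by_cpu, by_ram)

print(hosts_needed(workloads, HOST_CORES, HOST_RAM_GB, HEADROOM))  # → 1
```

A real design would add hosts for failover reserve (N+1 or better) on top of this raw capacity figure — the point is simply that the inputs are measured, not guessed.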
The assessment feeds into so much that it’s ridiculous. Only with this data can you make design decisions based on size and performance. How many hosts do you need? How much CPU do you need? How much memory do you need? If you’re a systems integrator, sure you can oversell the customer by a few servers or terabytes of disk – but remember that you’re making a paltry 5-15% margin on that and you’ve drained the customer’s ability to pay for more profitable services. And that decision to deploy passthrough disks or 1 VM/LUN for performance reasons – was it justified? What were the IOPS requirements of the original installation? Heck, do you know the difference between an in-server array/LUN and an in-SAN diskgroup/vDisk?
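To show why measured IOPS matter for that storage decision, here is a back-of-envelope sketch of the classic spindle-count estimate. It is not a substitute for vendor sizing tools, and every figure in it (2000 front-end IOPS, 70% reads, ~180 IOPS per 15K disk) is an illustrative assumption; only the RAID write penalties (roughly 2 for RAID 10, 4 for RAID 5) are standard rules of thumb.

```python
import math

def spindles_needed(measured_iops, read_ratio, raid_write_penalty,
                    iops_per_disk):
    # Reads hit the back-end disks once; each front-end write costs
    # multiple back-end I/Os depending on the RAID level.
    backend_iops = (measured_iops * read_ratio
                    + measured_iops * (1 - read_ratio) * raid_write_penalty)
    return math.ceil(backend_iops / iops_per_disk)

# Hypothetical example: 2000 measured front-end IOPS, 70% reads,
# RAID 5 (write penalty 4), 15K disks at roughly 180 IOPS each.
print(spindles_needed(2000, 0.7, 4, 180))  # → 22
```

Run the same numbers with a RAID 10 penalty of 2 and the answer drops to 15 spindles — which is exactly the kind of trade-off you can only evaluate if you measured the original installation’s IOPS in the first place.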
By the way, the responses to the Great Big Hyper-V Survey (skewed, because the respondents were better informed than the average consumer) showed that less than 50% actually did an assessment. Pretty silly, considering that most deployments are not huge, and the Microsoft Assessment and Planning Toolkit is free and would require only a few hours to get some meaningful data.
Something tells me I’ve wasted a lot of valuable electrons. I figure that the “experts” out there “who know all this already” couldn’t give a stuff about doing their jobs correctly and giving their customers a good production environment. I’ve gotten to the point with this topic where politeness has to stop and harsh words have to be spoken. And if I hear you say that you assumed something and that was justification for not doing an assessment then you only have yourself to blame.
This blog post is the property of Aidan Finn (@joe_elway / http://www.aidanfinn.com) and may not be reused in any manner without prior consent of Aidan Finn. You may quote one paragraph from this blog post if you link to the original blog post.