The Importance of a Virtualisation Assessment …

… and I bet that if you don’t do one, you’ll end up on the TechNet Forums or contacting someone like me for help.  Also known as the blog post where I laugh openly at those who assume things about virtualisation.

Last week, I did a tour of 4 cities in Ireland talking to Microsoft partners about how to improve their deployments of Hyper-V.  One subject kept coming up, over and over: the assessment … or to put it more accurately, the fact that an assessment is rarely done in a virtualisation project.

There is a reason why I dedicated an entire chapter of Mastering Hyper-V Deployment to the subject of the assessment.  I can guarantee that it wasn’t just to fill up 20-40 pages.

The assessment accomplishes a critical discovery & measurement step at the start of a virtualisation project (Hyper-V, XenServer, or vSphere):

  1. Discovery of Servers: find out what servers are on the network.  I have been on even mid-sized client sites where servers had been forgotten about.  In fact, I’ve been on one site (not recently, admittedly) where they had some sort of appliance on the network that the client’s admins were afraid to remove or mess with because anyone who knew what it did had long since retired.  Quite simply, you need to find out what machines are out there and what applications are running on them.
  2. Application Virtualisation Support Statements: I bet hardly anyone even considers this.  I bet the most common thought process is – “sure, it’s only Windows or Linux, and it’s got to be the same in a VM”.  If you assume something then you should assume that you are wrong, and I don’t care how experienced or expert you consider yourself or your employees to be.  If you fall back on the “we know the requirements of your/our environment” excuse then you are assuming, and you are wrong.  Server products and those who publish them have support statements.  Domain controllers, SQL Server, SharePoint, Exchange Server, Oracle, Lotus Notes, and so on, all have support statements for virtualisation.  They impact whether a product can be virtualised, what virtualisation software it can run on (see Oracle), what features of the virtualisation product it can use, how you should build a virtual machine running that application, and so on.  The Hyper-V product group might support something in production, but does the application vendor also support it?  You’ll only have yourself to blame if you assume.
  3. Measurement: “Measure twice and cut once”.  That’s the best lesson I learned in woodwork class in school.  There are a few things to understand here.  Some people assume (there’s that word again) that there is a “standard” virtualisation build.  Pfft!  I’m tired of answering the “what’s a good spec for a small/mid business?” question.  You need what you need.  If your apps’ cumulative processor requirement is 8 quad core CPUs then that’s what’s required.  There is no magic compression.  The saving you get with virtualisation is that you are running many app workloads on fewer CPUs and server chassis.  If an app requires 50% of a quad core CPU in rack server form then it needs that capacity in VM form.  The only way to find out what is required is to take the list of servers from the discovery step above and measure resource utilisation.  Only with this information can you correctly size and design a virtualisation environment.
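To make the measurement step concrete, here is a minimal sketch of the kind of roll-up you do with the measured data once you have it; every figure in it (the per-server utilisation numbers and the host specification) is a made-up example, not a recommendation.

```python
# Minimal host-sizing sketch: roll up measured utilisation and divide by host capacity.
# All of the per-server measurements and the host specification are made-up examples.
import math

# Measured average utilisation per existing server: (CPU cores' worth of load, RAM in GB)
measured = {
    "FILE01": (0.5, 2.0),
    "SQL01":  (2.0, 8.0),
    "APP01":  (1.0, 4.0),
    "WEB01":  (0.5, 2.0),
}

host_cores = 8       # e.g. a dual quad-core host
host_ram_gb = 48     # RAM left for VMs after the parent partition gets its share

total_cores = sum(cpu for cpu, _ in measured.values())
total_ram_gb = sum(ram for _, ram in measured.values())

hosts_for_cpu = math.ceil(total_cores / host_cores)
hosts_for_ram = math.ceil(total_ram_gb / host_ram_gb)

# Take the larger of the two and add one host for failover (N+1).
hosts_needed = max(hosts_for_cpu, hosts_for_ram) + 1
print(f"CPU needs {hosts_for_cpu} host(s), RAM needs {hosts_for_ram} host(s); buy {hosts_needed} (N+1).")
```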

The assessment feeds into so much that it’s ridiculous.  Only with this data can you make design decisions based on size and performance.  How many hosts do you need?  How much CPU do you need?  How much memory do you need?  If you’re a systems integrator, sure, you can oversell the customer by a few servers or terabytes of disk – but remember that you’re making a paltry 5-15% margin on that and you’ve drained the customer’s ability to pay for more profitable services.  And that decision to deploy passthrough disks or 1 VM/LUN for performance reasons – was it justified?  What were the IOPS requirements of the original installation?  Heck, do you know the difference between an in-server array/LUN and an in-SAN disk group/vDisk?

By the way, the skewed responses to the Great Big Hyper-V Survey (skewed because the respondents were better informed than the average consumer) showed that less than 50% actually did an assessment.  Pretty silly, considering that most deployments are not huge and the Microsoft Assessment and Planning Toolkit is free and would require only a few hours to get some meaningful data.

Something tells me I’ve wasted a lot of valuable electrons. I figure that the “experts” out there “who know all this already” couldn’t give a stuff about doing their jobs correctly and giving their customers a good production environment. I’ve gotten to the point with this topic where politeness has to stop and harsh words have to be spoken. And if I hear you say that you assumed something and that was justification for not doing an assessment then you only have yourself to blame.

70-681 (Windows 7/Office 2010 Deployment) Exam Preparation

I’ve been asked several times during the last week about how to prepare for 70-681, the exam on deploying Windows 7 and Office 2010, so I thought it was worthy of a blog post.  The issue is that there is no guidance from Microsoft on how to prepare for it in terms of materials.  And that is because it pulls in information from all over the place.  Think about it; Windows 7 deployment can include:

  • MAP
  • ACT
  • WAIK/ImageX
  • WDS
  • MDT
  • ConfigMgr OSD/Zero Touch

That’s 6 different products.  By the way, we cover all that in Mastering Windows 7 Deployment.  And that’s just Windows.  This exam also covers Office 2010.  They typically go hand in hand, which is why the exam includes both topics.  And this certification will be mandatory from May 2012 for the Microsoft partner Desktop competency (new and renewing partners).

If you want blogs/websites to read for preparation then check out:

From time to time, Microsoft is known to run training classes for partners.  Your registered partner contacts in your company should be getting email announcements from the local MSFT partner team with any such information.  These courses are usually anywhere from free to very economical.  They are just a starting point to get attendees on the ladder.  A course cannot be a complete exam prep.  And folks like Rhonda Layfield (USA) and Johan Arwidmark (in Europe but also USA) are known to run their own deployment training classes which can be attended by the public (for a fee).

In the end, most of the OS deployment stuff centres on a few things like WinPE, WSIM, SysPrep, and drivers.  I did the Vista/Office 2007 exam, and the Office deployment questions asked about evaluation/migration stuff.  To be honest, nothing prepares you for this exam like doing a lot of work in a lab.  That’s where your MSDN/TechNet licensing and a virtualisation host come in really handy.  You can also get a little prep work done in the TechNet Labs for Windows 7.

Private Cloud Computing: Designing in the Dark

I joined the tail end of a webcast about private cloud computing to be greeted by a demonstration of the Microsoft Assessment and Planning Toolkit in a virtualisation conversion scenario.  That got me to thinking, raised some questions, and brought back some memories.

Way back when I started working in hosting/virtualisation (and it was VMware 3.x, BTW) I started a thread on a forum with a question.  It was something about storage sizing or planning, but I forget exactly what.  A VMware consultant (and a respected expert) responded by saying that I should have done an assessment of the existing environment before designing anything.

And there’s the problem.  In a hosting environment, you have zero idea of what your sales people are going to sell, what your customers are going to do with their VMs, and what the application loads are going to be.  And that’s because the sales people and customers have no idea of those variables either.  You start out with a small cluster of hosts/storage, and a deployment/management system, and you grow the host/storage capacity as required.  There is nothing to assess or convert.  You build capacity, and the business consumes it as it requires it, usually without any input from you. 

And after designing/deploying my first private cloud (as small as it is, for our internal usage) I’ve realised how similar the private cloud experience is to the hosting (public cloud, or think VPS) experience.  I’ve built host/storage capacity, I’ve given BI consultants/developers the ability to deploy their own VMs, and I have no idea what they will install, what they will use them for, or what loads there will be on CPU, storage, or network.  They will deploy VMs into the private cloud as they need them, they are empowered to install software as they require, and they’ll test/develop as they see fit, thus consuming resources in an unpredictable manner.  I have nothing to assess or convert.  MAP, or any other assessment tool for that matter, is useless to me.

So there I saw a webcast where MAP was being presented, maybe for 5-10 minutes, at the end of a session on private cloud computing.  One of the actions was to get assessing.  LOL, in a true private cloud, the manager of that cloud hasn’t a clue what’s to come.

And here’s a scary bit: you cannot plan for application-supported CPU ratios.  Things like SharePoint (1:1) and SQL Server (2:1) have certain vCPU:pCPU (virtual CPU:physical core) ratios that are recommended/supported (search on TechNet or see Mastering Hyper-V Deployment).

So what do you do, if you have nothing to assess?  How do you size your hosts and storage?  That is a very tough question and I think the answer will be different for everyone.  Here’s something to start with and you can modify it for yourself.

 

  1. Try to figure out how big your infrastructure might get in the medium/long term.  That will define how far your storage will need to be able to scale out.
  2. Size your hosts (see the sketch after this list).  Take purchase cost, operating costs (rack space, power, network, etc.), licensing, and Hyper-V host sizing (384 VMs max per host, 1,000 VMs max per cluster, 12:1 vCPU:pCPU ratio) into account.  Find the sweet spot between many small hosts and fewer gigantic hosts.
  3. Try to figure out the sweet spot for SQL licensing.  Are you going per-CPU on the host (maybe requiring a dedicated SQL VM Hyper-V cluster), per CPU in the VM, or server/CAL?  Remember, if your “users” can install SQL for themselves then you lose a lot of control and may have to license per CPU on every host.
  4. Buy new models of equipment that are early in their availability windows.  It might not be a requirement to have 100% identical hardware across a Hyper-V cluster, but it sure doesn’t hurt when it comes to standardisation for support and performance.  Buying last year’s model (e.g. HP G6) because it’s a little cheaper than this year’s (e.g. HP G7) is foolish; that G6 will probably only be manufactured for 18 months or so before stocks disappear, and you’ve probably bought it at the tail end of its life.
  5. Start with something small (a bit of storage with 2-3 hosts) to meet immediate demand and have capacity for growth.  You can add hosts, disks, and disk trays as required.  This is why I recommended buying the latest; now you can add new machines to the compute cluster, or new storage capacity, that is identical to the previously purchased equipment – well … you’ve increased the odds of that, to be honest.
  6. Smaller environments might be ok with 1 Gbps networking.  Larger environments may need to consider fault tolerant 10 Gbps networking, allowing for later demand.
  7. You may find yourself revisiting step 1 when you’ve gone through the cycle because some new fact pops up that alters your decision making process.
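To illustrate step 2, here is a minimal sketch of how the Hyper-V limits and the vCPU:pCPU ratio cap what a candidate host can carry; the host specification and the “average VM” figures are assumptions for illustration, not a recommendation.

```python
# Rough capacity check for a candidate Hyper-V host, using the limits mentioned above
# (384 running VMs per host, 1,000 per cluster, 12:1 vCPU:pCPU). The host specification
# and the "average VM" figures below are assumptions for illustration only.
host_cores = 12          # physical cores in the candidate host
host_ram_gb = 96         # RAM available to VMs
vcpu_ratio = 12          # supported vCPU:pCPU ratio
max_vms_per_host = 384   # running VM limit per host

avg_vcpus_per_vm = 2
avg_ram_gb_per_vm = 4

vms_by_cpu = (host_cores * vcpu_ratio) // avg_vcpus_per_vm
vms_by_ram = host_ram_gb // avg_ram_gb_per_vm

vms_per_host = min(vms_by_cpu, vms_by_ram, max_vms_per_host)
print(f"CPU allows {vms_by_cpu} VMs, RAM allows {vms_by_ram}; plan on roughly {vms_per_host} per host.")
# In this example RAM runs out long before the 12:1 ratio or the 384 VM limit does,
# which is usually the case; a RAM-heavy host is often the better sweet spot.
```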

To be honest, you aren’t sizing; you’re providing access to elastic capacity that the business can (and will) consume.  It’s like building a baseball field in Iowa.  You build it, and they will come.  And then you need to build another field, and another, and another.  The difference is that in baseball you know there are 9 active players per team.  You’ve no idea whether your users will be deploying 10 lightly used VMs with 10 GB RAM each or 100 heavily used VMs with 1 GB RAM each on a host.

I worked in hosting with virtualisation for 3 years.  The not knowing wrecks your head.  The only way I really got to grips with things was to have in depth monitoring.  System Center Operations Manager gave me that.  Using PRO Tips for VMM integration, I also got my dynamic load balancing.  Now I at least knew how things behaved and I also had a trigger for buying new hardware.

Finally comes the bit that really will vex the IT pro: cross-charging.  How the hell do you cross-charge for this stuff?  Using third party solutions, you can measure things like CPU usage, memory usage, and storage usage, and bill for them.  Those are all very messy things to cost – you’d need a team of accountants for that.  SCVMM SSP 2.0 gives a simple cross-charging system based on the GB of RAM/storage that are reserved or used, as well as a charge for templates deployed (license).  Figuring out the cost of a GB of RAM/storage and the cost of a license is easy.

However, figuring out the cost of installed software (like SharePoint) is not; who’s to say whether the user joins the VM to your directory or not, and whether a ConfigMgr agent (or whatever) gets to audit it.  Sometimes you just gotta trust that they’re honest and that their business unit takes care of things.
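For the part that is easy, here is a minimal sketch of the kind of reserved-capacity charge-back that an SSP 2.0-style model boils down to; the rates and template names are invented placeholders, not real prices.

```python
# Simple charge-back sketch: charge per GB of RAM and storage reserved, plus a flat
# charge per template (license) deployed. All rates and names are invented placeholders.
RATE_PER_GB_RAM = 10.00        # per GB of RAM reserved, per month
RATE_PER_GB_STORAGE = 0.50     # per GB of storage reserved, per month
TEMPLATE_CHARGES = {"Win2008R2-Std": 25.00, "Win2008R2-SQL": 150.00}

def monthly_charge(ram_gb: int, storage_gb: int, template: str) -> float:
    """Return the monthly cross-charge for a single VM."""
    return (ram_gb * RATE_PER_GB_RAM
            + storage_gb * RATE_PER_GB_STORAGE
            + TEMPLATE_CHARGES.get(template, 0.0))

# 4 GB RAM, 60 GB storage, SQL template: 4*10 + 60*0.5 + 150 = 220.00 per month
print(monthly_charge(ram_gb=4, storage_gb=60, template="Win2008R2-SQL"))
```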

EDIT:

I want to send you over to a post on Working Hard in IT.  There you will read a completely valid argument about the need to plan and size.  I 100% agree with it … when there’s something to measure and convert.  So please do read that post if you are doing a traditional virtualisation deployment to convert your infrastructure.  If you read Mastering Hyper-V Deployment, you’ll see how much I stress that stuff too.  And it scares me that there are consultants who refuse to assess, often using the wet finger in the wind approach to design/sizing.

Mastering Windows 7 Deployment is Published

I’ve just received an email from Sybex to say that the third book that I’ve been involved with, Mastering Windows 7 Deployment, has just started shipping from their warehouse(s).  Right now, Amazon.com is still on preorder, but that will likely change in the coming hours or days.  The Wiley (Sybex is part of the Wiley group) site is live right now.

Who contributed?  Me, Darril Gibson (trainer/consultant, also of Mastering Windows Server), Kenneth van Surksum (Dutch MVP and well known blogger), Rhonda Layfield (deployment MVP, author, speaker, trainer), not to mention deployment MVPs/gurus Johan Arwidmark and Mikael Nystrom.  It was quite a cast to work with!  Big thanks to everyone I worked with on the project, especially the folks at Sybex.

The book takes a very practical look at how to do a Windows 7 deployment project.  It starts out by doing the assessment using MAP.  From there, issues with application compatibility are dealt with.  You learn about WAIK, using WDS, MDT, user state transfer, and even how to do zero touch installations using System Center Configuration Manager 2007 (including R2/R3).  I’d buy it if I wasn’t one of the contributors 🙂

Sample Chapter: Mastering Windows 7 Deployment

Last year was pretty busy.  Not only did I write Mastering Hyper-V Deployment (with MVP Patrick Lownds helping), but that project was sandwiched by me writing a number of chapters for Mastering Windows 7 Deployment.  That Windows 7 book is due out sometime this month.

If you browse to the Sybex website you can get a sneak peek at what the book is like.  There is a sample excerpt from the book, along with the TOC.

The book aims to cover all the essential steps in a Windows 7 deployment … from the assessment, to solving application compatibility issues, understanding WAIK (and digging deeper), learning about WDS for the first time (and digging deeper), more of that on MDT, and even doing zero touch deployments using Configuration Manager 2007.  A good team of people from all over the place contributed to the book … and the tech reviewers were some of the biggest names around (I wet myself with fear when I saw who they were).

Give it a look, and don’t be shy of placing an order if you like what you see 🙂

Community Event: From The Desktop to the Cloud: Let’s Manage, Monitor and Deploy

We’ve just announced the details of the latest user group event in Dublin … it’s a biggie!  I’ll be presenting two of the deployment sessions, on MAP and MDT.

Join us at the Guinness Storehouse on February 24th at 09:00 for a full day of action packed sessions covering everything from the desktop to The Cloud, and maybe even a pint of Guinness afterwards.

We have a fantastic range of speakers, from MVPs to Microsoft staff and leading industry specialists, delivering our sessions and ensuring a truly unique experience.  During the day, you will have the choice of attending the sessions you want, covering topics such as Windows 7/Office 2010 deployment, management using System Center, and cloud computing for the IT pro (no developer content – we promise!).


We promised bigger and better and we meant it.  The event will feature 3 tracks, each with four sessions.  The tracks are:

  1. The Cloud: Managed by Microsoft Ireland
  2. Windows 7/Office 2010 Deployment: Managed by the Windows User Group
  3. Systems Management: Managed by the System Center User Group

You can learn more about the event, tracks, sessions, and speakers on the Windows User Group site.

You can register here.  Please only register if you seriously intend to go; spaces are limited and we want to make sure as many people as possible can attend.

The Twitter tag for the event is #ugfeb24.

MAP 5.5 Beta

Watch out, the Microsoft Assessment and Planning Toolkit 5.5 will be in a store near you real soon.  Microsoft just sent out emails about the start of the MAP 5.5 beta:

What’s new with MAP Toolkit 5.5?

Assess your environment for upgrade to Windows 7 and Internet Explorer 8 (or the latest version)

Are you looking for a tool to simplify your organization’s migration to Windows 7 and Internet Explorer 8—and, in turn, enjoy improved desktop security, reliability and manageability? The MAP 5.5 IE Upgrade Assessment inventories your environment and reports on deployed web browsers, Microsoft ActiveX controls, plug-ins and toolbars, and then generates a migration assessment report and proposal—information you need to more easily migrate to Windows 7 and Internet Explorer 8 (or the latest version).

Identify and analyze web application and database readiness for migration to Windows Azure and SQL Azure

Simplify your move to the cloud with the MAP 5.5 automated discovery and detailed inventory reporting on database and web application readiness for Windows Azure and SQL Azure. MAP identifies web applications, IIS servers, and SQL Server databases, analyzes their performance characteristics, and estimates required cloud features such as number of Windows Azure compute instances, number of SQL Azure databases, bandwidth usage, and storage.

Discover heterogeneous database instances for migration to SQL Server

Now with heterogeneous database inventory supported, MAP 5.5 helps you accelerate migration to SQL Server with network inventory reporting for MySQL, Oracle, and Sybase instances.

Enhanced server consolidation assessments for Hyper-V

Enhanced server consolidation capabilities help save time and effort when creating virtualization assessments and proposals. Enhancements include:

  • Updated hardware libraries allowing you to select from the latest Intel and AMD processors.
  • Customized server selection for easy editing of assessment data.  Data is collected and stored every five minutes for more accurate reporting.
  • Better scalability and reliability, requiring less oversight of the data collection process.
  • Support for more machines.

Mastering Hyper-V Deployment Book is Available Now

Amazon has started shipping Mastering Hyper-V Deployment, the book that I wrote with the help of MVP Patrick Lownds.

Contrary to popular belief, an author of a technical book is not given a truckload of copies of the book when it is done.  The contract actually says we get one copy.  And here is my copy of Mastering Hyper-V Deployment, which UPS just delivered to me from Sybex:

[Photo: my copy of Mastering Hyper-V Deployment, just delivered by UPS]

Amazon are now shipping the book.  I have been told by a few of you that deliveries in the USA should start happening on Tuesday.  It’s been a long road to get to here.  Thanks to all who were involved.

Doing a Windows 7 Assessment in the Real World

Last night I talked about how I needed to use ConfigMgr to help with my MAP assessment.  Today, I had to drop MAP.

I have to be realistic with this project.  The site has a mix of PCs.  Some are old and some are new.  There are 32-bit and 64-bit processors.  Some users require 4 GB RAM or more (and thus 64 bit processors).  And as with everyone, money cannot just be thrown at a problem.  In this project, PCs with what we see as inferior processors will be recycled (or donated) after being securely wiped.  New PCs will be purchased, prepared, and given to power users.  Their old PCs will be reconditioned and re-used.  PCs with not enough RAM or disk will be upgraded where possible.  64-bit operating systems will be used where possible but it is likely that most will be 32-bit (unless more than 3 GB RAM is required).

And this is where MAP fails:

  • It doesn’t tell me what size a disk is, only that it has a certain amount of free space.
  • It doesn’t give me information about 64-bit processor functionality.
  • It doesn’t give me hardware model information so that I can check if I can put more than 2 GB RAM into the chassis.

I also had another problem with MAP.  Remember that this is a site where there are lots of old machines with old builds.  Remote access of WMI (even with all the permissions and policies configured) doesn’t seem to work.  Plus people are in and out with laptops so I have to time my scan perfectly.

So I went back to ConfigMgr and its reports.  The benefit is that an installed agent will do the hardware inventory and report back to the ConfigMgr server.  No remote WMI required.  This makes it more reliable.  I also get a scan when the agent is installed.  And I’ve done that 3 ways:

  1. ConfigMgr push.
  2. Start-up script.
  3. Sneaker-net: This is a crusty network and I noticed that the agent push was not as successful as it should have been.

There are some basic reports for Vista and Windows 7 assessments.  I stress basic.  The same problems exist here.  But the reports gave me a template that I could work with.  I started off by creating a report that queries for the number of each of the different models of computer on the network.  That gives me the information I need to check hardware maximum capacities.  I then created a collection that contains all agent managed desktops and laptops.  I took the Windows 7 assessment report, cloned it, and rewrote the SQL query for the report.  I then ran that report against my new managed client computer collection.  It gives me the following for each computer:

  • Computer name
  • Computer model
  • CPU model, speed, and 64-bit support
  • Physical memory
  • Physical disk size
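For what it’s worth, here is a rough sketch of the kind of query the cloned report ended up running, pulled straight from the site database; the view and column names are from memory and can vary between ConfigMgr versions, and the server/database names are placeholders, so verify everything against your own site database.

```python
# A rough reconstruction of the kind of query behind the cloned report, run against the
# ConfigMgr site database. View/column names are from memory and may differ by version;
# the server and database names are placeholders. Machines with more than one CPU or
# disk will return multiple rows.
import pyodbc

SQL = """
SELECT  sys.Netbios_Name0         AS ComputerName,
        cs.Model0                 AS Model,
        cpu.Name0                 AS CpuModel,
        cpu.MaxClockSpeed0        AS CpuMHz,
        cpu.DataWidth0            AS CpuDataWidth,      -- 64 suggests a 64-bit capable CPU
        mem.TotalPhysicalMemory0  AS PhysicalMemoryKB,
        disk.Size0                AS DiskSize
FROM    v_R_System sys
JOIN    v_GS_COMPUTER_SYSTEM cs  ON cs.ResourceID  = sys.ResourceID
JOIN    v_GS_PROCESSOR cpu       ON cpu.ResourceID = sys.ResourceID
JOIN    v_GS_X86_PC_MEMORY mem   ON mem.ResourceID = sys.ResourceID
JOIN    v_GS_DISK disk           ON disk.ResourceID = sys.ResourceID
"""

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=CM01;DATABASE=SMS_ABC;Trusted_Connection=yes")
for row in conn.cursor().execute(SQL):
    print(row.ComputerName, row.Model, row.CpuModel, row.CpuDataWidth,
          row.PhysicalMemoryKB, row.DiskSize)
```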

I’ve enough information there to plan everything I need.  I can dump it into Excel and work away to create my reports.  I can price hardware component upgrades and computer replacements.  I can plan the OS deployment.  It would have been nice to do this with MAP but unfortunately the basic nature of the reports and the lack of an agent (for circumstances such as those that I’ve encountered on this project) did not help.
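And here, roughly, is the triage I end up doing in Excel with that data, expressed as a quick sketch; the rules mirror what I described above (64-bit only where more than 3 GB RAM is needed), and the minimum RAM/disk cut-offs are my own assumptions for this project.

```python
# Quick triage sketch over the report output: decide per machine whether it gets reused,
# upgraded, or replaced. The cut-offs (64-bit only where >3 GB RAM is needed, minimum
# 1 GB RAM and 40 GB disk) are my own assumptions for this project.
def triage(cpu_is_64bit: bool, ram_gb: float, disk_gb: float, needs_over_3gb: bool) -> str:
    if needs_over_3gb:
        return "64-bit build (upgrade RAM if possible)" if cpu_is_64bit else "replace"
    if ram_gb < 1 or disk_gb < 40:
        return "upgrade RAM/disk if the model allows, otherwise replace"
    return "reuse with a 32-bit build"

# A couple of hypothetical rows from the report:
for name, x64, ram, disk, needs_x64 in [("PC001", False, 0.5, 40, False),
                                        ("PC014", True, 2.0, 160, True)]:
    print(name, "->", triage(x64, ram, disk, needs_x64))
```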

ConfigMgr continues to rock!  Plus I was able to show it off to some of the folks at the site.

Using MAP in a Messy Network

I’ve been doing an assessment for a Windows 7 deployment in a network that’s not had any regular maintenance in a long time.  For example, there are 400+ computer accounts with around 100 real machines.  I can’t even use oldcmp to clean up, because some of those “stale” accounts are associated with machines that are archived/stored for old projects that might need to be recovered.  I also have an issue where machines are not responding as expected to MAP, despite all the policies being in place.  Solution?  The Swiss Army Knife of systems management: System Center Configuration Manager.

I set up a ConfigMgr server (the licenses were there) and deployed an agent to all machines.  That had limited success, as expected (see above).  I then set up a start-up script to hit the machines when they reboot – which is not very often (it is a bit of a “wild garden” network).  The perk of this is that I get a client install that will audit machines and report back information, regardless of firewall, etc.

Over time the number of managed agents has doubled, giving me a good sample to work with.  I was able to run a report to get the computer names of all the desktop machines.  Now I took that CSV and converted it into a text file, each line having a computer name.  That’s perfect for a text file discovery in MAP.
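The CSV-to-text conversion is trivial, but here is a minimal sketch for completeness; the column name is an assumption, so match it to whatever your report actually exports.

```python
# Convert the ConfigMgr report export (CSV) into a plain text file with one computer
# name per line, ready for MAP's text file discovery. The "ComputerName" column name
# is an assumption; match it to whatever your report actually exports.
import csv

with open("desktops.csv", newline="") as src, open("computers.txt", "w") as dst:
    for row in csv.DictReader(src):
        name = (row.get("ComputerName") or "").strip()
        if name:
            dst.write(name + "\n")
```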

I ran a discovery and assessment using that and got much better results than before.  It’s still not perfect, and that’s because we are in the real world.  Many of the machines are offline, either out of the office or turned off.  Some machines haven’t been rebooted or powered up to get the ConfigMgr agent.  So there will be some sneaker-net to take care of that.

And that’s how I’ve done an assessment in a wild network that a simple MAP deployment would not have succeeded in.