I was interested in using Microsoft Windows Intune as a service at work.  A nice simple centralised service with an online presence is perfect for a company with lots of people on the road.  Plus it gives you access to Windows 7 Enterprise edition.  I think I have changed my mind about considering Intune now.  Here’s why.

The listed price (before tax) is $11 in the USA, per machine, per month.  Not bad.  Take the exchange rate into account and that should be just under €8/month.  I know the USD is weak so I could live with €9 as the price over the long term.  I saw no listed price for Ireland.  With MS online services, I’ve found you have to log in with your Live ID to see the price for your region.  So I did that and the price was ….


That works out as $15.44 before tax.  In other words, Microsoft are charging 40% more for Intune, before tax, if you are living in Ireland.  Why exactly is that?  I remember in the Vista days when the same currency symbol swapping was done and the exchange rate was blamed.  It was BS and everyone knew it.  I was genuinely and happily surprised when I saw a real conversion on the price of Windows 7.  I cannot wait to hear why it’s happening now with Intune.
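The 40% figure is easy to verify from the two pre-tax prices quoted above; a quick sketch (prices from this post):

```python
# Quick check of the Intune price gap described above.
# Prices are the pre-tax figures quoted in this post.
us_price_usd = 11.00    # listed US price, per machine, per month
ie_price_usd = 15.44    # Irish price converted to USD, before tax

markup = (ie_price_usd - us_price_usd) / us_price_usd
print(f"Irish markup over US price: {markup:.0%}")  # → 40%
```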

The product might be good, and the future excellent, but I’ll be damned if I’m going to put up with that sort of pricing.

I’ll be interested to hear what rates MS are charging for Intune in your region.  Post a comment to share.


One of the nice things about virtualising Microsoft software (on any platform) is that you can save money.  Licensing options such as Windows Server Datacenter, SMSD, SQL Server Datacenter, or ECI all give you various bundles to license the host and all running virtual machines on that host.

Two years ago, you might have said that you’d save a tidy sum on licensing over a 2 or 3 year contract.  Now, we have servers where the sweet spot is 16 cores of processor and 192 GB of RAM.  Think about that; that’s up to 32 vCPUs of SQL Server VMs (pending assessment, using the 2:1 vCPU:pCPU ratio for SQL).  Licensing just two pCPUs could cover all of those VMs with per-processor licensing, dispensing with the need to count CALs!

And it’s just getting crazier.  The HP DL980 G7 has 64 pCPU cores.  That’s up to 128 vCPUs that you could license for SQL Server (using the same 2:1 ratio).  And I just read about an SGI machine with 2048 pCPU cores and 16 TB of RAM.  That sort of hardware scalability is surely just around the corner for normal business computing.
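The per-host vCPU counts above are just the core count multiplied by the 2:1 vCPU:pCPU ratio for SQL Server; a quick sketch (the function name is mine):

```python
# SQL Server vCPU capacity per host, using the 2:1 vCPU:pCPU ratio
# for SQL Server mentioned in the post.
def sql_vcpu_capacity(physical_cores: int, ratio: int = 2) -> int:
    """Maximum SQL Server vCPUs a host can carry at the given ratio."""
    return physical_cores * ratio

print(sql_vcpu_capacity(16))   # the 16-core sweet-spot server: 32 vCPUs
print(sql_vcpu_capacity(64))   # HP DL980 G7: 128 vCPUs
print(sql_vcpu_capacity(12))   # one 12-core AMD pCPU: 24 vCPUs
```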

And let’s not forget that CPUs are growing in core counts.  AMD have led the way with 12-core pCPUs.  Each of those gives you up to 24 SQL vCPUs.  Surely we’re going to see 16-core or 24-core CPUs in the near future.

Will Microsoft continue to license their software based on sockets, while others (IBM and Oracle) count cores?  Microsoft will lose money as CPUs grow in capacity.  That’s for certain.  I was told last week that VMware have shifted their licensing model away from per host licensing, acknowledging that hosts can have huge workloads.  They’re allegedly moving into a per-VM pricing structure.  Will Microsoft be far behind?

I have no idea what the future holds.  But some things seem certain to me.  Microsoft licensing never stays still for very long.  Microsoft licensing is a maze of complexity that even the experts argue over.  Microsoft will lose revenue as host/CPU capacities continue to grow unless they make a change.  And Microsoft is not in the business of losing money.


You need to be aware of a few things if you are deploying System Center Virtual Machine Manager (SCVMM) 2008 R2 at the moment.

The first thing is that VMM 2012 is just around the corner.  The public beta launched yesterday, and it brings some big changes.  If you are buying a VMM 2008 R2 license now, then I recommend that you tack on some Software Assurance to get the upgrade to SCVMM 2012 when it is released as RTM.

Next up is SQL Server support.  SQL Express has been supported up to now.  That limits you to an on-board 4GB database.  That’s not been an issue for most Hyper-V deployments.  The free license (as opposed to SQL Server Standard edition) was a real money saver and an “obvious” decision – one which I have made myself.

VMM 2012 will not support SQL Express.  You will need SQL Server 2008 R2 Standard (or a higher edition).  Yup; you will have to spend that little bit more.  If you are doing the upgrade (after VMM 2012 RTM) then you can probably install SQL Server 2008 R2, detach the database from Express, and reattach it in the new instance (to be verified).

An interesting change is that VMM 2012 can be made highly available.  Some have deployed VMM as a HA VM (which I strongly dislike) to get this effect.  HA VMM will require a clustered file share (for the library) and HA SQL (for the VMM database).

So keep all that in mind if you are deploying VMM 2008 R2 now.


Formerly known as “Project Atlanta”, System Center Advisor (SCA) was talked about today at MMS 2011.  It is basically a cloud version of OpsMgr, capable of monitoring your machines.  Right now, it supports monitoring of x86 or x64 versions of:

  • Microsoft SQL Server 2008 or later
  • Windows Server 2008 or later

Those experienced with OpsMgr will know that an OpsMgr agent (which is what SCA uses BTW) uses Kerberos (AD) for authentication.  That won’t be possible here!  But OpsMgr does have a gateway.  SCA uses that Gateway functionality.  So, if you want to use SCA you have to install the SCA Gateway and that requires a machine running Windows Server 2008 (x86 or x64) or later.  Your agents authenticate with the gateway and the gateway authenticates with SCA in the cloud.

The architecture isn’t all that different to what has been possible with OpsMgr 2007 up to now.  And the firewall side of things is easy too!

You can access a web portal to monitor all of your resources on those supported platforms.  I guess more platforms will be added over time.

The setup is easy.  Log into the site with a Windows Live ID.  You download your unique gateway cert.  Install the gateway with the cert.  Deploy your agents!  You’re practically walked through the process.


This will be attractive to smaller companies who want some of the power of OpsMgr.  They might get that functionality without the outlay on hardware and consulting.  Some VPS hosted companies may like this.  It’s not enterprise ready; the limited platform support is an issue.  And we don’t really know how “live” it will be.  But it definitely is something worth keeping an eye on.

As a MS partner, it’s a bit worrying because it redirects business from the partner directly to MS.  It also doesn’t appear to have that partner model that is evident in Intune.  Speaking of which – I see no integration with Intune.  Maybe with time …


The public beta for System Center Virtual Machine Manager 2012 was launched today at MMS 2011.  You can download it now.

This one is a game changer for Hyper-V administrators.  Cloud, service templates, host/cluster deployment, network/storage integration, XenServer support … VMM is getting as big as ConfigMgr!

Don’t expect it to be like going from VMM 2008 to VMM 2008 R2.  It’s a very different tool.  You’ll need to do some reading to get to know it – but it’s worth it!


One of the strengths of Hyper-V is one of its weaknesses: it supports a huge variety of hardware including hosts and storage.  Hyper-V clusters need to be treated like mainframes.  Any change, no matter how small you think it is, needs to be verified and tested (by somebody) before you apply it.  We typically advise people with a Hyper-V cluster not to update drivers, firmwares, etc, unless they have to.  And if they do, then they need to work with hardware vendors to ensure that they are tested with the build of Hyper-V and that the various updates are regression tested.  For example, blades with SAN cannot have just one firmware updated: an entire set of firmwares and drivers must be deployed.

And along comes SP1 for Windows Server 2008 R2.  All those integrated hotfixes and Dynamic Memory are really tempting, aren’t they?  Wouldn’t you just love to deploy it as a part of your standard host build?

Question: do your server and storage vendors support SP1 yet?  Are your drivers tested on SP1 by the h/w vendor?  Is your storage going to work if you deploy it?  Will your cluster validation (required for MS support) work after you install SP1?  Will you need firmware updates before you deploy the service pack?  Have you asked these questions yet?

Make sure that your storage and server vendors do support W2008 R2 SP1 and that your firmware and driver versions meet the requirements before you install the SP.  You don’t want to finish that last reboot and get a nasty surprise when you re-run the cluster validation or run a storage level backup of your CSVs.


A few different things have happened in the past couple of weeks so here’s a roundup.

Internet Explorer 9 RTW
IE9 was released by Microsoft with the usual claims of standards compliance, quicker browsing, better security, and so on. I can’t say if all that’s true or not; I stopped using IE8/9 during the IE9 beta. I got tired of memory being eaten up, slow browsing, crashes, and all that jazz. I also didn’t like the direction that IE9 was going.

I’m now part of the majority of Europeans that are primarily using Firefox. For the first time in a very long time, IE is a minority browser in Europe.

MED-V
MED-V is a part of MDOP, the package that is only available to Software Assurance customers. I meet a lot of IT consumers in Ireland because of my community involvement, and I don’t know anyone that is using MDOP. I’ve only got so much time, so I have to focus on what I work on or learn. Hence, I have not looked at MDOP.

Microsoft Legal Creating Trouble?
I was listening to the Paul Thurrott podcast this morning. He reported that MS is trying to push a bill through in Washington State. It would allow MS to sue a company that uses the services of another company, where that other company uses pirated MS software. It would be limited to corporations making more than 50 million dollars. The effect would be ridiculous. Any corporation would have to audit the computers and license purchases of its service providers. That would create all sorts of bureaucracy. As if MS licensing wasn’t bad enough?!

I see it causing trouble in the public cloud or hosting business. Open source Linux already rules the roost here. This effort by MS legal won’t help that.

Windows 8
Microsoft aren’t talking (directly) but there are plenty of leaks. Screenshots are out there if you look for them. Some scheduling information is also available. The M2 build is allegedly complete and bug detection/fixing is underway, and maybe the M3 build is underway as well.

BTW, I reckon that PDC 2011 delegates might want to leave room in their bags for a slate PC (maybe an ARM tablet) with an install of Windows 8 on it.

MMS 2011
The conference to go to is underway. If you are there, I strongly recommend the SCVMM sessions. The keynotes will be streaming live online, starting today.


Floppy Drive

I recently started watching The Sopranos on the advice of some friends. The episode I just watched featured a young mobster called Christopher who was attempting to write a screenplay about mob life – probably not a way to ensure a long life. The final scene showed him ejecting a floppy disk from his laptop and emptying his diskette container into a bin. That dates the show, eh?

Having a diskette drive in my laptop used to be essential. I worked in labs and OS deployment, and being able to make DOS bootable disks was a part of the job. Even if the standard OS was NT 4 or Windows 2000, I used to have my laptop dual boot with Windows 95/98 or even ME. I kept boxes of floppies just in case.

Looking back on that, things are so much easier now with boot USB sticks and WDS. I can’t imagine going back to those days.


New Facebook Page

I’ve just set up a Facebook page to provide links to posts in this blog.  Search for aidanfinn.com, hit “Like”, and you’ll get updated whenever I update this blog.


I think I’ve mentioned before that writing a book is hard work.  To be honest, when you’re going through the 3rd and 4th edit, you sometimes start to wonder if it’s all worth it or not. 

But then when you get positive feedback, sometimes by email or by Twitter, it can perk you up quite a bit.  Here’s a little sample of that for Mastering Hyper-V Deployment:

“… thank you for your awesome Hyper-V blog- it has really helped me get moving on Hyper-V. I purchased your book, Mastering Hyper-V Deployment earlier this week and found that to be even more valuable” – Paul

“… read it for the book review and I must say it is great” – Carsten

“…Great book” – Michael

“Handing out 16 copies of Aidan Finn’s Mastering Hyper-V Deployment book http://amzn.to/aKCQXj to the students of my #hyperv course” - @hvredevoort

Then there is the feedback on Amazon where Mastering Hyper-V Deployment is averaging 5 stars:

“Just got the book and reading half way through. A well written book with a lot of good explanation and diagram to assist user to understand the hyper v deployment. Keep up the good work” – Lai Yoong Seng

“The book has proven to be a big timesaver because it (1) cuts through the bureaucracy of the Microsoft-provided documentation and the hours researching product information on the web and (2) it covers details that will help me avoid problems later.  This is one of the few network admin books I have read cover-to-cover.” – S. Tsukuda

“I found this book to be a very easy read and overall it had a great flow. Being an IT professional, I have read a lot of technical books and most are tough to read cover to cover. I had no issues reading through Mastering Hyper-V Deployment because Aidan’s style of writing is natural and he writes at a technical level that can translated by anyone, not just a Hyper-V expert. I highly recommend purchasing this book if you are planning to deploy Hyper-V R2 or have already done so.” - A. Bolt

“Best of all, you’ll get almost all the answers to the questions you’ve been thinking about. It’s all about details, but it’s always easy to get into it. You’ve been asking to yourself whether you should use snapshot on a VM running SQL ? the answers found from different sources on internet may be confusing you. In this book you’ll learn why not to use it or when you should use it and how to avoid any problem doing it among many other details to be aware of.” – Thomas Lally

“Appropriate for all Hyper-V users from the beginner to the expert, it goes beyond deployment and is definitely the administrator’s aid and if using guidance here your Hyper-V solution should remain in good shape.” – Virtualfat

“This is an excellent introduction to Hyper-V which is Microsoft’s Enterprise Software Solution. I particularly like the way the book is laid out, it is similar to a project plan to assist you if you were deploying your own Hyper-V project.  There is lots of very good information contained and this book is an asset to anyone who is planning a Hyper-V Deployment.” – Mr. J. Kane

One of the more interesting comments, reported to me by two independent sources, came from the Microsoft European HQ in Reading, UK.  Some of the Microsoft consultants there have stated that they thought Mastering Hyper-V Deployment was the best Hyper-V book they’ve read, including those from MS Press.  It would be an understatement to say that put a smile on my face!

Credit for the quality of Mastering Hyper-V Deployment must also be shared with the editors from Sybex, Hans Vredevoort (technical editor), and Patrick Lownds (co-author).

Last year was tough.  I was getting pretty tired of the editing process as we circled the end of Mastering Windows 7 Deployment.  I pushed through and eventually it was released a few weeks ago.  Today I got this nice message on Twitter from @miamizues:

“Your co authored book on windows 7 deployment is our departments new bible, thank you”.

I was just a part of a big team of people who wrote, edited, and reviewed that book, but that was especially nice to hear.

Thank you to those concerned for taking the time to pass on or share the nice words.

And there are also plenty of online and in-person friends/colleagues who’ve said some nice things and supported me.  You know who you are and thank you!


Here are a few things that you might want to keep in mind if you are embarking on a Hyper-V project:

  • Whether you are an administrator that will pick up responsibility, or a consultant doing the deployment, you need to understand the products that are being used. Learn about Hyper-V. Understand how networking works. Figure out the differences between Fixed and Dynamic VHD. Learn how Dynamic Memory adds and balloons memory in virtual machines. Learn about VMM, including things like templates, delegation, PRO, and so on. And clustering … you gotta know how to troubleshoot a Windows cluster.
  • Understand the applications that you are installing in virtual machines.  The old rules do not always apply.  Use your favourite search tool to understand the supported/recommended configurations for AD, SQL, SharePoint, Exchange, or System Center when installed in a VM on any virtualisation platform (TechNet).  And that applies to any application (Oracle screams out here).
  • Become a storage engineer. When I was a Windows admin, most of my problems usually stemmed from the network. As a virtualisation engineer, I have found that storage performance management and troubleshooting is critical. Key to this is understanding Hyper-V/Clustering CSV and redirected I/O.
  • Understand your backup requirements and how they impact storage/CSV and host networking design. If you are using storage level backup then you need to consider storage purchase (a hardware VSS provider that supports CSV is critically important), storage design, and virtual machine placement, to control the impact of redirected I/O.
  • Visit the TechNet page that lists updates to Windows Server 2008 R2 Hyper-V on a regular basis: http://technet.microsoft.com/en-us/library/ff394763(WS.10).aspx  Almost every problem I’ve heard of someone having could have been avoided by applying the updates listed there. Note that SP1 rolled up the updates released prior to it.
  • Be aware of management agents that you install on your hosts. For example, OpsMgr receives cumulative updates and management pack updates. Sometimes these can resolve stability or performance issues.
  • Watch out for bug fixes for WMI. I’ve seen a stability issue and a memory leak dealt with in the past couple of years that have affected either a cluster I “owned” or those of customers.
  • If you are installing AV on your Hyper-V hosts then read the supported configuration. http://support.microsoft.com/kb/961804
  • Don’t assume anything. Learn about what you are making a decision on – whether that’s assuming you understand how something works, or assuming you understand the performance requirements of your existing pre-conversion physical servers without using an assessment tool.
  • Treat your Hyper-V hosts/clusters like a mainframe. Your hosts will probably be running lots of mission critical virtual machines and applications. Host/cluster downtime doesn’t affect just a few machines; it could affect all business operations in the organization. That means you should have strict change control, limited administrative access, and so on. Don’t group your hosts with other Windows servers. Treat them as special. Have a set of known trusted driver versions. Don’t just update when a new version is released. Test, test, test. Then upgrade.
  • If you don’t know this stuff, then bring in an expert who does.  And don’t be afraid to ask questions, or admit when you’re wrong.

That’s not an exhaustive list, but it’s a start.


Note: in this post I’ll be concentrating on IaaS (infrastructure as a service or VM hosting in the cloud).  SaaS and PaaS are slightly different conversations.

I was having a conversation on Twitter with someone that I respect yesterday afternoon about designing a private cloud.  He made an interesting comment that got me thinking.  A pure private cloud – which is, at its core, hosting of VMs in an internal or colo hosted environment – is a lot of work and maybe a business should consider using an existing public cloud.

And you know, that’s a very good point.  You might have gathered from yesterday’s post that there are a lot of unknowns when it comes to building a pure private cloud.  With all that bother and stress, why not eliminate all the effort and set up an account with something like Amazon EC2, Rackspace, or a local “boutique” public cloud hoster, and get instant access to a scalable, elastic computing environment?  On the upside, you get instant access, you don’t have the investment, you’ve none of the risk, and your business may even get to eliminate you from the payroll.  Say, what?!

There are definitely times when public cloud is the right option.  Obvious cases are when you’re putting together public facing applications.  But are there times when it’s not the right option?

A respected friend made an interesting point when discussing the cloud last year.  He asked if people were happy right now with their telephone or internet service providers.  Did they live up to SLAs?  How did they perform when things went wrong?  I think it’s fair to say that these operators tend to suck at support, as do most service providers when we only have a phone number/email address as our contact points.  How exactly will a public cloud provider be any different?  Often they aren’t.  It doesn’t take much googling to find people who have had that same experience with their hosters.  And hey – I include the “big boys” in this too.  I may have no data to back this up, but I bet you the smaller hoster will give you a better effort at support because they will value your business more.

I think a big differentiator between internally and externally hosted infrastructure is the ability to get a response from support.  If a manager needs something done internally then they can shout, threaten, cajole, etc.  Stuff will get done when pressure is applied.  What about external hosting?  The rule of thumb that I’m using: the bigger the hoster, the less qualified and enabled the person on the other end of the phone/email will be.  I know from experience that “customer care” are minimum wage and don’t care.  You can escalate to threats and that leads to a “promise” of a supervisor call back (to get you off the line) or them hanging up.  That urgent backup recovery that you can get pushed through internally in minutes may take 24 hours after all in a public cloud.

But there is an SLA there to protect you!  You might have 99.9% “guaranteed” up time from the hoster.  But that isn’t a guarantee.  It’s actually a promise that the hoster will refund you part/all of your payment for that month if they give you less than 99.9% up time.  That might be useless if your business is relying on a public cloud for all internal operations.  And the devil is in the details; how exactly is the SLA written in the contract and measured in reality?  Don’t make any assumptions.
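It’s worth doing the arithmetic on what a 99.9% monthly SLA actually permits; a quick sketch (the helper name and the 30-day month are my assumptions):

```python
# What a 99.9% monthly uptime SLA actually allows, in minutes of downtime.
# Assumes a 30-day billing month for simplicity.
def allowed_downtime_minutes(sla: float, days_in_month: int = 30) -> float:
    total_minutes = days_in_month * 24 * 60
    return round(total_minutes * (1 - sla), 1)

print(allowed_downtime_minutes(0.999))   # 43.2 minutes per month
print(allowed_downtime_minutes(0.99))    # 432.0 minutes (7.2 hours)
```

Three quarters of an hour of downtime per month is within the “guarantee”, and the only remedy is a partial refund.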

You might get over that SLA issue by geo-clustering your resources across many data centres.  I know some hard core people who’ll insist that you should really geo-cluster your resources between different hosters in different locations!  That’s the only guarantee of getting better uptime.

I’ve barked about the Patriot Act and the nature of USA politicians quite a bit in the past.  Fact: the Patriot Act applies to all American owned data centres (USA, Ireland, Middle East, Asia) despite what some sales & marketing people say.  If you need to comply with things like a European/Irish Data Protection Act then you need to stay clear of those data centres.  That also means figuring out if your hoster is colo hosted in one of those data centres.  Sure, the risks are small – but they are real.  I was at a lecture last year where a solicitor (lawyer) stated so, even though he argued that the risk was small and he was OK with that small risk.

I’d counter by reminding people that the original draft of the Cyber Security Act (co-written by Democrats and Republicans) wanted to give the US Department of Commerce free access to all American hosted (anywhere) data.  That got eliminated, but who knows if that one sneaks its way back in again.  It would have given the US government free access to your business’s data.  And there are historical cases where government organisations have used their access to data (legal or otherwise) to assist native companies in competitive scenarios.

Compliance is complicated – usually requiring the legal folks to get involved.  You may have issues with data leaving your state/country at all because of industry regulations, even if there are equal data protection laws in the other state/country.  Laws are different everywhere and different industries have different rules.  Don’t assume anything.

Sounds like I’m really down on public cloud and all for the private cloud.  Not quite:

  • If you need an online presence then public cloud can give you a secure location abstraction and huge bandwidth availability.
  • If you choose your hoster carefully then you can be compliant with industry or state/national regulations.
  • A public cloud can give you instant access to an infrastructure with instant huge scalability.  You get none of the risk of designing a private cloud and none of the hassle/delays/capital investment associated with a private cloud.  Plus, an internal infrastructure will only have limited scalability unless you have capital investment funds to burn.
  • The finance folks might like the idea of a public cloud – so they can fire you/me or some of your colleagues.  Call it operating cost reduction or rationalisation.

There’s no one answer for everyone.  Some will go completely public.  They’re likely to be smaller organisations.  Some will go completely private.  And some will have a mix of both (hybrid or cross-premises cloud).  Anyway, that’s my rambling done with for the day.


I joined the tail end of a webcast about private cloud computing to be greeted by a demonstration of the Microsoft Assessment and Planning Toolkit in a virtualisation conversion scenario.  That got me to thinking, raised some questions, and brought back some memories.

Way back when I started working in hosting/virtualisation (and it was VMware 3.x, BTW) I had started a thread on a forum with a question.  It was something about storage sizing or planning, but I forget exactly what.  A VMware consultant (and a respected expert) responded by saying that I should have done an assessment of the existing environment before designing anything.

And there’s the problem.  In a hosting environment, you have zero idea of what your sales people are going to sell, what your customers are going to do with their VMs, and what the application loads are going to be.  And that’s because the sales people and customers have no idea of those variables either.  You start out with a small cluster of hosts/storage, and a deployment/management system, and you grow the host/storage capacity as required.  There is nothing to assess or convert.  You build capacity, and the business consumes it as it requires it, usually without any input from you. 

And after designing/deploying my first private cloud (as small as it is, for our internal usage) I’ve realised how similar the private cloud experience is to the hosting (public cloud, or think VPS) experience.  I’ve built host/storage capacity, I’ve given BI consultants/developers the ability to deploy their own VMs, and I have no idea what they will install, what they will use them for, or what loads there will be on CPU, storage, or network.  They will deploy VMs into the private cloud as they need them, they are empowered to install software as they require, and they’ll test/develop as they see fit, thus consuming resources in an unpredictable manner.  I have nothing to assess or convert.  MAP, or any other assessment tool for that matter, is useless to me.

So there I saw a webcast where MAP was being presented, maybe for 5-10 minutes, at the end of a session on private cloud computing.  One of the actions was to get assessing.  LOL, in a true private cloud, the manager of that cloud hasn’t a clue what’s to come.

And here’s a scary bit: you cannot plan for application-supported CPU ratios.  Things like SharePoint (1:1) and SQL (2:1) have certain vCPU:pCPU (virtual CPU:physical core) ratios that are recommended/supported (search on TechNet or see Mastering Hyper-V Deployment).

So what do you do, if you have nothing to assess?  How do you size your hosts and storage?  That is a very tough question and I think the answer will be different for everyone.  Here’s something to start with and you can modify it for yourself.


  1. Try to figure out how big your infrastructure might get in the medium/long term.  That will define how big your storage will need to be able to scale out to.
  2. Size your hosts.  Take purchase cost, operating costs (rack space, power, network, etc), licensing, and Hyper-V host sizing (384 VMs max per host, 1,000 VMs max per cluster, 12:1 vCPU:pCPU ratio) into account.  Find the sweet spot between many small hosts and fewer gigantic hosts.
  3. Try to figure out the sweet spot for SQL licensing.  Are you going per-CPU on the host (maybe requiring a dedicated SQL VM Hyper-V cluster), per CPU in the VM, or server/CAL?  Remember, if your “users” can install SQL for themselves then you lose a lot of control and may have to license per CPU on every host.
  4. Buy new models of equipment that are early in their availability windows.  It might not be a requirement to have 100% identical hardware across a Hyper-V cluster but it sure doesn’t hurt when it comes to standardisation for support and performance.  Buying last year’s model (e.g. HP G6) because it’s a little cheaper than this year’s (e.g. HP G7) is foolish; that G6 will probably only be manufactured for 18 months before stocks disappear, and you probably bought it at the tail end of its life.
  5. Start with something small (a bit of storage with 2-3 hosts) to meet immediate demand and have capacity for growth.  You can add hosts, disks, and disk trays as required.  This is why I recommended buying the latest models; now you can add new machines or storage capacity to the compute cluster that is identical to previously purchased equipment – well … you’ve increased the odds of it, to be honest.
  6. Smaller environments might be ok with 1 Gbps networking.  Larger environments may need to consider fault tolerant 10 Gbps networking, allowing for later demand.
  7. You may find yourself revisiting step 1 when you’ve gone through the cycle because some new fact pops up that alters your decision making process.
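To make step 2 concrete, here is a rough per-host capacity check using the limits quoted above; the function and the example VM profile (2 vCPU / 4 GB) are illustrative assumptions, not a sizing tool:

```python
# A rough per-host capacity check using the limits quoted in step 2:
# a 12:1 vCPU:pCPU ratio and a 384 VM per-host maximum.
VCPU_RATIO = 12
MAX_VMS_PER_HOST = 384

def host_capacity(cores: int, ram_gb: int, vm_vcpus: int, vm_ram_gb: int) -> int:
    """VMs one host can carry; whichever constraint binds first wins."""
    by_cpu = (cores * VCPU_RATIO) // vm_vcpus
    by_ram = ram_gb // vm_ram_gb
    return min(by_cpu, by_ram, MAX_VMS_PER_HOST)

# A 16-core / 192 GB host with a hypothetical 2 vCPU / 4 GB VM profile:
print(host_capacity(16, 192, vm_vcpus=2, vm_ram_gb=4))   # RAM binds first: 48
```

Running the same check with different VM profiles quickly shows where your sweet spot between many small hosts and fewer gigantic hosts lies.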

To be honest, you aren’t sizing; you’re providing access to elastic capacity that the business can (and will) consume.  It’s like building a baseball field in Iowa.  You build it, and they will come.  And then you need to build another field, and another, and another.  The difference is that in baseball you know there are 9 active players per team.  You’ve no idea if your users will be deploying 10 * 10 GB RAM lightly used VMs or 100 * 1 GB RAM heavily used VMs on a host.

I worked in hosting with virtualisation for 3 years.  The not knowing wrecks your head.  The only way I really got to grips with things was to have in depth monitoring.  System Center Operations Manager gave me that.  Using PRO Tips for VMM integration, I also got my dynamic load balancing.  Now I at least knew how things behaved and I also had a trigger for buying new hardware.

Finally comes the bit that really will vex the IT pro: cross-charging.  How the hell do you cross-charge for this stuff?  Using third party solutions, you can measure things like CPU usage, memory usage, and storage usage, and bill for them.  Those are all very messy things to cost – you’d need a team of accountants for that.  SCVMM SSP 2.0 gives a simple cross-charging system based on GB of RAM/storage that are reserved or used, as well as a charge for templates deployed (license).  Figuring out the costs of a GB of RAM/storage and the cost of a license is easy.
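A minimal sketch of that SSP-style reserved-GB charging model; every rate and template price below is an invented placeholder, not a real SSP 2.0 figure:

```python
# Sketch of SSP 2.0-style cross-charging: flat monthly rates per reserved GB
# of RAM and storage, plus a per-template (license) charge.
# All rates and template prices are invented placeholders.
RATE_PER_GB_RAM = 10.00
RATE_PER_GB_STORAGE = 0.50
TEMPLATE_CHARGE = {"w2008r2-std": 25.00, "sql2008r2-std": 150.00}

def monthly_charge(ram_gb: int, storage_gb: int, templates: list) -> float:
    """Monthly bill for one business unit's reserved resources."""
    charge = ram_gb * RATE_PER_GB_RAM + storage_gb * RATE_PER_GB_STORAGE
    charge += sum(TEMPLATE_CHARGE[t] for t in templates)
    return charge

print(monthly_charge(8, 200, ["w2008r2-std"]))   # 80 + 100 + 25 = 205.0
```

The appeal is exactly what the post says: reserved GB and template counts are trivial to meter, while CPU-seconds and I/O are not.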

However, figuring out the cost of installed software (like SharePoint) is not; who’s to say whether the user joins the VM to your directory or not, and whether a ConfigMgr agent (or whatever) gets to audit it?  Sometimes you just gotta trust that they’re honest and that their business unit takes care of things.


I want to send you over to a post on Working Hard in IT.  There you will read a completely valid argument about the need to plan and size.  I 100% agree with it … when there’s something to measure and convert.  So please do read that post if you are doing a traditional virtualisation deployment to convert your infrastructure.  If you read Mastering Hyper-V Deployment, you’ll see how much I stress that stuff too.  And it scares me that there are consultants who refuse to assess, often using the wet-finger-in-the-wind approach to design/sizing.


The next Private Cloud Academy event, co-sponsored by Microsoft and System Dynamics, is on next Friday 25th March, 2011.  At this free session, you’ll learn all about using System Center Data Protection Manager (DPM) 2010 to backup your Hyper-V compute cluster and the applications that run on it.  Once again, I am the presenter.

I’m going to spend maybe a 1/3 of the session talking about Hyper-V cluster design, focusing particularly on the storage.  Cluster Shared Volume (CSV) storage-level backups are convenient, but there are things you need to be aware of when you design the compute cluster … or face the prospect of poor performance, blue screens of death, and a P45 (pink slip aka getting fired).  This affects Hyper-V when being backed up by anything, not just DPM 2010.

With that out of the way, I’ll move on to very demo-centric DPM content – I’m spending most of next week building the demo lab.  I’ll talk about backing up VMs and their applications, and the different approaches that you can take.  I’ll also be looking at how you can replicate DPM backup content to a secondary (DR) site, and how you can take advantage of this to get a relatively cheap DR replication solution.

Expect this session to last the usual 3-3.5 hours, starting at 09:30 sharp.  Note that the location has changed; we’ll be in the Auditorium in Building 3 in Sandyford.  You can register here.


Imagine this: you are running a pretty big Hyper-V environment, Microsoft releases a service pack that adds a great new feature like Dynamic Memory (DM), legacy OS’s require new ICs to use it, and you really want to get DM up and running.  Just how will you get those ICs installed in all those VMs?

First you need to check your requirements for Dynamic Memory.  The good news is that any Windows Server 2008 R2 with SP1 VM will have the ICs.  But odds are that if you have a large farm then things aren’t all that simple for you.  Check out the Dynamic Memory Configuration Guide to see the guest requirements for each supported OS version and edition. 

OK, let’s have a look at a few options:

By Hand

Log into each VM, install the ICs, and reboot.  Yuk!  That’s only good in the smallest of environments or if you’re just testing out DM on one or two VMs.


VMM

VMM has the ability to install integration components into VMs.  The process goes like this:

  1. Shut down a number of VMs
  2. Select the now shut down VMs (CTRL + select)
  3. Right-click and select the option to install new integration components
  4. Power up the VMs

You’ll see the VMs power up and power down during the installation process.  Now you’re done.


WSUS

Here’s an unsupported option that will be fine in a large lab.  You can use the System Center Updates Publisher to inject updates into a WSUS server.  Grab the updates from a W2008 R2 SP1 Hyper-V server and inject them into the WSUS server.  Now you let Windows Update take care of your IC upgrade.

Configuration Manager

This is the one I like the most.  ConfigMgr is the IT megalomaniac’s dream come true.  It is a lot of things, but at its heart is the ability to discover what machines are out there and distribute software to collections of machines that meet some criteria.  So, for example, you can discover if a Windows machine is a Hyper-V VM and put it in a collection.  You can even categorise them.
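As an aside, the discovery trick relies on Hyper-V guests reporting a telltale manufacturer and model in WMI, which ConfigMgr inventories for you.  Here’s a hedged Python sketch of the same check (the helper names are mine, and it assumes `wmic` is available inside the guest):

```python
import subprocess

def looks_like_hyperv(wmic_output):
    """Parse the output of `wmic computersystem get manufacturer,model` and
    decide whether the machine reports itself as a Hyper-V guest."""
    return ("Microsoft Corporation" in wmic_output
            and "Virtual Machine" in wmic_output)

def is_hyperv_vm():
    """Run the WMI query on a Windows guest and apply the check above."""
    out = subprocess.check_output(
        ["wmic", "computersystem", "get", "manufacturer,model"], text=True)
    return looks_like_hyperv(out)
```

In ConfigMgr you’d express the same test as a collection membership query against the hardware inventory rather than running anything in the guest yourself.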

You may notice that Windows Server 2008 with SP2 Web and Standard editions require a prerequisite update to get DM working.

So, you can advertise the ICs to a collection of W2008 with SP2 Standard and Web editions, making that update a requirement.  The update gets installed, and then the ICs get installed.  All other OS’s: it’s just an update.  And of course, you just need to install SP1 on your W2008 R2 VMs.  As you may have noticed, I’m not promoting the use of the updates function of ConfigMgr; I’m talking about the ability to distribute software.

I’ll be honest – I don’t know if the ConfigMgr method is supported or not (like the WSUS option) but it’s pretty tidy, and surely must be the most attractive of all in a large managed environment.  And because it’s a simple software distribution, I can’t see what the problem might be.


Subtitle: And How I Rolled It Out

The company I work for is an IT consulting firm that does all sorts, from infrastructure and application (BI) consulting to business solution development.  My team (infrastructure) provides client services and manages our internal IT.

Up to now, it pains me to say, it’s been all ESXi.  It was free and it just works.  But that free came with a price: no centralised management, and it was completely IT driven.

A business requirement came along that I knew I could sort out with VMM 2008 R2.  We also had a requirement for more virtualisation capacity.  Hyper-V to the rescue – and I also saw how I could introduce a private cloud using SCVMM SSP 2.0:

  • It would allow the various IT consumers in the business to deploy their own VMs without waiting for availability in my team.
  • We would spend less time doing repetitive lab deployments.
  • We could control VM sprawl with quotas based on real GB figures (we aren’t cross charging).

So Hyper-V, VMM, and SCVMM SSP 2.0 were all installed.  The VMM is a physical server with lots of SATA disk because I live in the library.  Generalised VHDs for the various required OSs were created.  I created 3 hardware profiles.  From those I have 3 templates for each OS version/edition VHD.  For example, there’s a W2008 R2 VHD with 1 GB RAM, 4 GB RAM, and 8 GB RAM templates.

SCVMM SSP 2.0 was installed in a VM.  I wanted to separate it from VMM to give me more modularity and flexibility.  Each team has a business unit and an infrastructure.   I’m creating those for the teams because … well … the forms are a little complicated and I’ll catch heat if I ask the team leaders to complete them (requiring billable time to figure them out first).  It’s just easier if I do that stuff.  All that remains for them is to manage their VMs.

All the templates are imported and available to all of the infrastructures.  Each business unit is capped by RAM and disk GB.

For each BU, I deployed a VM based on the team’s current requirements.  That gave me (1) a chance to test and (2) something to demonstrate with when showing them the SSP 2.0 portal. 

Deployed VMs get static IPs from an SSP 2.0 managed pool.  BGInfo displays that info on the console.  They can “KVM” into the VMs using the portal or they can RDP in.

BTW, I am using a single AD group for the membership of each BU.  Traditional security is best.  I temporarily use a non-admin domain user to provision each new BU and infrastructure.

A document is on the way, but these are technical people.  I sat down with a rep from the first team earlier this afternoon and walked through the process of deploying, accessing, and destroying VMs.  After a 5 minute walk-through, the first consultant was rocking and rolling.

Lessons learned?

  • I’m having some trouble with static IPs and W2003 R2.  That requires more investigation and work.  I’m thinking it’s an IC issue.
  • There’s no way to mount ISO files from the library in the SSP.  The solution I am thinking of is to reveal the original SSP from VMM.  It’s ready and the self-service user roles are created (without VM create rights).  That can be used to mount ISOs.  Twice the admin work required.
  • There is no way to change the spec of a deployed VM in SSP 2.0.  That is badly needed.  We’re making changes in VMM but SSP doesn’t see those changes, leading to …
  • I find myself diving into SQL to edit stuff that is not revealed in the portal.  For example – renaming a BU.  Or changing the spec of a VM isn’t reflected in SSP and therefore not in the quota usage.  That sucks.
  • If SSP and VMM lose contact with each other during a VM deployment then SSP considers the job failed, even if VMM continues with it without any issues.  SSP 2.0 needs an import feature (see next).
  • SCVMM SSP 2.0 SP 1.0 is needed badly. 
  • I did consider using the original SSP instead of SSP 2.0.  But the crude quota mechanism and the inability to assign static IPs was too much of a step down.

This morning I read about a crash issue with Service Pack 1 for Windows 7 on machines that came with the preinstalled OEM copy of Windows 7.  Nick Whittome posted a description and various workarounds on his Facebook page.

EDIT: there were some issues with the Facebook link so you can go to a post on TechNet instead.


A question that is bouncing around now is: Should I plan on using RemoteFX for everyone?

Short answer: probably not.

Long answer: …

Let’s step back.  In 1996 you (or your colleagues/predecessors) probably gave everyone a laptop or PC.  Then we all started hearing about this thing called server based computing.  The big player was Citrix with something called WinFrame, based on Windows NT Server 3.51 (back when it required real IT pros to network a PC or server).  Not long after that, the Citrix/MS relationship changed and we got Terminal Services, which could be extended by the Citrix solution.

That’s about when some sales & marketing people (we didn’t have bloggers back then, did we?), and yes, a few of us consultants, started shouting that the PC was dead … long live the server!

Things didn’t quite work out like that.  Instead, Terminal Services (and the rest) usually became a niche solution.  It was great for delivering awkward applications to end users, especially when they were remote to some server, working from home or in a branch office.  But the PC still dominated end user computing.

Not long after, I remember a rather large consultant colleague from Berkeley who derided me for learning more about Windows.  Didn’t I know that the Penguin would rule the world?!?!?!  Hmm … anyway …

Then a few years ago we saw how server virtualisation was being modified (with the help of a broker) to take the remote client of server based computing and provide connectivity to centrally located VMs with desktop operating systems.  VDI hit the headlines.  I’ve swung back and forth on this one so many times that I feel like a politician in election season.  At first I loved the idea of VDI.  It gave us the benefits of Terminal Services without the complexity of application compatibility (application silos) while retaining individual user environments.  Then I hated it.  The costs are so high compared to PC computing and you actually need more management systems instead of less.  And now I’m kinda swinging back to liking it again.

This is because I think it fits nicely in as part of an overall strategy.  I can see most people needing PCs.  But sometimes VDI is the right solution, when people need an individual working environment that won’t be interfered with and they need it to be centralised.  But sometimes remote desktop (terminal) services (RDS) is the right solution.  That’s because it gives that centralised environment, but at user densities per server that just cannot be matched by VDI.  And guess what: sometimes you need PCs, VDI, and RDS all in the same infrastructure, just for different users.

But let’s get back on track.  What about RemoteFX?  Would every user not want it?  And what the hell is RemoteFX?

RemoteFX is a feature of Windows Server 2008 R2 with Service Pack 1.  In other words, it’s a few weeks old (after heavy public beta/RC testing).  It allows Hyper-V VDI hosts or RDS session hosts to take advantage of one or more (identical) graphics cards in the physical server to provide high definition graphics to remote desktop clients.  That solves a problem for some users who want to use those graphics intensive applications.  Without RemoteFX, the graphics suck as bitmaps are drawn on screen.  But RemoteFX adds the ability to leverage the GPUs, combined with a new channel, to smoothly stream the animation over the wire.  It also allows client-attached USB devices to be redirected to the user session without the need for drivers on the client.  Sounds great, eh?  Everyone on VDI or RDS should have it!  Or should they?

You can find the hardware requirements for RemoteFX on TechNet.  And this is where things start to get sticky.  A user with a single normal monitor will require up to 184 MB of video card RAM.  That doesn’t sound like much until you start to think about it.  I’ve done a little searching on the HP side.  The largest card that they support is an NVIDIA Quadro FX 5800, which has 4 GB RAM.  That means that a HP GPU can handle 22 users!  You can team the cards, but you can only get so many into a host.  For example, one of the 3 or 4 servers that HP supports for RemoteFX is the 5U tall ML370 G6 (not your typical virtualisation or session host spec) and it only takes 2 cards.  That’s 44 users, which is not all that much, especially when we consider large multi-core CPUs, huge RAM capacities, SAN storage, and Dynamic Memory.  I don’t think this is a failing of RemoteFX; I think this is just a case of applications needing video RAM.  This type of technology is still very early days, and video card manufacturers are watching and waiting.
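The arithmetic behind those figures is worth spelling out.  A quick back-of-the-envelope sketch (using the 184 MB single-monitor figure and the card specs mentioned above; treat the numbers as illustrative, and real sizing will vary with monitor count and resolution):

```python
# Back-of-the-envelope RemoteFX host capacity, using the figures above.
MB_PER_USER = 184            # video RAM per user with one standard monitor
CARD_RAM_MB = 4 * 1024       # NVIDIA Quadro FX 5800: 4 GB of video RAM
CARDS_PER_HOST = 2           # e.g. what the ML370 G6 can take

users_per_card = CARD_RAM_MB // MB_PER_USER   # partial users don't count
users_per_host = users_per_card * CARDS_PER_HOST

print(users_per_card)   # 22
print(users_per_host)   # 44
```

Compare 44 users against what the same host could run as plain VDI or RDS sessions and you see why RemoteFX-for-everyone is a hard sell.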

There are special rack kits that contain lots of video processor/RAM capacity that can be hooked onto servers.  One of my fellow MVPs is using one of these.  They work but they are expensive.

And then there’s the other requirement: the network.  This stuff is designed for 1 Gbps LANs, not for WANs.

So back to where we started: Should I plan on using RemoteFX for everyone?  For most people the answer will be no.  There will be a very small number who will answer yes.  Think about it.  How many end users really do need the features of RemoteFX?  Not all that many.  Implement it for everyone and you’ll have more, bigger hosts, hosting fewer users.  You’ll also be limited to using it in the LAN.

I think we’re back to the horses for courses argument.  Maybe you’ll have something like the following or a variation of it (because there are lots of variations on this):

  • A lot of PCs/laptops on the main network
  • Some people using VDI with whatever broker suits them
  • Some GPU intensive applications being published to PCs/laptops/VDI via RDS session hosts
  • And a measure of App-V for RDS and ConfigMgr to take care of it all!

The PC is dead!  Long live the PC!  But what about the iPad? *running while I still can*


Follow Up on Microsoft Feedback

There’s extremely little I can share about what was discussed at the summit – and to be honest, I’m staying on the cautious side just in case.

However, I thought I should post a follow up on the feedback post from a little while ago.  The folks concerned are listening and acknowledge the feedback.  That is not BS.  Some was already known, and we had plenty of opportunity to discuss the rest.  I can’t say anything more than that.

For most of the product groups, MVP Summit 2011 ended yesterday.  Some PGs have pre- or post- sessions with their MVPs.  I’m heading north in about 1 hour (5am) with an MVP friend to try to photograph some Bald Eagles, weather permitting.

It was good meeting up with MVPs I met last year, talked with via chats/email, or met for the first time, both from the VM PG and from other PGs and regions.  Hopefully I’ll get to do this again next year.


Ben Armstrong  (aka the Virtual PC Guy) has just finished a presentation at the MVP Summit and presented one little bit of non-NDA info that I can share (and I’m sure Ben will correct me if [there’s an if?] I get it wrong).

Most people (including me up to this morning) assume the following about how VMs connect to a vSwitch in Hyper-V networking:


We assume, thanks to the GUI making things easy for us, that properties such as VLAN ID and VMQ, which we edit in the vNIC properties, are properties of the vNIC in the VM.  We assume then that the vNIC connects directly to the vSwitch.  However, it is not actually like that at all in Hyper-V.  Under the covers, things work like this:


In reality, the vNIC connects to a switch port.  This vSwitch Port is not a VM device at all.  And like in the physical world, the vSwitch Port is connected to the switch.  In Hyper-V some networking attributes (e.g. VLAN and VMQ) are not attributes of the vNIC but they’re attributes of the vSwitch Port.

What does this mean and why do you care?  You might have had a scenario where you’ve had to rescue a non-exported VM and want to import it onto a host.  You have some manipulation work to do to be able to do that first.  You get that done and import the VM.  But some of your network config is gone and you have to recreate it.  Why?  Well, that’s because those networking attributes were not attributes of the VM while it was running before, as you can see in the second diagram.
