What the hell is USV?  It’s simple: it’s the use of technologies to unbind user data from the PC.  We’re talking about features like roaming profiles, redirected folders, and offline files.

Believe it or not, most companies I encounter have not done this.  For them, a PC repair is a time-consuming process.  A PC upgrade is a potentially nasty piece of work, using USMT to capture the user state and restore it.

That’s why MS has released a Planning and Designing Guide for Windows User State Virtualization (USV).  Reading this, you can enjoy the tech that the rest of us have been using since the mid-1990s.  Some of us started using redirected folders and offline files back with W2003 and XP.  Admittedly, I disabled Offline Files when managing XP because it was a royal PITA (not a good thing).  Vista/Windows 7 appear to have solved that.

Getting the user state off of the PC is invaluable:

  • Windows upgrades are simple and quick.
  • A PC repair that might take more than 10 minutes can be replaced by a PC rebuild.
  • User data is centralized and easier to back up.
  • Those worried about regulators can archive centrally.

There are a number of notable changes in the Service Pack 1 beta for Windows 7 and Windows Server 2008 R2.  You might not have heard, but they go beyond Hyper-V.  There is a document you can read with all the details.  Here are the highlights for the server OS:

  • Hyper-V Dynamic Memory
  • RemoteFX
  • A new IP address enforcement feature that is not in the beta release.
  • Enhancements to scalability and high availability when using DirectAccess
  • Support for Managed Service Accounts (MSAs) in perimeter networks
  • Support for increased volume of authentication traffic on domain controllers connected to high-latency networks
  • Enhancements to Failover Clustering with Storage

Here are the improvements for the desktop OS:

  • Additional support for communication with third-party federation services
  • Improved HDMI audio device performance
  • Corrected behaviour when printing mixed-orientation XPS documents

Both desktop and server:

  • Change to behaviour of “Restore previous folders at logon” functionality
  • Enhanced support for additional identities in RRAS and IPsec
  • Support for Advanced Vector Extensions (AVX)

This morning I read an article on Network World that I thought I’d write about.  It reported a claim by the Burton Group (yes; them again) that:

  • You should virtualise Exchange
  • You should not use Hyper-V to do it, because it does not have ordered virtual machine start-up.

Let’s take these two, one at a time.

Virtualise Exchange

You can imagine that I’m all for virtualising as much as is reasonable.  A recommendation to virtualise Exchange always needs to come with a disclaimer.  You know this already if you’re a regular reader: Microsoft does not support highly available Exchange databases on any highly available virtualisation platform.  That means no Exchange 2007 CCR on VMware HA/DRS/VMotion.  No Exchange 2010 DAGs on XenServer clusters.  It doesn’t matter what virtualisation product you use; you cannot mix Exchange clustering in virtual machines with virtualisation clustering.  I’ve already flogged this one so I’ll quit now.

Ordered Virtual Machine Start-up

This is the Burton Group’s answer to Charlton Heston’s corpse gripping his six-shooter (oh yes; I did go there!).  This is a tiny thing, and the difference between what they prefer (in VMware) and what Hyper-V offers is small.  The Burton Group’s preferred ordering mechanism for VMs would be:

  1. VM1 starts up
  2. Wait for VM1, then start VM2 and VM3
  3. Wait for …. etc.

Microsoft went a different way.  You can specify (in seconds) how long a virtual machine should wait before starting up after a host powers up.
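
To put numbers on how close the two approaches get, here’s a quick illustrative sketch (plain Python, not any vendor’s API) comparing a dependency-ordered start with Hyper-V’s fixed per-VM delays:

```python
def ordered_start(tiers, boot_time):
    """Dependency ordering: each tier starts only after the previous tier has booted.
    tiers is a list of lists, e.g. [["DC1"], ["SQL1"], ["WEB1", "WEB2"]]."""
    finish, t = {}, 0
    for tier in tiers:
        for vm in tier:
            finish[vm] = t + boot_time
        t += boot_time          # the next tier waits for this one
    return finish

def delayed_start(delays, boot_time):
    """Hyper-V style: each VM starts a fixed number of seconds after host power-up."""
    return {vm: delay + boot_time for vm, delay in delays.items()}

tiers = [["DC1"], ["SQL1"], ["WEB1", "WEB2"]]
print(ordered_start(tiers, boot_time=120))
# Picking delays that match each tier's expected slot gives the same end result:
delays = {"DC1": 0, "SQL1": 120, "WEB1": 240, "WEB2": 240}
print(delayed_start(delays, boot_time=120))
```

With reasonable delay estimates both schemes finish every VM at the same time; the practical difference only shows up when boot times vary unpredictably.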

Here’s my thinking: the Burton Group would like you to avoid the virtualisation solution that can really change how IT works, and go with something else, because of one tiny feature.  Huh!  I love how great IT experts take care of their customers and readers ;-)  Hyper-V is not the complete solution.  It’s the facilitator for Dynamic IT and for Optimised IT.  The System Center products are the agents that make use of Hyper-V.  You can change how you deploy servers and applications.  You can change how you monitor them.  You can change how you back up your business.  You can change how you present user applications to the business.  You can do all of this from an integrated management solution that manages Hyper-V and your physical infrastructure.  So … get all that, versus pay between 2 and 5 times more for a virtualisation solution with the ability to start up VMs in a specific order.  I know which enterprise-ready solution I’d go for.


You wanted 4 virtual CPUs in a Hyper-V Linux virtual machine?  You wanted clock sync and host shutdown sync?  Now you got it!

Ben Armstrong has just blogged that the version 2.1 integration components (or integration services if you are a VMM head) are released.  Mike Sterling is the man in the know, so you can read what he has blogged to get all the news.  BTW, I included this version of the ICs in Mastering Hyper-V Deployment. *end shameless plug*

This release is a huge step forward in gaining acceptance for Hyper-V from the Linux admins because SLES and RHEL are really equal citizens on Hyper-V now.  Now we just need VMM to catch up ;-)


Thanks to being in the hosting business for the past 3 years and doing short-term contracting before that, I’ve never had to deal with the nightmare that is Microsoft volume activation.  My new role requires that I understand it, and it crops up plenty in an exam I’m preparing for.  KMS, MAK, and MAK with VAMT are three activation methods that spring to mind.  KMS is what you’ll try to use in a large environment with more than 25 clients.  KMS clients must be on the network to reactivate every 180 days.  MAK with VAMT is recommended for up to 50 clients … there’s a grey crossover area there!  MAK on its own is recommended for smaller environments.
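
Those rules of thumb boil down to a small decision helper (my own illustrative sketch of the guidance above, not anything from Microsoft’s documentation):

```python
def suggest_activation_method(client_count):
    """Pick a volume activation method from the rough client-count guidance:
    KMS from roughly 25 clients up, MAK with VAMT for up to 50, plain MAK below."""
    if client_count < 25:
        return ["MAK"]                   # too few clients to sustain a KMS count
    if client_count <= 50:
        return ["KMS", "MAK with VAMT"]  # the grey crossover area: either works
    return ["KMS"]                       # large environment

print(suggest_activation_method(10))   # ['MAK']
print(suggest_activation_method(40))   # ['KMS', 'MAK with VAMT']
print(suggest_activation_method(500))  # ['KMS']
```

The 25–50 range is where you make a judgement call; remember too that KMS clients must reach the KMS host on the network every 180 days to reactivate.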

You can’t install KMS on W2008, but you can with a patch, but you have to activate Windows 7 with a W2008 R2 key, and you can’t activate Office 2010 with it, but you can with a W2003/W2008R2/Windows 7 KMS … you see where I’m going with all this?

Maybe volume activation needs a rethink?  Maybe it should be engineered to be as simple as Terminal Services (RDS) Licensing is.

You can read the Volume Activation Deployment Guide for Windows 7 to get some help.  And remember that Office 2010 also requires activation.


“This step-by-step guide walks you through the process of setting up a working personal virtual desktop that uses RemoteFX in a test environment. Upon completion of this step-by-step guide, you will have a personal virtual desktop with RemoteFX assigned to a user account that can connect by using RD Web Access. You can then test and verify this functionality by connecting to the personal virtual desktop from RD Web Access as a standard user”


Matt McSpirit has posted about Vizioncore releasing a new and free management pack for monitoring VMware using Operations Manager 2007 R2.  That’ll shake things up a bit!


Microsoft exams are a funny beast.  I’ve worked in the hosting business for the last 3 years.  The only reason that hosting companies even bother with the MS partnership program is because it is a requirement to be at least a registered partner to get SPLA.  After that, it’s pretty pointless because MS is a competitor (Azure, etc) rather than being a partner to hosting companies.  So I didn’t really do anything to maintain my certification status other than complete my 2000-to-2003 MCSE upgrade a few years ago.

Now I’m working for a consulting company that is a partner and where the partnership is very important (naturally enough).  I’ve got to get certain exams and I’ve got to upgrade from 2003 MCSE.  I’ve also got to replace my dust-collecting elective exams from the 2000 generation.  I was looking through syllabus material yesterday and decided I’d sit the OpsMgr 2007 exam this morning.

I found the exam to be pretty easy; 2.5 years of using OpsMgr every day, including design, deployment, and troubleshooting, prepared me perfectly.  Most of the exam was based on management pack management and customization, notifications, and a little backup/recovery.  Oddly enough, there was more material in the exam on certificate-enabled agents than you’ll find in any whitepaper or TechNet page!  I’ve previously blogged about this subject (around 2 years ago).

Now, most of us know what MS exam questions are like.  Don’t answer with real world solutions; instead you should answer with the marketing solutions.  And sometimes, there is a question that makes absolutely no sense at all.

For example, I had one question that gave me a scenario where an agent did not appear in a view, and asked how I would troubleshoot it.  The answer was … to review the agent in the view where it wasn’t appearing in the first place!  I know I got the right answer because my exam score was 1000/1000.  I left a comment on the question to explain the silliness of the scenario.  I knew the answer was not really a real-world answer.  I was only sure of the “answer” for this question because the other 3 options made no sense or weren’t options.  Someone familiar with agent deployment but struggling with the question would have assumed that one of the others was the answer, because the real “answer” made no sense.  That’s quite unfair.

I struggled with this stuff when I originally started doing MS certification.  I’ve no problem admitting that I miserably failed my first ever exam: Windows NT 4.0 Workstation.  I answered questions based on what I knew, what I had learned, and what was documented in the real world.  That experience drove me away from exams for quite a while.  After one or two 2000 exams, I learned what to look for.  There’s usually a key word or phrase in a question.  My problem is that I get wound up in an exam and speed-read, missing that key word or phrase.  I learned to control this, catch the phrase, and let it guide me to the answer.  But then there is the marketing question/answer.  Those are a struggle, because sometimes one of the alternative (and wrong) answers is a stepping stone to a real solution.  But you have to ignore that.  Those are the questions I tick for review before ending the exam.  I’ve had times when I’ve gone over those 4 or 5 times, changing my mind over and over.

Anyway, I’m considering ConfigMgr for my next exam as an elective replacement.  I also have to do the R2 virtualisation exam.  I haven’t really looked at VDI – can anyone explain to me why there is a full module on VDI in the R2 virtualisation exam when there is a dedicated VDI exam?  And I’ll have to find time to replace my AD design elective and do the two MCSE 2003 -> 2008 upgrade exams.  Ugh!


I just read about this attack.  It uses Siemens software to install a rootkit.  The vulnerability starts with a static password that Siemens inserted.  (I once worked in a bank where, I am told, MSBlaster got in via a Siemens phone engineer using the modem in their systems servers to dial out to the net.)  The rootkit then uses a stolen private certificate key to pretend to be a Realtek driver so that it can install on 64-bit OSes (Vista and later).  MS and Realtek have figured out a solution (this requires Windows Update to be working).  Interesting stuff.


Lots of out-loud thinking here ….

If you put a gun to my head right now and asked me to pick a hardware virtualization solution for VDI then I honestly wouldn’t pick Hyper-V.  I probably would go with VMware.  Don’t get me wrong; I still prefer Hyper-V/System Center for server virtual machines.  So why VMware for VDI?

  • I can manage it using Virtual Machine Manager.
  • It does have advanced memory management features.

The latter is important because I feel that:

  • Memory is a big expense for host servers and there’s a big difference between PC memory cost and data centre memory cost.
  • Memory is usually the bottleneck on low end virtualisation.

Windows Server 2008 R2 Service Pack 1 will change my mind when it RTMs, thanks to Dynamic Memory.  What will my decision-making process be then?  We do have options: if you have to push out VMware (free ESXi) hosts now, you can always switch to Hyper-V at that point.

Will I want to make the VDI virtual machines highly available?

Some organizations will want to keep their desktop environment up and running, despite any scheduled or emergency maintenance.  This will obviously cost more money because it requires some form of shared storage.  Thin provisioning and deduplication will help reduce the costs here.  But maybe a software solution like that from DataCore is an option?

Clustering will also be able to balance workloads thanks to OpsMgr and VMM.

Standalone hosts will use cheaper internal disk and won’t require redundant hosts.

Will I have a dedicated VDI Cluster?

My thinking is that VDI should be isolated from server virtualisation.  This will increase hardware costs slightly.  But maybe I can reduce this by using more economical hardware.  Let’s face it, VDI virtual machines won’t have the same requirements as SQL VMs.

What sort of disk will my VDI machines be placed on?

OK, let me start an argument here.  Let’s start with RAID:  I’m going RAID5.  My VDI machines will experience next to no change.  Data storage will be on file servers using file shares and redirected folders.  RAID5 is probably 40% cheaper than RAID10.
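
A quick back-of-the-envelope check on that 40% figure (illustrative arithmetic only, assuming identical disks and ignoring controller cost):

```python
def usable_fraction(raid_level, disk_count):
    """Fraction of raw array capacity left after redundancy overhead."""
    if raid_level == "RAID5":
        return (disk_count - 1) / disk_count  # one disk's worth of parity
    if raid_level == "RAID10":
        return 0.5                            # every block is mirrored
    raise ValueError(raid_level)

# Relative cost per usable GB for an 8-disk array of identical disks:
disks = 8
raid5_cost = 1 / usable_fraction("RAID5", disks)    # ~1.14x the raw disk cost
raid10_cost = 1 / usable_fraction("RAID10", disks)  # 2.0x the raw disk cost
saving = 1 - raid5_cost / raid10_cost
print(f"RAID5 saves about {saving:.0%} per usable GB")
```

With 8 disks the saving per usable gigabyte works out at roughly 43%, so the 40% estimate is in the right ballpark; RAID5’s write penalty is a separate argument entirely.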

However, if I am dynamically deploying new VMs very frequently (for business reasons) then RAID10 is probably required.  It’ll probably make new VM deployment up to 75% faster.

What type of disk?  I think SATA will do the trick.  It’s big and cheap.  I’m not so sure that I really would need 15K disk speeds.  Remember, the data is being stored on a file server.  I’m willing to change my mind on this one, though.

The host operating system & edition?

OK: if the Hyper-V host servers are part of the server virtual machine cluster then I go with Windows Server 2008 R2 Datacenter Edition, purely because I have to (for server VM Live Migration).

However, I prefer having a dedicated VDI cluster.  Here’s the tricky bit.  I don’t like Server Core (no GUI) because it’s a nightmare for hardware management and troubleshooting.  If I had to push a clustered host out now for VDI then I would use Windows Server 2008 R2 Enterprise Edition.  That will give me a GUI, Failover Clustering, and Live Migration.

If I had time, then I would prepare an environment where I could deploy Hyper-V Server 2008 R2 from something like WDS or MDT.  That would allow me to treat a clustered host as a commodity.  If the OS breaks, then 5 minutes of troubleshooting, followed by a rebuild with no questions asked (use VMM maintenance mode to flush VMs off if necessary).

Standalone hosts are trickier.  You cannot turn them into a commodity because of all the VMs on them.  There’s a big time investment there.  They lose points for this.  This might force me into troubleshooting an OS (parent partition) issue if it happens (to be honest, I cannot think of one that I’ve had in 2 years of running Hyper-V).  That means a GUI.  If my host has 32GB or less of RAM then I choose W2008 R2 Standard Edition.  Otherwise I go with W2008 R2 Enterprise Edition.
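
For what it’s worth, that standalone-host choice reduces to a couple of lines (an illustrative sketch; the 32GB RAM ceiling of W2008 R2 Standard is the deciding factor):

```python
def pick_standalone_host_edition(ram_gb):
    """Choose a parent partition edition for a standalone Hyper-V host.
    W2008 R2 Standard Edition supports up to 32 GB of RAM; beyond that
    you need Enterprise Edition."""
    if ram_gb <= 32:
        return "Windows Server 2008 R2 Standard"
    return "Windows Server 2008 R2 Enterprise"

print(pick_standalone_host_edition(32))  # Standard
print(pick_standalone_host_edition(64))  # Enterprise
```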

I warned you that I was thinking out loud.  It’s not all that structured, but this might help you ask some questions if you’re thinking about what to do for VDI hosts.


The early bird registration for the virtualization conference is open.  PubForum is doing a second event in 2010, this time in Berlin.

It’s an economical event.  Don’t let the name fool you.  It might be fun, but during the event it is serious stuff, with some of the big names in virtualization speaking and sharing.

For example, I was at the Frankfurt event a couple of months ago.  I spoke for 2 hours on Hyper-V best practices on the Friday afternoon.  I had a one-hour break, during which I was answering questions and even used RDS Gateway to demo System Center and Hyper-V.  Then I was back in and speaking for another hour on the newer add-ons to Hyper-V.

I strongly recommend attending if you can.  It’s conveniently timed, with minimal impact on work.  It is very economical.  Yes, it is fun, but you will learn lots and have a chance to ask the experts the hard questions.


Some documentation has been published by Microsoft for DPM 2010.

Wonder why I post this stuff?  Because I can find it more easily on my blog than I can on the net.  I really do use my blog as my personal notebook.


Microsoft has published some documentation for RemoteFX to go along with the Service Pack 1 beta.


This has been released for W2008 and W2008 R2 x64.  I didn’t find a 32-bit version (for W2008).  You can learn more about this solution in a series of articles discussing the beta.


I recently learned from Hans Vredevoort that it is actually possible to define anti-affinity for Hyper-V virtual machines on a cluster.  For example, you might want to force load-balanced virtual web servers to be on different nodes.  You can do this by running commands such as:

cluster.exe group "VirtualWebServer1" /prop AntiAffinityClassNames="NLBCluster1"

cluster.exe group "VirtualWebServer2" /prop AntiAffinityClassNames="NLBCluster1"

This assigns both virtual machines to an anti-affinity class named NLBCluster1, and the cluster will try to prevent the two virtual web servers from being placed on the same Hyper-V host in the cluster.  A failover with reduced capacity can override this in order to keep the virtual machines running when there aren’t enough hosts left to meet demand.
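
The placement behaviour can be sketched in a few lines of plain Python (a toy model for illustration, not the actual cluster service algorithm):

```python
def place_vms(vms, hosts):
    """Toy placement that honours anti-affinity classes where possible.
    vms maps VM name -> anti-affinity class (or None).  When there aren't
    enough hosts, it falls back to sharing a host so the VM keeps running,
    mirroring the failover behaviour described above."""
    placement = {}
    classes_on_host = {h: set() for h in hosts}
    for vm, cls in vms.items():
        # Prefer a host that doesn't already run a VM of this class.
        candidates = [h for h in hosts if cls is None or cls not in classes_on_host[h]]
        host = candidates[0] if candidates else hosts[0]  # fallback: availability wins
        placement[vm] = host
        if cls:
            classes_on_host[host].add(cls)
    return placement

vms = {"VirtualWebServer1": "NLBCluster1", "VirtualWebServer2": "NLBCluster1"}
print(place_vms(vms, ["HostA", "HostB"]))  # the two web servers land on separate hosts
print(place_vms(vms, ["HostA"]))           # only one host left: both land on HostA anyway
```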


To be honest, even though my main interest in the MS world has been Hyper-V and associated technologies, I’ve avoided Remote Desktop Services VDI.  There might be lots of interest but I reckon the cost of it (hardware, licensing, more management systems rather than less) will scare most of that away (and this goes for all the vendors, not just MS).

RemoteFX has stirred up a lot of interest.  Here’s a link to a blog post talking about the requirements to get RemoteFX up and running.


Microsoft has released a hotfix rollup for the VMM 2008 R2 Admin Console.  It resolves two problems:

  • If a VM is configured to have 3 virtual processors, the SCVMM Admin Console crashes
  • When you remove a virtual hard disk from a virtual machine in System Center Virtual Machine Manager 2008 R2, the .vhd file on the Hyper-V server is deleted without warning

The rollup is available via Windows Update (WSUS/ConfigMgr … check your approved products) and can be manually downloaded.  The manual installation instructions are:

“To install this hotfix rollup package that can be downloaded from the Microsoft Update Catalog on the Virtual Machine Manager server, follow these steps:

  1. Extract the VmmClient32Update.cab or VmmClient64Update.cab file to a temporary directory.
  2. Open an elevated command prompt, type the following command for the 32-bit package, and then press ENTER to install the update:

    msiexec /update vmmClient32Update.msp BOOTSTRAPPED=1

    Note For the 64-bit package, type the following command:

    msiexec /update vmmClient64Update.msp BOOTSTRAPPED=1”


Back to Blogging Again

It has been nuts for the last 2-3 months.  The book has consumed every hour of almost every day.  The hard part is over; all that remains are the edit reviews, which I tend to fly through.  I’ve not been able to keep up with the blogging as much as I used to.  I tried to keep up with the headline stuff, but a lot of smaller things will have slipped by.  I’ll be trying to catch up and keep up … at least for a while.  There could be more work around the corner.  Plus I need to do some certification work.


I’ve just submitted the last of my content to Sybex for Mastering Hyper-V Deployment.  It’s been a long and tough road.  Early work started on the project in February.  I’ve been doing my normal day job and trying to squeeze in chapters in a rush schedule.  I’ve been working during the morning commute, at lunchtime, the evening commute, into the night, and at weekends.  My co-author is close to finishing his chapters on schedule.  I’ve been doing the first of the reviews as we’ve moved through the project.  I’m probably already a third of the way through the copy edits (2nd set of reviews).  After that comes the final set (I hope) of layout edits.  And then off it goes to the printers for release in November.  I can’t wait!


Microsoft terminated all support for Windows 2000 Server last week (13 July 2010).  That means you get no more bug fixes and no more security fixes.

You really should start looking at doing an upgrade for those machines, pending application support.  And give your application vendors a piece of your mind if they don’t yet have an upgrade path.

MS has provided some help in the Microsoft Assessment and Planning Toolkit 5.0.  It will assess Windows 2000 environments and produce reports/spreadsheets that you can use in planning a migration to Windows Server 2008 R2 (there is no direct upgrade path, even with intermediate hops, because W2008 R2 is 64-bit only).


I am writing the last of my chapters for Mastering Hyper-V Deployment at the moment.  This one is going to focus on the role that Hyper-V can play in the small and medium business.  The first half of the chapter covers SBS 2008 (with a nod to BPOS, etc.) and the second half covers SCE 2010.  I’ve got the licensing and deployment side of things covered.  But I’d really like to hear from you if you have deployed SBS 2008 on Hyper-V.

  • What were your concerns?
  • What unexpected advantages were there?
  • What unexpected disadvantages were there?
  • Were there any hiccups along the way?
  • Anything else that might be useful?

Ben Armstrong has posted links to lots of Dynamic Memory documentation on his blog.


I was looking for some official documentation on VMQ and TCP Chimney for Windows Server 2008 R2 Hyper-V this morning.  All I was finding were incomplete third-party blog posts.  My last-gasp searches eventually brought me to a Microsoft document called "Networking Deployment Guide: Deploying High-Speed Networking Features", which goes into a good bit of detail.  It looks pretty good at first glance.


MS is sure making it hard to write a “current” book on Hyper-V virtualization.


A release candidate has appeared for a V2.0 VMM Self-Service Portal.

“VMMSSP (also referred to as the self-service portal) is a fully supported, partner-extensible solution built on top of Windows Server 2008 R2, Hyper-V, and System Center VMM. You can use it to pool, allocate, and manage resources to offer infrastructure as a service and to deliver the foundation for a private cloud platform inside your datacenter. VMMSSP includes a pre-built web-based user interface that has sections for both the datacenter managers and the business unit IT consumers, with role-based access control. VMMSSP also includes a dynamic provisioning engine. VMMSSP reduces the time needed to provision infrastructures and their components by offering business unit “on-boarding,” infrastructure request and change management. The VMMSSP package also includes detailed guidance on how to implement VMMSSP inside your environment.

Important: VMMSSP is not an upgrade to the existing VMM 2008 R2 self-service portal. You can choose to deploy and use one or both self-service portals depending on your requirements”.

My chapter on VMM was completed a while ago.  *sigh*

Service Pack 1 Beta

A public beta for Windows 7 and Windows Server 2008 R2 Service Pack 1 was launched today.  This includes RemoteFX and Dynamic Memory.

Ben Armstrong has blogged to confirm the supported guest operating systems for Dynamic Memory (as I posted here a while back based on the TechEd announcement and Ben’s TechEd presentation). 

Windows Azure Appliance

We also have had the announcement of the Windows Azure Appliance, coming at some point in the future.  This will be supported on specific hardware.  It’s hardly surprising that Dell was on the stage; Azure runs on Dell hardware, at least in Dublin.  eBay and Fujitsu were also on stage.

This product has been uber-secret, with nary a whisper.  Some of us expected a different announcement – a traditional VM hosting solution based on Azure.  I missed everything after the appliance announcement – the keynote was waaaay too long and I was well hammered cos I was playing the “Bob Muglia cloud drinking game” (I’m kidding – I didn’t have any booze with me).  I’m left wondering a few things:

  • Where does this leave Hyper-V?  MS’s pitch for hosting is that Azure is the only thing to develop on.  Is that true of the private cloud now?  Is MS killing Windows Server?
  • Has MS muddied up their offering, making it confusing for the purchaser?  Have they just frozen Hyper-V sales with the promise of something “better”?
  • Azure Appliance sounds like it might have a limited HCL.  Without knowing the architecture, are we straying into VMware territory?

Yeah, I’m being a bit cynical.  But I tend to approach new things from a critical point of view.  You can pretty much take it that I’m genuine when I say I like something.

BTW, Steve Ballmer said that if you’re not interested in “The Cloud” then MS wants nothing to do with you.  Oh-Kay then!  I wonder if the shareholders are comfortable with him ad-libbing?


This came in the mail overnight:

“Deploy Windows 7 and Office 2010 quickly and reliably—while boosting user satisfaction

Microsoft® Deployment Toolkit (MDT) 2010 Update 1 is now available! Download MDT 2010 Update 1 at: http://go.microsoft.com/fwlink/?LinkId=159061

As you prepare to deploy Windows® 7, Office 2010, and Windows Server® 2008 R2, get a jump start with MDT 2010 Update 1. Use this Solution Accelerator to achieve efficient, cost-effective deployment of Windows 7, Office 2010, and Windows Server 2008 R2.

This latest release offers something for everyone. Benefits include:

For System Center Configuration Manager 2007 customers:

New “User Driven Installation” deployment method. An easy-to-use UDI Wizard allows users to initiate and customize operating system and application deployments to their PCs that are tailored to their individual needs.

Support for Configuration Manager R3 “Prestaged Media.” For those deploying Windows 7 and Office 2010 along with new PCs, a custom operating system image can easily be preloaded and then customized once deployed.

For Lite Touch Installation:

Support for Office 2010. Easily configure Office 2010 installation and deployment settings through the Deployment Workbench and integration with the Office Customization Tool.

Improved driver import process. All drivers are inspected during the import process to accurately determine what platforms they really support, avoiding common inaccuracies that can cause deployment issues.

For all existing customers:

A smooth and simple upgrade process. Installing MDT 2010 Update 1 will preserve your existing MDT configuration, with simple wizards to upgrade existing deployment shares and Configuration Manager installations.

Many small enhancements and bug fixes. Made in direct response to feedback received from customers and partners all around the world, MDT 2010 Update 1 is an indispensible upgrade for those currently using MDT (as well as a great starting point for those just starting).

Continued support for older products. MDT 2010 Update 1 still supports deployment of Windows XP, Windows Server 2003, Windows Vista®, Windows Server 2008, and Office 2007, for those customers who need to be able to support these products during the deployment of Windows 7 and Office 2010.

Next steps:

Download Microsoft Deployment Toolkit 2010: http://go.microsoft.com/fwlink/?LinkId=159061.

Learn more by visiting the MDT site on Microsoft TechNet: www.microsoft.com/mdt.

Get the latest news by visiting the Microsoft Deployment Toolkit Team blog: http://blogs.technet.com/msdeployment/default.aspx.

Provide us with feedback at satfdbk@microsoft.com.

If you have used a Solution Accelerator within your organization, please share your experience with us by completing this short survey: http://go.microsoft.com/fwlink/?LinkID=132579.


Microsoft Deployment Toolkit Team”
