2009
12.12

Last month I blogged about how a Hyper-V snapshot had caused some difficulties.  I hadn’t realised how much effect that unmerged snapshot had.

We run OpsMgr and use it not only for fault monitoring but also for performance monitoring.  I noticed that sometime after we upgraded to OpsMgr 2007 R2, two of our agents stopped gathering performance stats.  I couldn’t see live performance information in the OpsMgr console, nor in the reports (from before a certain date).  PerfMon on the servers worked perfectly.

I repaired the agents and then re-installed them by hand.  Reboots were done.  The agents still refused to gather performance statistics. This was probably back in August/September.

I opened a PSS call under our support program to get some help when I ran out of ideas.  The problem made no sense to the PSS engineers because fault monitoring was working fine.  The machines in question were healthy.  I gathered countless logs and did countless tests.  The call ended up getting escalated not just once, but twice.  A few weeks ago I did some SQL queries on behalf of a PSS engineer.  We could see that performance data stopped being stored in the OpsMgr reporting database some time after the upgrade.
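What those SQL queries boiled down to was hunting for holes in the stream of stored perf samples.  Here’s a minimal Python sketch of that kind of gap check (the sample timestamps are made up for illustration; the real check was T-SQL against the OpsMgr reporting database):

```python
from datetime import datetime, timedelta

def find_gaps(sample_times, max_interval=timedelta(minutes=15)):
    """Return (start, end) pairs where consecutive perf samples
    are further apart than the expected collection interval."""
    ordered = sorted(sample_times)
    gaps = []
    for earlier, later in zip(ordered, ordered[1:]):
        if later - earlier > max_interval:
            gaps.append((earlier, later))
    return gaps

# Samples every 5 minutes, then a long silence -- the kind of hole
# we saw in the reporting database after the R2 upgrade.
base = datetime(2009, 8, 1, 9, 0)
samples = [base + timedelta(minutes=5 * i) for i in range(4)]
samples.append(base + timedelta(hours=6))

print(find_gaps(samples))  # one gap: 09:15 -> 15:00
```

Run something like this against the sample timestamps per agent and the broken agents light up immediately, which is exactly the pattern the direct queries showed.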

Other agents were fine.  We started focusing on comparing working agents with the 2 non-working agents.  Everything checked out so now we started getting particularly paranoid about things like service packs and regional settings.  I really didn’t like that because we hadn’t had any problems with these machines until maybe a month after we upgraded to OpsMgr 2007 R2.

I was getting ready to give up yesterday afternoon.

I don’t know why I did it, but I went into the OpsMgr console to have a peek at some performance stats for another agent.  One of the non-working agents was still selected from previous tests a while ago.  Wait … I could see a graph for CPU utilisation.  The agent was working.  I checked more stats for disk and memory.  They worked.  I checked the other non-working agent.  It was working.  Huh! 

I fired up the reporting console and ran reports on the non-working machines for the last year.  I had a complete graph with no data gaps.  That’s strange.  I ran a report for the period when I “knew” that data wasn’t being gathered.  I had complete graphs with correct-looking numbers of data samples.

So it appears that data was being gathered but it wasn’t being processed correctly.  Even when I couldn’t see the data in reports, graphs or SQL queries, the data was there somewhere in a pre-processing stage, waiting to be added into the relevant tables.

OK, what had changed in the last month or so since I had tried one of these reports?  We had migrated from Windows Server 2008 Hyper-V to Windows Server 2008 R2 Hyper-V.  Could there be a change in the way that performance data was gathered in a VM?  Definitely not.  Had we any changes at the VM level?  That’s when I remembered the issue in that blog post.

When I moved the OpsMgr VM, Hyper-V had to merge a snapshot that we had deleted some time beforehand.  It had been running with the AVHD (snapshot/checkpoint differential disk) for 4 months.  It started to affect performance of the VM so badly that TCP was having timeouts.  There were performance issues that were virtual storage related.  Could it be that this affected database operations in the VM?  Of course it could, if things had reached the point of messing up TCP.

NOTE: I have only ever used snapshots in production in VM internals upgrade scenarios.  I usually delete the snapshot after success is verified and allow a merge to take place.  That means there should be no impact on performance as long as you do things in a timely manner.  Somehow I must have forgotten to do that this time. 

So here’s what I suspect happened.  The OpsMgr agents actually worked perfectly.  They gathered the performance stats the entire time and sent them to OpsMgr.  I am guessing that OpsMgr caches the data for processing.  Due to the unmerged AVHD/snapshot performance issues, the data stopped being processed correctly and sat in that cache.  We know it didn’t make it to the point of being reportable because a direct SQL query showed a data gap.  The problem reared its ugly head around a month after the snapshot was taken.  The AVHD/snapshot was merged back in early November and that resolved the performance issue for this VM.  It also sorted out whatever hitch there was in performance processing for these agents.  The data that was cached somewhere made its way into the reporting database and live graphs suddenly appeared for the two machines in the OpsMgr console.  That’s the funny bit; it only affected these two agents.

MS PSS are still curious.  The engineer seems to accept the explanation I’ve given him but he’s still curious to dig around and confirm everything, maybe try to see if he can get details on what happened internally.  I’ve got to credit him for that; most support staff would just close the call and move on.

So once again:

Hyper-V Snapshots or VMM Checkpoints, i.e. AVHD differential disks, should not be used in production.  They are a form of differential VHD that doesn’t perform well at all.  They really do affect performance and I’ve seen the proof of that.  In fact they affect functionality in the most unpredictable of ways due to their performance impact.  Use something like DPM instead for state captures via backup at the host level.  That’s an issue right now with the lack of CSV support in DPM.  If you really need CSV right now then have a look at 3rd party providers, or wait until DPM 2010 is released (approx April 2010) before you deploy CSV.

2009
12.12

MS Continues To Impress

Overnight I got an email from a quite senior Microsoft employee, responding to some criticism I had recently made.  It was further proof that MS listens; the person in question was looking to engage with me and listen to my criticisms.  I know, I’m only one person and not of any great significance.  But it is cool to see MS interact like this.

2009
12.11

It was announced today that Microsoft acquired a company called Opalis.  Opalis provides solutions to:

  • Cloud Bursting – automate public cloud provisioning to handle peak loads and prevent SLA violations
  • Cloud Cover – automate failover to public or private clouds
  • Private Cloud Operation – create and manage service driven, flexible capacity with automation
  • Sophisticated triggering – subscribe to external events to trigger workflow processes that add, reduce or fail-over to cloud resources according to policies and SLAs

I was wondering if this would be something that would be used solely in Azure.  But two things say “no” to me on that.  First is the System Center badge on the above site.  Second is the line “automate failover to public or private clouds”.  Think of Azure as a public cloud.  Think of a MS hosting partner running a Hyper-V based private cloud.  We already know that MS plans to add the ability to migrate VM’s to Azure from private clouds using VMM.  Now I guess they have technology to allow for an automated failover or DR plan, i.e. you can run your daily operations in one cloud and fail over to another cloud.

I can see the bursting and triggering tying in nicely with the OpsMgr/VMM integration provided by PRO tips, e.g. OpsMgr sees a bottleneck and Opalis technology in VMM triggers a new VM deployment to cater for the load.  When demand goes down then the burst VM’s are drained and withdrawn.  Sounds like a cool idea!

I wouldn’t expect to see this stuff appear for another 2 years.  We’ve just gotten VMM 2008 R2 and the Software Assurance cycle will next kick in around October/November 2011 (GA date, RTM being around August 2011).

2009
12.11

I was talking to a few consultants last week and lots of the CIO’s they are meeting are talking about one thing right now: Virtual Desktop Infrastructure or VDI.  They’ve been hearing this term from many sources.  VMware has made a bit of a push on it, Citrix have made a huge push on it seeing their Presentation Server (or whatever the hell it’s called this week) getting squeezed out by MS, and MS has released Remote Desktop Services in Windows Server 2008 R2.  It seems these CIO’s want to talk about nothing else right now.

I can understand the thinking about VDI.  It can solve the branch office issue by placing the desktop beside the data and server applications in the data centre.  Unlike Terminal Services, a helpdesk engineer can make changes to a VDI machine without change control.  Instead of PC’s you can use terminals that should be cheaper and should have no OS to manage.  It all sounds like costs should be lower and all that “nasty” PC management should disappear.  Right?

*Ahem* Not quite.

  • Branch Offices: Yes, this is true.  By placing the VM, the user’s execution environment, in the data centre you speed up access to data and services for remote users.  Let me ask a question here.  How much does it cost to buy a PC?  Around €400 or thereabouts will do for a decent office PC.  It even comes with an OEM license for Windows.  How much does it cost for 2GB of RAM in a server?  Around €200, not to mention the cost of the server chassis, the rack space, the power and the cooling.  How about storage?  A PC comes with a SATA disk.  A 250GB SATA drive for a server is around €250.  It seems to me that we’ve already exceeded the up-front costs.  I have done detailed breakdowns on this stuff at work to compare VDI with Terminal Services.  With VDI there is no memory or storage usage optimisation.  You get this with Terminal Services.  My opinion has changed over time.  Now I say if you want to do end user computing in the data centre then Terminal Services is probably the way to go.
  • Change Control: On a very basic VDI system, yes, a helpdesk engineer can fix a problem for an end user without change control.  Terminal Services absolutely requires change control because a change to software on the server affects everyone.  However, if you are using pooled VDI or trash’n’burn VDI (a VM invoked when a user logs in and destroyed when they log out) then there’s a good chance the problem returns when the user logs in again, thus requiring second or third level engineering.
  • Terminal Cheaper than PC’s: Hah!  I went out of my way at a recent Citrix VDI event here in Dublin to talk to one of the sponsors about terminals and their costs.  Their terminals were about the same cost as a PC or laptop depending on the form factor.
  • Terminals have less management than PC’s: Uh, wrong again.  There is still an operating system to manage on these machines and it’s one that has less elegant management solutions.  It still needs to be populated and controlled.  I’ve also been unable to get an answer from anyone on whether EasyPrint support is added into any of the terminals out there.  Without EasyPrint you either have awful cross-WAN printing experience or pay up for expensive 3rd party printing solutions.
  • Terminals cheaper part 2: The user still needs a copy of Vista or Windows 7 for their virtual machine, so where does that come from?  You need to know that you cannot go out and use just any old Windows license in a VDI environment.  It has to be a special one called Virtual Enterprise Centralised Desktop (VECD).  This can only be purchased if you have software assurance on your desktop … uh … but we’re running terminals without a Windows Vista/7 license.  Yeah, ask your LAR about that one!  And we know SA adds around 33% to your costs every 2 to 3 years.  That PC with an OEM install of Windows 7 Professional or Ultimate is sounding pretty sweet right about now.
  • VDI is easier to manage: How do you manage a PC?  You have to put AV on it, you have to patch it, you have to deploy software to it, you have to report on license usage, you have to use group policy, etc.  That’s everything you also have to do with VDI using the exact same techniques and systems.  I see nothing so far about hardware management.  Let’s look at that.  You have to have 2 power sockets, a network socket and cabling, and every now and then one breaks and has to be replaced/repaired.  That sounds like everything you have to do with a terminal.  OK; the operating system on the machine?  I grant you that one.  A terminal has a built in OS.  A PC has to be installed but you can easily use MDT (network or media) to build PC’s with almost no effort and it’s free.  You also have ConfigMgr and WDS as alternative approaches.  WDS even allows people to build their own PC’s from an access controlled image.
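The back-of-an-envelope sums in that first bullet can be laid out explicitly (a Python sketch; the euro figures are the rough 2009 numbers quoted above, not real quotes):

```python
def per_user_upfront(costs):
    """Sum the rough up-front hardware cost per user, in euro."""
    return sum(costs.values())

# Rough 2009 figures from the comparison above.
pc = per_user_upfront({
    "decent office PC (includes OEM Windows license)": 400,
})

vdi = per_user_upfront({
    "2GB of server RAM": 200,
    "250GB server SATA drive": 250,
    # Chassis, rack space, power, cooling and the terminal itself
    # aren't even counted yet.
})

print(pc, vdi)  # 400 450 -- VDI already costs more before the extras
```

Swap in your own vendor pricing and the picture doesn’t change much: the server-side resources alone exceed the cost of the PC they replace.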

For me, VDI is just too expensive to be an option right now.  Why do you think Microsoft hasn’t been singing from the heavens about Remote Desktop Services?  Sure, it’s a messy looking architecture but they know that the PC is here to stay for a long time yet.  The PC is relatively cheap to buy and own.  TCO?  Citrix have screamed about that one since the days of WinFrame and they haven’t managed to convert the world.  Sure, Citrix/Terminal Services is in most organisations but it’s more of an application deployment solution for remote users than a PC replacement solution.

And let’s not forget that the PC paradigm is changing.  It’s expected that the ownership of the business PC will change from the business to the end user.  In fact it’s already happening.  The business can still retain some sort of control and protect itself using things like NAP and port access control.

Feel free to post a comment on what you think about what’s going to happen.

2009
12.10

How does Virtual Machine Manager know where to locate a virtual machine when it needs to migrate or you go to create one? 

If you don’t have VMM to manage your Hyper-V hosts then it becomes either 100% manual (when you manually migrate or create a VM) or 100% automated when there is a failover.  There is no in-system intelligence involved.

VMM does it very differently using Intelligent Placement.  The basic premise is that VMM monitors key resources on the Hyper-V hosts that it manages.  Using an algorithm that is either the default or one customised by you, it will take those resources into account and know where to place a VM in a number of scenarios:

  • (Automated Placement) When a VM is created in the self-service console it is automatically placed on a host by VMM based on the host ratings.
  • (Manual Placement) When you create a VM in the admin console it will recommend a host for you to choose. 
  • (Automated Placement) When there is a host failure VMM will use Intelligent Placement to move the VM to the highest rated host.
  • (Manual Placement) When OpsMgr and PRO tips initiate an alert, VMM will use Intelligent Placement to relocate VM’s to the host with the most available resources, i.e. the highest rated host.
  • (Automated Placement) When you drag and drop a VM to a host group the VM will be automatically placed on a host in that group based on host ratings.

You can alter how the Intelligent Placement algorithm works on your VMM server.  There are two basic models:

  1. Resource Maximisation: This is the model you take when you want VMM to make the very most out of each and every host.  VMM will try to place as many VM’s on a single host as is reasonable.
  2. Load Balancing: The goal here is to get the very best performance from your VM’s that you can.  VMM will locate VM’s in an effort to balance the resource utilisation across all hosts.

There are 4 basic resource types that will be utilised in the algorithm.  There is a slider to allow you to prioritise these resources when they are evaluated:

  1. CPU
  2. Memory (RAM)
  3. Disk I/O capacity
  4. Network capacity

To be honest I think most people will choose a load balancing model and will prioritise CPU.  Disk I/O and network capacity probably come next depending on where your bottlenecks are.  Those few going with the maximisation model will probably prioritise memory because it then likely becomes the bottleneck resource.

How do these host ratings get calculated?  VMM measures the resources of the host around every 10 minutes.  There are circumstances that change the available resources of a host and thus the rating of the host.  These are:

  • New Virtual Machine
  • Deploy Virtual Machine
  • Store Virtual Machine
  • Migrate Virtual Machine
  • Delete Virtual Machine
  • Virtual Machine Turned On
  • Virtual Machine Turned Off, Stopped, Paused, Saved State

The host is rated only when a VM is to be placed.  The gathered information is used to compare the host against the resources required by the new/moved virtual machine.  A rating is generated, anywhere from 0 stars to 5 stars in half-star increments.  The host ratings do not involve comparing and contrasting hosts.  They simply show how suitable each host will be based on empirical data and an estimation of what resources that VM will require in the future.  In automated scenarios the host with the highest rating will be chosen.  In manual scenarios it’s up to the administrator to agree with or reject the recommendation.

A number of circumstances can cause a host’s rating to be zero stars, i.e. VMM believes the placement to be unsuitable:

  • There is not enough RAM available for the VM you want to place on a host.
  • There is not enough available storage for a VM, e.g. a Windows Server 2008 Hyper-V cluster does not have an available LUN for the VM.
  • The virtual network the VM is configured to use is not available on the host.
  • Some advanced VM configuration is not supported by the host, e.g. advanced networking or high availability.  You can still force a placement in this scenario by changing that setting when prompted by the wizard.
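Pulling the weighted rating and the zero-star rules together, the logic is roughly this (my own Python sketch of how I understand Intelligent Placement to behave; the field names and the scoring formula are my assumptions, and VMM’s actual internal algorithm is certainly more sophisticated):

```python
def rate_host(host, vm, weights):
    """Return a 0-5 star rating in half-star steps, or 0.0 if the
    host is disqualified outright (my simplification of VMM's rules)."""
    # Hard disqualifiers -> zero stars, per the list above.
    if host["free_ram_mb"] < vm["ram_mb"]:
        return 0.0
    if host["free_storage_gb"] < vm["storage_gb"]:
        return 0.0
    if vm["network"] not in host["virtual_networks"]:
        return 0.0

    # Weighted headroom across the four resource types on the slider.
    headroom = {
        "cpu": 1 - host["cpu_load"],
        "ram": (host["free_ram_mb"] - vm["ram_mb"]) / host["total_ram_mb"],
        "disk_io": 1 - host["disk_io_load"],
        "network": 1 - host["net_load"],
    }
    score = sum(weights[r] * headroom[r] for r in headroom) / sum(weights.values())

    # Snap to half-star increments between 0 and 5 stars.
    return round(score * 5 * 2) / 2

# A load-balancing admin prioritising CPU, as discussed above.
weights = {"cpu": 4, "ram": 2, "disk_io": 2, "network": 1}
host = {"free_ram_mb": 8192, "total_ram_mb": 16384, "free_storage_gb": 500,
        "virtual_networks": {"LAN"}, "cpu_load": 0.2, "disk_io_load": 0.3,
        "net_load": 0.1}
vm = {"ram_mb": 2048, "storage_gb": 40, "network": "LAN"}
print(rate_host(host, vm, weights))  # 3.5
```

In an automated scenario you would simply compute this for every managed host and pick the highest rating; in a manual scenario the ratings are what the wizard shows you to choose from.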

Intelligent Placement is an estimation based on empirical data combined with the tuning of the algorithm.  It’s up to you to tune that algorithm to suit the VM’s, hosts and business requirements of your organisation.  VMM will then do its best by making recommendations to you when you move/create a VM or when an automated action must be performed by VMM.

2009
12.10

I’ve just read a forum post where a VMware expert (a real one) has been reporting issues with the recent vCenter 4 and ESX 4 updates.  The latter one is scary; it can kill a host and lose some of your VM’s.  The problem happens on ESX if you manage the host using 3rd party agents.  VMware is advising that you remove the agents before an upgrade.  Another article is posted about the vCenter issue.

This is a bad play by VMware.  They’ve built up a loyal customer base but dodgy releases like this will get people interested in the possibilities of using VMM 2008 R2 to V2V migrate their VM’s onto a Hyper-V platform.

The expert in question has been advising people to stay clear of new VMware releases; let someone else test them on their own production environment.

2009
12.10

Microsoft updated this tool to include support for Windows Server 2008 R2 and VMM 2008 R2.  What does it do?


What it does is allow you to update VM’s that are offline and stored in your VM library.  A scheduled job runs, deploys the VM, updates it and stores it back in the library.

It does not work with templates, unfortunately.  I have no need to store VM’s in the library but I do have a tidy little collection of templates stored away (MS just refers to them as VHD’s; templates are a different thing altogether in VMM-speak) and I have no choice but to update them by hand.

Version 2.1 of the tool now works with System Center Virtual Machine Manager 2008 R2, System Center Configuration Manager 2007 SP2, and Windows Server Update Services 3.0 SP2. The tool also supports updating the Windows® 7 and Windows Server® 2008 R2 operating systems.

2009
12.10

Was released by VMware last month.  It’s 814MB in size.  Hmm.

2009
12.10

I’ve had nothing but problems with my broadband since switching to Vodafone Ireland.

Yesterday I wanted to get onto the USA ESTA site to apply for a travel visa for my trip next year.  No matter what I did I could not get onto the site.  I reset the router.  I swapped out the router.  I changed the Vodafone DNS settings to OpenDNS.  I verified OpenDNS was OK by getting to the site from our Data Centre where I have OpenDNS configured as the primary DNS server for some of our systems.  I tried getting onto the site from 4 different physical Windows machines in my house and a Windows XP VM (IE6).  All failed to load the page.  The only thing left is the Vodafone network.  That leaves me no choice but to open a call with the dreaded “customer service”.

I got through a maze of automated questions designed to encourage you to end the call without speaking to someone.  Eventually I spoke to Sean Hunt, a young man who clearly depended on a menu system and had no logical understanding of the failings I described.  His requests:

  • Can you reset the router?  I did that this morning and even replaced it with a non-Vodafone router.  No resolution.
  • Can you change the DNS to Vodafone settings? That’s what they were by default.  I even switched them to DNS settings that I verified were working OK. 

Sean didn’t know what to do now.  I asked if this could be escalated.  The answer was no.  Could I speak to his superior?  No, he could arrange a call back.  I know I’ll never get a call.  In this case it appears that Vodafone’s customer service is not set up to be able to figure out what to do when the problem is not inside the house – as it clearly isn’t in my situation.

Why the hell can’t Vodafone Ireland do customer service right?  This is the second such situation I’ve had with them in the last few months.  I’m getting very tired of this.

2009
12.10

A community technical preview (i.e. a pre-beta, probably buggy) release of MAP version 5.0 has been released on Connect by Microsoft.  MAP is a free set of tools and guidance on how to prepare for a set of technologies, e.g. Windows 7, Windows Server 2008 R2, Hyper-V, etc.  Version 5 adds:

  • Heterogeneous Server Environment Inventory for Technologies including Windows Server, Linux, UNIX and VMware
  • Ability to determine usage of deployed  System Center Configuration Manager, a member of the Core Client Access License (Core CAL) Suite.
  • Office 2010 Readiness Assessment.
2009
12.10

2009; what an “interesting” year – and I mean the Chinese “interesting”.  In Ireland we continued our triple recession.  The IT biz pretty much froze (died for some).  It rained pretty much non-stop.  And we got loads of new taxes.

2009 was a big year for IT Pros in terms of changes and new technologies.  Everyone predicted that virtualisation would be the big changing force this year.  They could hardly get that one wrong.  Citrix, Microsoft and VMware all released new products on the world, all claiming to be the best.  The one thing we can say is that in a time of recession, anyone who did virtualisation right would save money, not up front but on the running and long term costs.  It added flexibility and allowed IT to react more quickly.

Hardware sales suffered badly.  That really started in 2008.  Virtualisation means that we need fewer servers but we do change our storage methods.  As a result we saw prices rocket.  I’d estimate that HP added up to 50% to server costs in Ireland when they released the ProLiant G6 line up.

The cloud and Software-as-a-Service (SaaS) were everywhere.  Everyone’s an expert.  The one thing that is certain is that the majority of new ventures are selling their services online.  Hosting companies are changing their marketing.  Microsoft launched BPOS and Azure to try to tap into this market and to keep themselves relevant as the sole software/OS vendor with a cloud “alternative”.  I say “alternative” because MS online services integrate with traditional on site installations.  That’s set to continue with the recent merger of the Server and Azure divisions.

IPv6 was not the tidal wave that was predicted by some.  ISP’s are far from ready for it, so until they are, it’s not going to be something we care to deal with.  Add in all the new terminology (renaming for the sake of renaming – sounds familiar, eh?) and the lack of clear, widespread education, and it becomes scary for us IT Pro’s.  IPv4 shortages don’t seem real to us in the western hemisphere.  I know that anyone seeking addresses just has a few hoops to jump through and they get as many addresses as they want.

Server Core installations were a flop – at least locally here in Ireland and with international people I’ve spoken to.  The lack of manageability in the real world kills it.  Hardware management s/w requires a GUI and fixing things when they go wrong becomes a web search nightmare.  The reason we adopted Windows was ease of use.  Most folks I know (actually all of them I believe) who run Windows Server 2008 are running full installations now after dipping their toes in Server Core.

On the product side we can’t say anything without mentioning Windows 7.  Windows 7 is being referred to by many as what Vista should have been.  Ardent haters of Vista are loving Windows 7.  And finally we get a quiet admission by some MS folks that they got Vista wrong.  MS didn’t have effective 2 way communication with the community or their customers.  We know they value the opinions of the USA Fortune 500’s but even they didn’t widely adopt Vista – those headline whitepapers and announcements about big corporates adopting Vista are bull.  Corporates take forever to implement change.  What would have been more correct was that their software assurance entitled them to run Vista on every machine and a few IT/marketing people probably had it running.  Windows 7 is proving to be different.  I’m hearing stories of widespread implementation by international organisations.  And 64-bit computing on the desktop appears to really have arrived.  I think MS got it right by listening to everyone, not just the head in the clouds opinions of Fortune 500 Frankie.

Windows Server 2008 R2 also arrived to less of a fanfare.  There are two stories to Server 2008 R2, a clear evolution from the technically successful Windows Server 2008.  The first is “better together”.  Most of the new features included in the Ultimate/Enterprise (only) editions of Windows 7 are only available when you pair them with Windows Server 2008 R2.  The other big story for Windows Server 2008 R2 is virtualisation.  Hyper-V now includes Live Migration (aka VMotion) and Cluster Shared Volumes (CSV, aka VMFS).  New improvements and hardware integration bring better performance and raise the theoretical scalability limits.

Exchange 2010 came out with a pathetic whimper at TechEd Europe in Berlin in November at a keynote launch event that even the best spin-meisters in Microsoft couldn’t sell to us.  The keynote was dreadful and universally slammed by the delegates.  Half the audience walked out by the midpoint.  Exchange 2010 was launched but it was easy to miss.  Apparently it’s pretty fantastic and there’s lots of early interest.  I think the real impact of it probably won’t be seen until June 2010 when Office 2010 is released.  It’ll probably be joined by SharePoint 2010.  I don’t see why MS aren’t getting these obvious timings sorted out.

What about 2010?  What’s the big story going to be?  Damned if I know.  My crystal ball has a crack in it after it fell through and shattered my Ouija board.  I think it bumped me on the head on the way down, hence the concussed rantings.

Virtualisation will continue to be a big story.  MS Partners are starting to accept Hyper-V as a viable platform thanks to Windows Server 2008 R2.  I talked with one reseller this week who are hardcore VMware resellers thanks mainly to their tight partnership with HP (who make a mint from VMware support contracts).  They’ve started to lose deals now because alternative providers are offering Hyper-V and lower costs.  I’m hearing more and more that service providers are expanding or introducing virtualisation, and particularly, Hyper-V skills.

Where Hyper-V goes, System Center goes.  That means Virtual Machine Manager (VMM) 2008 R2 and Operations Manager (OpsMgr) 2007 R2, both of which were released this year.  The danger here is that non-expert consultants will be deploying non-customised implementations and not handing over skills to the on-site staff.  System Center is the real difference maker for Microsoft virtualisation.  It not only manages the virtualisation but can take control of everything else.

Data Protection Manager 2010 is currently a beta release.  It appears to have evolved based on customer feedback.  It is also adding CSV support for Windows Server 2008 R2 Hyper-V clusters, something that is missing from most file system integrated management and security solutions right now.  I think the timing of the release is a bit late: expect it around April 2010.  Ideally MS should have released a CSV and Live Migration aware backup solution at the same time as the new version of Hyper-V.

We saw the first hints of Configuration Manager v.Next.  I would guess public betas will appear in the Spring of 2010 and it might make it out by TechEd Europe in November.  That’s purely a guess.

For me 2009 has been extremely busy.  Early in the year I was focusing on running the Irish Windows User Group.  We ran events or were involved in promoting events every month of the year and covered lots of material.  We finally found a sweet spot on when to time our events to get good numbers attending and we added a virtual audience by using LiveMeeting to webcast live and record the events (thanks GITCA!).  We helped promote the Microsoft Ireland TechDays Tour for IT Pros in the Spring and toured Galway, Cork, Dublin and Belfast.  I got to speak about Windows 7, Windows Server 2008 R2 and Hyper-V/VMM.  The speaking continued at user group events, Minasi Forum 2009 in Virginia Beach, PubForum in Dublin, the UK/IE MVP open day in Reading and I got to present at the MS Ireland community launch events for Windows 7, Windows Server 2008 R2 and Exchange 2010 in Galway, Cork, Belfast and Dublin.  That first Galway event was “fun”.  Everything fell apart in the morning for the afternoon event.  Lots of ingenuity, hard work and some seat-of-the-pants stuff got everything working in a great interactive event.  My lasting memory of Cork was pushing heavy cases full of PC’s and monitors and stacked with servers and iSCSI storage around a maze of a hotel, dodging stairs, walking miles, etc so that we could avoid stairs that separated two rooms that were 50 metres apart.

In the Summer I started work on my four chapters in Mastering Windows Server 2008 R2 for Sybex/Wiley.  That took a lot of time.  A lot of the material was originally written for two terminated Server 2008 books and had to be re-written to focus on Server 2008 R2 but also include Server 2008.  I had no idea how much editing and reviewing would follow.  I literally finished the last of it (that I know of) last week.  The book will be released in February, according to various sites that I’ve browsed.  The Mastering Windows Server books are usually the top selling server books so it’s a great honour that Mark Minasi asked me to be involved and that I get to be listed as an author.

In the summer I was renewed as an MVP (one of 4,200 of them globally, 12 in Ireland) by Microsoft.  My expertise was switched from Configuration Manager to Virtual Machine.  It was appropriate; I hadn’t worked with, and therefore hadn’t spoken or written about, ConfigMgr when I was granted my status for it.  I’d been all Hyper-V and VMM and that has continued.  I work with it and so I write about it and speak about it, sharing my experiences and real world insights.  Later in the year I was accepted as one of 140 members of STEP, a MS Springboard program.  That got me over to TechEd Europe in Berlin where I staffed the Springboard booth for four half days.  And more recently I’ve been added as a member of the System Center influencers program.

The book may be finished but I’m still flat out.  Work is busy.  After work I’m blogging (the blog went from 125,000 hits to over 250,000 hits this year and the RSS feeds are red hot), tweeting or organising stuff for the Windows User Group (with a focus lately on Windows 7 and Windows Server 2008 R2).  Add a new engagement where I’m doing some technical reviewing on a virtualisation project, and my time is well consumed.  I’ve had almost zero time behind my camera or to get out and about.

I hope 2009 has been a good one for you and hope 2010 will be too.

2009
12.09

Students, Interns and IT

We just had a government national budget today – I won’t be able to sit for a while.  I’ve been listening to the radio most of the day while working and I heard one good point from the talking heads.  One “expert” said that some of the new tax funds should be pooled for training of future new skills.  Here’s the scenario.

We have around 12% unemployment.  IT has gone into recession.  All sectors have been making staff redundant.  There’s nowhere for students to go after graduating from college (what we call non-university third-level institutions) or university.  There’s nowhere for re-educated working people to go to get that first piece of experience.  If our economy is to succeed then we’ll need these additional skills.  We can’t have a 2-5 year gap in skills that never developed.  I doubt Ireland is unique in this.

For any recent/near graduate who might read this I have some bad news for you.  Sure, you’ve gotten A’s in all your exams and your parents are proud and your lecturers called you a genius.  But …

You Know Nothing!

I’m sorry; I know that hurt, but it’s true.  Your college education simply laid a foundation.  You probably learned about the OSI model in 5 different classes over 4 years like I did.  You probably learned about Token Ring like I did.  The reality is that you have few skills right now that a real business can use.  You’re simply putty that something will be made from.  For the first part of your career you will likely achieve little of consequence.  The same is true for anyone who got an MCSE (or whatever the hell that’s called these days!) on a re-education program.  You are what we call a paper-MCSE: lots of facts, and answers to questions, that have little real-world use.  It’s a good start but, unlike those TV adverts in the UK, you will not be earning £45,000 per year from day one.

Here’s how it normally goes.  You get a job where you answer phones, run cables, etc.  You’ll pick up a little as time goes by.  If you’re good enough then you’ll be delegated with more work, maybe a small project.  Then a larger piece of work and a role in a larger project.  It takes time; it certainly doesn’t happen overnight.

But this is where the problem is right now.  NO ONE is hiring.  There is next to no work out there.  I can’t imagine how bad it must be for this past year’s graduates.  They must feel awful after spending 4 years in classes, studying and doing continuous assessment and exams.  My class was mostly employed before we left our last exam – I got my job offer the week before our finals.  Graduates were in amazingly short supply back then, so we were thrown right in.  I was doing exams one week and the next I was porting code from Solaris to HP-UX in a team of 1 leader and 3 graduates.  I just can’t imagine your predicament.

Anyway … back on topic.  The expert on the radio suggested we needed a national program for interns.  Companies can’t afford the entire cost of staff.  The country can’t afford to have people sitting on the dole doing nothing and learning nothing.  What if we had a program where we met halfway?  What if the person worked 20 hours a week and spent the other half of the week on work-related study?  The government could reduce dole payments fairly and the company could have a part-time staff member at a low cost.  The idea here is that eventually the recession will end.  Hopefully the company would then be in a position to hire and would want to take on the intern as a full-time employee at full cost.

I would add to the program: it should be contingent on the person continuing to learn and achieving professional certification/passing grades in the training program.  That protects those who are investing in the intern, i.e. the taxpayer.  The company should also be required to give the intern real work, i.e. not flattening boxes.  I’m not ruling out coffee making, because we once had one poor chap who had to get us coffees and breakfast sandwiches every Friday morning for nearly a year.  It was a tradition in the department that the rest of us enjoyed :)

Who gains?

  • The country, because an educated person is developed into a skilled person.  The person we invested in throughout their education stays in the country and hopefully becomes another taxpayer.  They’ll also add to the educated workforce pool, adding to our attractiveness for inward investment.
  • The company, because they’ve been able to take on skills and mould them.  That intern might just become a full-time employee if the company can turn a corner.
  • Finally, the intern.  They get real-world work, not just lessons from 20-year-old IT books.  They’ll work with more skilled people in their field and learn from them.  Potentially they’ll gain professional certifications while working and can see the relevance of the questions and answers, making the neurons create relevant pathways.

I would like to see something like this come from the additional taxation we’ll be paying from midnight tonight.  But I’m afraid that instead of investing in these graduates we’ll see continued wastage.

If you are still in college now and are interested in a career in IT then do a few things to give yourself a leg up:

  • Try to get a PC or laptop with as much disk and RAM as possible.  This will allow you to run some sort of virtualisation for labs.
  • Seek out your Microsoft representative for your college and get your hands on MSDN Academic.  That gives you access to all the MS licensing for test and development purposes.  It’s great for labs and learning.
  • Find out if your college has a professional certification program.  If you have the time then do what you can on the side.
  • Check out the library for certification prep books.  MS Press is a good start.
  • Yes, look at the Linux stuff too.  Learn about Cisco networking and firewalls (Todd Lammle is a good read).  If you’re a dev then learn about things like .NET, Silverlight, C# and Azure.
  • If you’re post-college then see what the local government training agencies can do.  Bring in suggestions.  FÁS in Ireland used to do MS training and exams.  I know they cut down the number of their exam centres a lot about 5 years ago, which was a pity – even though I was working, I did my exams in there because they usually had more openings and their office was close to where I was living.

Best of luck if you are in this situation and keep on learning.  I promise it works out for the best if you do – it worked for me when I was out of work for a while and it was what pushed me up to be a senior engineer instead of just another IT admin type.  Hopefully the same will happen for you in whatever field you want to work in.

2009
12.09

I am seriously impressed with the folks in Redmond tonight.  I brought up something in an MVP chat that I thought might be an issue.  Within 60 minutes a senior engineer who was involved called me up on the phone and gave me a clarification to make sure I could give accurate information on the subject in question.  I can’t imagine anything like that from any company other than Microsoft.

Thanks Carmen and Mike!

2009
12.09

Microsoft System Center Virtual Machine Manager (VMM) 2008 R2 includes the ability to do a P2V (physical-to-virtual) migration of Windows computers.  This is usually the last critical step in a normal virtualisation project – taking the physical servers that an audit identified as candidates and converting them into virtual machines.  The process scans the contents of the hard disks and converts them into VHDs.  The machine specification is converted into a virtual machine configuration.

The first step in all of this really begins when you are doing a feasibility study or sizing your virtualisation hosts and storage.  You’ll run something like Microsoft’s MAP (Microsoft Assessment and Planning) toolkit.  Alternatively, if you already have Operations Manager 2007 deployed, you can install VMM 2008 R2, wait a while, and then run the Virtualisation Candidates report.  That takes information from the continuous performance monitoring provided by OpsMgr.  Or you can just run individual performance reports from OpsMgr – but you need to be careful to see both the details and the big picture when manually interpreting the statistics.  And be careful about the process OpsMgr uses to store long-term data: spikes or sudden drops may be hidden by the data aggregation.

Once you have your Hyper-V 2008 R2 platform and VMM 2008 R2 tested, documented and in production then you can start your P2V process.

Here’s a list of the supported operating systems:

Operating System | VMM 2008 | VMM 2008 R2
Microsoft Windows 2000 Server with Service Pack 4 (SP4) or later (offline P2V only) | Yes | Yes
Microsoft Windows 2000 Advanced Server SP4 or later (offline P2V only) | Yes | Yes
Windows XP Professional with Service Pack 2 (SP2) or later | Yes | Yes
Windows XP 64-Bit Edition SP2 or later | Yes | Yes
Windows Server 2003 Standard Edition (32-bit x86) | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 Enterprise Edition (32-bit x86) | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 Datacenter Edition (32-bit x86) | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 x64 Standard Edition | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 Enterprise x64 Edition | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 Datacenter x64 Edition | Yes (requires SP1 or later) | Yes (requires SP2 or later)
Windows Server 2003 Web Edition | Yes | Yes
Windows Small Business Server 2003 | Yes | Yes
Windows Vista with Service Pack 1 (SP1) | Yes | Yes
64-bit edition of Windows Vista with Service Pack 1 (SP1) | Yes | Yes
Windows Server 2008 Standard 32-Bit | Yes | Yes
Windows Server 2008 Enterprise 32-Bit | Yes | Yes
Windows Server 2008 Datacenter 32-Bit | Yes | Yes
64-bit edition of Windows Server 2008 Standard | Yes | Yes
64-bit edition of Windows Server 2008 Enterprise | Yes | Yes
64-bit edition of Windows Server 2008 Datacenter | Yes | Yes
Windows Web Server 2008 | Yes | Yes
Windows 7 | No | Yes
64-bit edition of Windows 7 | No | Yes
64-bit edition of Windows Server 2008 R2 Standard | No | Yes
64-bit edition of Windows Server 2008 R2 Enterprise | No | Yes
64-bit edition of Windows Server 2008 R2 Datacenter | No | Yes
Windows Web Server 2008 R2 | No | Yes

You can use the Microsoft Virtual Server 2005 Migration Toolkit (VSMT) or third-party solutions (cloning software, e.g. from Acronis, is said to be a successful approach) for converting computers running Windows NT Server 4.0.

As if that isn’t complicated enough, you then have to consider how you are going to do the P2V process.  There are two approaches:

  • Online: VMM will deploy an agent to the machine to be converted.  This is a temporary installation and does not require a license for the agent.  The agent scans the machine for suitability for an online conversion.  Upon success it will then use the Volume Shadow Copy Service (VSS) to grab the files cleanly from the computer and create a new VHD for each disk in the machine.  VSS is used so that things like OS files, Exchange files, SQL files, etc. can be copied cleanly.  There are a few catches with this.  (1) Not every version of (older) Windows has suitable VSS support; VSS is still a relatively new technology.  (2) A P2V conversion is not instant.  It takes time, during which some files, particularly database files like Exchange or SQL, will change after they have been copied.  That means the new VM won’t have all the data.  (3) Not all server applications, e.g. MySQL or Oracle, have a VSS writer, so they cannot be grabbed cleanly.  Once an online conversion is complete the source computer is left running.
  • Offline: With this process VMM deploys a boot image (Windows PE) to the machine to be converted.  The machine is reconfigured to boot from the boot image, and the P2V job then runs.  The complication with this approach is that you must ensure that all the required drivers for the original physical machine are in your boot image.  You can use the “Use storage and network drivers from the following location” option to supply additional drivers.  Because Windows PE is used, the physical machine must have at least 512MB RAM.

Should you use an online or an offline conversion process?

Operating System on Source Computer | P2V (Online) | P2V (Offline) | Not Supported
Microsoft Windows 2000 Server Service Pack 4 (SP4) |   | X |  
The Windows Server 2003 operating systems with Service Pack 1 (SP1) | X | X |  
The Windows Server 2003 R2 Standard Edition operating system | X | X |  
The Windows XP operating systems with SP1 | X | X |  
The Windows Server 2003 R2 Standard x64 Edition operating system |   |   | X
The Windows XP Professional x64 Edition operating system |   |   | X
The Windows Vista operating system |   |   | X
The Microsoft Windows NT Server 4.0 operating system |   |   | X

Again, you can use a cloning solution to work with those unsupported operating systems.

Here’s another basic rule of thumb:

  • Any machine with static data (web server) can be safely done with an online conversion.  The server stays operational and responsive to users.
  • Any machine with changing data (domain controller, Exchange, database, file, etc) should be converted with the offline approach to avoid data loss.  It does mean taking the server offline during an announced outage window.
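The rule of thumb above can be sketched as a tiny decision helper (purely illustrative; the server names and the volatile-data flag are hypothetical examples, not part of any VMM API):

```python
# Sketch of the rule of thumb: servers whose data changes constantly
# get an offline conversion to avoid losing writes made during the copy;
# servers with static data can be converted online with no outage.

def p2v_method(has_volatile_data: bool) -> str:
    """Suggest a P2V approach for a server."""
    return "offline" if has_volatile_data else "online"

# Hypothetical example servers, not from any real inventory.
servers = [
    ("webserver01", False),   # static web content
    ("sqlserver01", True),    # constantly changing database files
    ("fileserver01", True),   # live file shares
]

for name, volatile in servers:
    print(f"{name}: use {p2v_method(volatile)} P2V")
```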

Is there any preparation work you should do?  Yes.  Remove unwanted files.  Defrag the hard disk (schedule it in Windows Scheduled Tasks, e.g. defrag C:).  Finally, remove any hardware-integrated software; for example, an HP server should have the HP ProLiant Support Pack removed prior to conversion.  Failing to remove hardware-integrated software can cause the new VM to blue screen or have failing services at start up.  You can do a safe mode boot and uninstall the relevant software after the P2V conversion.

When the process runs a new dynamic VHD is created by default for each physical hard disk.  You cannot reduce the size of these disks.  If you need them to be smaller then use a 3rd party solution to do this before the conversion.

When the job is complete VMM will add the Integration Components.

What about a strategy?

  • Identify virtualisation candidates.
  • Identify required drivers for offline conversions and add them to your VMM driver pool.
  • Prepare the physical computer, e.g. do a defrag and double check anti virus, etc.
  • Make sure all backups of the physical computer have worked OK and that you can recover from any disaster.
  • Maybe do an online conversion to test the process for the server in question.  Place the new VM on a test virtual network.  Make sure it boots up OK and performs OK.  This won’t affect the production physical server.
  • Perform final P2V preparations, e.g. uninstall hardware integrated software.
  • Perform a suitable conversion (probably offline) of the physical computer.  Leave it offline.  Bring the VM online and test it.
  • Put the new VM into production.
  • Make sure backups are working OK for the new VM.
  • Leave the physical server powered off for a pre-agreed timeframe before removing/recycling the physical computer.  You never know what will happen, e.g. you might need to reverse the process with a V2P conversion (not included in VMM) of the server.

Notes:

  • FAT/FAT32 cannot be converted using Online P2V
  • You can do a P2V of virtual machines; however, VMware users will want to use the V2V approach.
  • You cannot do a P2V of an in-place cluster.  However you can convert each cluster node and then create a new failover cluster.


2009
12.09

Microsoft has released a beta tool for sizing storage requirements for System Center Data Protection Manager 2010.  It’s 3 Excel spreadsheets.

“These DRAFT storage calculators are for use with those planning DPM 2010 (beta) deployments – with specific calculators for Hyper-V, SharePoint and Exchange environments”

2009
12.09

Microsoft has announced that the Windows Server and Tools division will merge with Azure online services.  This means that future developments can be integrated.  We’ve already heard that VMM v.Next will allow you to migrate VMs from your Hyper-V private cloud up to Azure.  And with bolt-ons we know that we can integrate an internal Active Directory with things like Exchange Hosted Services and BPOS.  It looks to me like MS will make this a more seamless approach, probably leveraging Active Directory Certificate Services.

Interesting times ahead!

2009
12.08

Microsoft releases a new operating system and everyone wants to make a headline.  It happened 2 years ago and it’s happening again.

This time some people are claiming they’ve broken BitLocker.  Their attack vectors work two ways:

  1. Attack the machine while it’s running and a user is logged in.  That way they can scan the RAM for cached BitLocker keys.  If you have the machine while it’s logged in then you have access to the data.  Pointless.
  2. Gain access to the machine to attack the hardware.  Install something to capture the PIN as the machine boots up.  Then steal the machine or gain access to it again and use the captured data to access the hard disk data.

That last one would be a threat, admittedly.  It’s a far-fetched one for laptops but it is feasible.  I’m guessing that BitLocker with a smart card would beat that one, assuming the smart card is not kept with the laptop.  We know how lazy people can be, so – eek.  And potentially the latter approach is one that could be used to attack on-premises physical servers.

I guess we’ll see.

2009
12.08

I was re-installing the WSUS role on our security server (W2008 R2) today and hit this error as soon as the installation started:

“The update could not be found”.

It’s a bit of a weird one for a role installation.  I hadn’t the foggiest so I did a quick search and found the solution:

  • Delete the “WindowsUpdate” key from the registry at HKLM\Software\Policies\Microsoft\Windows.  I’d recommend you export this key to a .reg file first, to be safe.
  • Restart the Windows Update service.

Now you can go ahead and install WSUS.

The problem and fix apply to previous versions of Windows too.  The issue is that the installer checks Windows Update but finds a circular reference: you’ve uninstalled WSUS from the server, yet the server is still configured to update from itself.  How can it?  Make sure you do the install before Group Policy applies those settings again during an automatic refresh.

2009
12.04

The Minasi Forum 2010 conference date and location (Virginia Beach, VA, USA) have been announced.  The conference will run from Sunday May 2nd until Wednesday May 5th at the Founders Inn resort.  More details on sessions and speakers will be released soon.

2009
12.04

I’m speaking tomorrow on behalf of my employers at the Greenhouse Business Camp for start-up businesses.  I’ve a short session on at 12:15 and I’ll be talking about the challenges of going online for a start-up business.

2009
12.04

This morning I spoke at my first Microsoft Springboard STEP event.  The subject was “Deploying Windows 7 and Windows Server 2008 R2” and featured WAIK/WSIM, Windows Deployment Services (WDS) and Microsoft Deployment Toolkit (MDT) 2010.  We had a nice turn out and apart from my XP VM acting a bit funny at the end, all went well.  It was very much a demo, demo, demo session.

I recorded the webcast.  You can see the entire thing, unedited, right here.  It will be available for 365 days from now.  And here is the slide deck:

Thanks again to the folks at Microsoft Ireland for organising the venue and for helping to spread the word and thanks too to all who came along or tuned in live.

2009
12.03

I’ve been running a “security” server for years in different jobs.  It’s a server that runs several security roles, for example, SUS and then WSUS, antivirus, certificate services, etc.  Very often these are different servers, quite unnecessarily eating up resources and licenses.

In my current job, our security server started life as an x86 Windows Server 2003 1U rack server.  Not long after the launch of our Hyper-V based private cloud, I ran a VMM 2008 P2V job to convert that machine into a virtual machine, freeing up the hardware for other purposes.  This was quite appropriate; these sorts of servers are usually very lightweight.

Earlier this year I decided to upgrade the machine to Windows Server 2008.  That was easy and safe.  I took a snapshot (knowing I had space on the LUN) and performed the upgrade.  Now it was running W2008 x86.  The upgrade went well.  If it hadn’t I could have easily applied and then deleted the snapshot to return the machine back to W2003.

I now faced a challenge.  The next upgrade would be to Windows Server 2008 R2.  W2008 R2 is a 64-bit operating system and you cannot upgrade from 32-bit to 64-bit Windows.  There was only one choice – a rebuild.  Virtualisation made this so easy – and VMM 2008 R2 made it easier.

We have a Hyper-V lab server.  I use it to prep new images, test security updates, and try out scenarios and solutions.  I deployed a VM running W2008 R2 Enterprise edition onto the host and configured the VLAN ID for our test network.  Enterprise edition would allow me to run customised certificates for OpsMgr usage.  Here I could specify the computer name to be the same as the machine I would eventually replace and prepare it identically to the original – except for the operating system version and architecture.  On went SQL Express 2008 SP1 and our antivirus, and I prepared those services.  Downloads, approvals, patching, etc. were all done.  Meanwhile, the production server was still operating away, with customers unaware it was about to be replaced.

Eventually it was ready.  I powered it down.  I removed the OpsMgr agent from the original server and then used VMM to move that VM elsewhere.  I used VMM to move the new VM onto the desired host.  All that was required now was to change the VLAN ID, boot it up, join it to our management network domain and deploy the OpsMgr agent.  10 minutes of service downtime in total to completely replace a server.  Not bad!  I went on to add Certificate Services after the domain join.

I’m leaving the original VM to one side just in case there’s a problem.  If so I can bring it back – but that would then require some ADSIEDIT surgery to remove the certificate services configuration.  So far, though, so good.

2009
12.03

Microsoft has released guidance on how to perform a bare metal (bare iron) recovery of W2008 using System Center Data Protection Manager 2007 Service Pack 1.

“This technical article outlines the steps of using DPM 2007 SP1 alongside the Windows Server Backup (WSB) utility to provide a supported bare metal recovery of Windows Server 2008.

System Center Data Protection Manager (DPM) 2007 is a key member of the Microsoft System Center family of management products designed to help IT professionals manage their Windows Server environments. DPM is the new standard for Windows Server backup and recovery – delivering continuous data protection for Microsoft applications, virtualization, file servers, and desktops using seamlessly integrated disk and tape media, as well as cloud repositories. DPM enables better backups with rapid and reliable recoveries for both the IT professional and the end-user. DPM helps significantly reduce the costs and complexities associated with data protection through advanced technology for enterprises of all sizes. Using complimentary technologies in addition to DPM’s actual software, DPM 2007 SP1 can perform a bare metal recovery (BMR) to restore an entire server without an operating system”.

2009
12.03

Microsoft has updated the Performance Tuning Guidelines document to include W2008 R2.  It covers all aspects of the server operating system, but I’m going to focus on Hyper-V here.

The guidance for memory sizing for the host has not changed.  The first 1GB in a VM has a potential host overhead of 32MB.  Each additional 1GB has a potential host overhead of 8MB.  That means a 1GB VM potentially consumes 1056MB on the host, not 1024MB.  A 2GB VM potentially costs 2088MB on the host, not 2048MB.  And a 4GB VM potentially costs 4152MB, not 4096MB.
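Those figures are easy to sanity-check with a few lines of arithmetic (a quick sketch, not anything taken from the tuning guide itself):

```python
# Potential host memory cost of a VM, per the figures quoted above:
# 32 MB of host overhead for the first 1 GB assigned to the VM,
# plus 8 MB for each additional 1 GB.

def host_cost_mb(vm_ram_mb: int) -> int:
    """Potential host memory cost (MB) of a VM with vm_ram_mb of RAM."""
    gb = vm_ram_mb // 1024
    overhead = 32 + 8 * (gb - 1)  # first GB costs 32 MB, each extra GB 8 MB
    return vm_ram_mb + overhead

for ram in (1024, 2048, 4096):
    print(f"{ram} MB VM -> {host_cost_mb(ram)} MB on the host")
# 1024 -> 1056, 2048 -> 2088, 4096 -> 4152, matching the figures above
```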

The memory savings for a Server Core installation are listed as 80MB.  That’s seriously not worth it in my opinion given the difficulty in managing it (3rd party software and hardware management) and troubleshooting it when things go wrong. “Using Server Core in the root partition leaves additional memory for the VMs to use (approximately 80 MB for commit charge on 64-bit Windows)”.

RAM is first allocated to VM’s.  “The physical server requires sufficient memory for the root and child partitions. Hyper-V first allocates the memory for child partitions, which should be sized based on the needs of the expected load for each VM. Having additional memory available allows the root to efficiently perform I/Os on behalf of the VMs and operations such as a VM snapshot”.

There is lots more on storage, I/O and network tuning in the virtualization section of the document.  Give it a read.
