A while ago, I asked for some feedback on Hyper-V and VMM.  Some of the strong feedback came on the Linux guest side.  In particular, the integration components:

  • The lack of shutdown integration.
  • Only 1 virtual CPU supported.
  • The lack of time synch between host and guest recently affected me.

Never fear, MS was ahead of me.  Ben Armstrong just let the public know that new integration components are in the works and you can download these new beta (test) IC’s from Connect now.  The IC’s are for the usual supported Linux distros (SLES and RHEL).  Supported is different to “it works”, i.e. these will probably work just as well on Ubuntu and CentOS but MS cannot support them.  They will also support all Hyper-V variants.  The new features are:

  • SMP support for up to 4 virtual CPU’s (Yay!)
  • Shutdown integration between host and guest, e.g. host shutdown or from VMM
  • Clock synch between host and guest (no more runaway clock and NTP fixes)

Ben also says that the new functionality will be submitted to the Linux kernel.  Here’s hoping the Linux distros keep up to date.


Microsoft has said that they are seeing good results with Windows Server 2008 R2 and Hyper-V.  12 cores – I’m not surprised!!!!  Jeez, I have trouble making use of dual quad core CPU’s with the typical hosted virtual machine on our cluster.  I can’t imagine us needing 8 or 12 core CPU’s without some mad (and I mean laughing cuckoo smacking itself in the head with a frying pan mad) amounts of RAM.  But I guess that we are not the typical corporation.  The chips are also providing some new power efficiencies that sound like core parking.


Yesterday, MS Ireland held the local instance of the Virtualisation Summit that MS is running in many cities around the world.  It was keynoted by Ian Carlson, a senior program manager from Redmond (nice guy too).

The usual slide decks were presented, probably the first time many of the attendees (around 140 I think, standing room only) had seen them.  For those of us “on the inside” this can be a bit tiresome but that’s what happens when you attend every MS event going to get your free cup of coffee and pastry for brekkie!  The end of the morning session featured Gerry from Lakeland Dairies, an interesting case study because they make the most of System Center and use the Compellent SAN to replicate their VM’s across their campus for DR.  They are also a fine example of a company that had a plan and knew their requirements going into the project, allowing them to make good decisions.

After the break there was a split into desktop virtualization and server virtualisation.  *I must stop using Z’s in the American way – too much writing for Sybex*  Ronnie Dockery from MS and Citrix ran a breakout on desktop virtualisation and VDI.  Wilbour Craddock, a techie in the MS Ireland partner team, ran the server virtualisation breakout and went through a number of best practices and tips on a successful solution.  Maybe 60% went into the desktop room. 

I did the last 15 or so minutes in the server room, talking about our Hyper-V, OpsMgr, VMM and HP deployment at C Infinity.  I talked through the relevant bits of the infrastructure and had a cool snazzy animated slide deck to show how HP SIM, OpsMgr, VMM and highly available Hyper-V VM’s allowed for no interruption of service back in January when we detected a degraded memory board.  The HP SIM agent and OpsMgr management pack raised the alert, we used Live Migration to move VM’s from the host, HP (via RedStone) replaced the affected board within the 4 hour support response window, and we continued on without missing a beat.  Some talk of PRO was also in there.  I also stressed how Hyper-V with System Center makes this a solution for applications, which is what the business really cares about – not NIC’s and memory boards.

I haven’t posted the slide deck – animations don’t work on Slideshare, and to be honest, my slides are nothing but cue cards for me to rattle on until someone rings a bell to shut me up.

I talked to a few people afterwards and the response to the morning was positive.  I think a lot of people either got a fresh view on hearing about the complete solution (it’s more than “just” hardware virtualisation) or were happier after hearing the experiences of two Irish customers using the suites – not just the usual “Here’s XYZ Giganto Corporation from the USA or Germany” that Irish customers cannot relate to.  MS Ireland does a great job on that.


Been A Little Quiet

*cough* I’ve been out of the country for a few days.  I was on a panel at Eurocloud UK and then I had the chance to go on a trip to Scotland to do some photography.  And last week, I picked up a cold while judging a photo competition *sneeze*  You’ll have to understand that when I get a chance to do something like the below, I’m going off the air :-)


That’s a Golden Eagle devouring a Mountain Hare.  I’ve some writing to do this week that will also consume some time.  Hmm, I’m peckish; time for really spicy pizza.


A press release was issued today by Microsoft.  It has a whole bunch of new statements on the MS front, including licensing, new features, and requirements changes.  The big ones are:

  • Hyper-V Dynamic Memory will be added in W2008 R2 SP1.
  • A new smoother VDI experience for VDI customers in W2008 R2 SP1.
  • VDI licensing for SA customers won’t require an additional license for PC clients.
  • XP Mode will no longer require CPU assisted virtualisation.

That last one was a pain in the butt when it came to Virtual PC for Windows 7.  You had to dig deep to find out if your Dell, HP, etc, machine had a supported CPU.  And manufacturers like Sony quite recently produced machines that hid the functionality even when it was there.  This change by Microsoft removes the guesswork.

No schedules were announced.  Check out the press release to see all of the announcements.

Credit to Mark Wilson (MVP) for making me aware of this.


One of the things I need to do at home is do some Hyper-V clustering.  As you can guess, I have not won the lottery so a C3000 blade chassis with LeftHand storage is not on the cards.  If I do get around to this it will be done on a shoestring.  Here’s what I am considering:

  • HP Microtower 3010 * 2: I checked and the 2.8GHz CPU has Intel VT and DEP features.
  • Intel 1Gb NIC * 6: I want 4 NIC’s per host.  The 3010 has 3 * PCIe x1 slots.
  • A Netgear 16 port gigabit switch.
  • I’ll use an old PC as an iSCSI target.

I want Live Migration to work.  4 NIC’s (parent, private network, VM network and iSCSI) should do the trick.  The total price comes in at around €1500 retail including tax.

Any opinions on a better solution?


The after-work project I’m working on right now requires as many VM’s as I can throw at it.  I’ve got my previously mentioned Latitude E6500 laptop running W2008 R2 Hyper-V.  It’s also my domain controller and my VMM 2008 R2 server/library.  Not best practice but it’s fine for a domestic lab.

I need even more VM’s than I can run on there.  So I’ve got a HP ML370 G5 that was spare from work.  It’s got as much memory as I could scrape together and I put Windows Server 2008 R2 on it.  One problem: I do not have a wired house.  And I do not want to work beside the noisy server.  I’ll be using Office on my laptop for documentation and I can sit with that in my sitting room.  The server will stay upstairs in my office.  Just how will they communicate?

That’s easy.  I have an old Belkin 11G wifi NIC which I put into the ML370.  Windows detected it as a Broadcom.  That ain’t right but it works!  I’m going to set the server up as a member of my laptop’s domain.  That will allow me to put a VMM agent on there for remote management.

My VM templates are small enough (dynamic VHD’s) but I probably won’t want to copy them over wifi.  I might just configure the wired NIC’s with another subnet range and connect the machines with a hub/switch when I need to deploy stuff.  Or maybe I’ll copy the templates over to the server using a USB disk and set up a library share on the server for a faster local copy.  That might just work!
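To make the wired-copy idea concrete, here’s a rough sketch.  The interface name, addresses, paths and share name are all invented for illustration, so treat it as a starting point rather than a recipe:

```powershell
# Give the wired NIC its own subnet - run the equivalent on the server with .2.
netsh interface ip set address "Local Area Connection" static 192.168.99.1 255.255.255.0

# Push the dynamic VHD templates to a share on the server for a faster local copy.
# /Z = restartable mode, /R and /W keep the retries sane over a home network.
robocopy "D:\Templates" "\\ML370\Library" *.vhd /Z /R:2 /W:5
```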


I was contacted last month by Eva Helen in Sanbolic to see if I’d be interested in learning more about their Melio FS product.  I knew little about it and was keen to learn.  So Eva got David Dupuis to give me a demo.  Dave just ran an hour long demo on LiveMeeting with me and I learned a lot.  I gotta say, I’m impressed with this solution.

There were two Sanbolic products that Dave focused on:

  • La Scala = Cluster Volume Manager used instead of disk manager.  It’s a shared volume manager.  It is aware of what nodes are attached to it. 
  • Melio = Cluster File System. 

La Scala

  • La Scala can mirror volumes across 2 SAN’s, allowing for survival of a total SAN failure.  Each server has two controllers or a dual channel HBA, one path going to each SAN.  1 write is converted to two writes on two paths.  In theory, there’s no noticeable performance hit for amazing fault tolerance.
  • On the fly volume expansion
  • Can use any block based shared storage iSCSI or fibre channel system
  • You can set up a task, e.g. expand disk, and review it before committing the transaction.
  • Windows ACL’s are integrated in the interface to control volume access rights.

I’ve got to say, the SAN mirroring is pretty amazing technology.  Note that performance will equal the slowest SAN.  It can take cheap storage solutions that might not even have controller/path fault tolerance and give them really high fault tolerance via redundant arrays and mirrored storage, with an imperceptible performance hit because the mirroring is done by simultaneous writes on 2 independent controller paths.


Melio

  • This is a 64-bit symmetrical cluster file system.
  • There is no coordinator node, management server, metadata controller, etc, that manages the overall system.  So there’s no redirected I/O mode *cheers from Hyper-V admins everywhere*
  • Metadata is stored on the file system and every node in the cluster has equal access to this.  This is contrary to the CSV coordinator in W2008 R2 failover clustering.
  • QoS (quality of service) allows per process or per file/folder file system bandwidth guarantees.  This allows granular management of SAN traffic for the controlled resources.  In the Hyper-V context, you can guarantee certain VHD’s a percentage of the file system bandwidth.  You can also use wildcards, e.g. *.VHD.  This is another very nice feature.
  • There is a VSS provider.  This is similar to how SAN VSS providers would work.  Unlike CSV, there is no need for redirected I/O mode when you snap/backup the LUN. 
  • There is a bundled product called SILM that allows you to copy (via VSS) new/modified files to a specified LUN on a scheduled basis.
  • Backup solutions like BackupExec that recognise the Melio VSS provider can use it to directly back up VM’s on the Melio file system.
  • MS supports this system, i.e. with Failover Clustering and VMM 2008 R2.  For example, Live Migration uses the file system.  You’ll see no CSV or cluster storage in Failover Clustering.  The Melio file system appears as a normal lettered drive on each node in the cluster.
  • By using advanced exclusive lock detection mechanisms that CSV doesn’t have, Melio can give near raw disk performance to VHD’s.  They say they have 57% faster VHD performance than CSV!
  • You can provide iSCSI accessed Melio file systems to VM’s.  You can license the product per host, which gives you 4 free VM licenses.
  • Melio isn’t restricted to just Hyper-V: web servers, SQL, file servers, etc.
  • Issues seen with things like AV on CSV aren’t likely here because there is no coordinator node.  All metadata is available to all nodes through the file system.  You need to be aware of scheduled scans: don’t have all nodes in the cluster doing redundant tasks.  The tip here: put a high percentage guarantee on *.VHD and the AV impact is controlled.

It’s got to be said that you cannot think of this as some messy bolt on.  Sanbolic has a tight relationship with Microsoft.  That’s why you see their Melio file system being listed as a supported feature in VMM 2008 R2.  And that can only happen if it’s supported by Failover Clustering – VMM is pretty intolerant of unsupported configurations.

Overall, I’ve got to say that this is a solution I find quite interesting.  I’d have to give it serious consideration if I was designing a cluster from scratch and the mirroring option raises some new design alternatives.

My $64,000,000 question has probably been heard by the guys a bunch of times but it got a laugh: “when will Microsoft buy Sanbolic and have you invested a lot in the company share scheme?”.  Seriously though, you’d think this would be a quick and superb solution to get a powerful cluster file system that is way ahead of VMFS and more than “just” a virtualisation file system.

Thanks to the kind folks at Sanbolic for the demo.  It’s much appreciated!


I joined this late due to a phone conference.

This is a System Center Influencers briefing on Data Protection Manager (DPM) 2010.

The Aims

  • Single supported solution for Microsoft workloads
  • Single agent, no workload licensing
  • Enterprise scalability in the 2010 release

New Workload Additions

  • Cluster Shared Volume
  • Exchange 2010
  • SharePoint 2010

It supports OS’s going back to XP SP2.


Workload Protection

  • Self-service end user restore from Explorer or Office
  • Self-service DBA restore from within SQL
  • Auto protection of new databases
  • Protect 1000’s of databases per DPM server
  • Recover 2005 DB’s to SQL 2008
  • Auto protection of new content databases in SharePoint farms
  • Protect the farm, restore the document
  • Optimizations for the new and many Exchange architectures


Hyper-V Protection

  • CSV support
  • Item level recovery from within a VHD
  • Alternate host recovery

Client Protection

  • 1000 clients per DPM server
  • “User data only”.  Don’t protect the entire machine.
  • Uses VSS in Vista and Windows 7
  • Policy allows you to protect specific folders, so there’s no end user set up.
  • User can restore from local VSS while offline, or DPM while online.
  • While offline, the PC continues to make VSS copies and will sync them to DPM when it is online again.

Fujitsu has launched a bundle for SME’s (small/medium enterprises) that want to do Hyper-V virtualisation for the very first time.  They’ve called it “My Very First Hyper-V”.  It includes servers, external storage, Windows Server 2008 R2 and System Center Virtual Machine Manager 2008 R2 Work Group Edition.  A flyer can be found here.

I wonder if they’ll replace the VMM installation with System Center Essentials 2010 when it is released.  That would make sense to me seeing as it’s aimed at this market and it gives software management, health & performance monitoring and VMM functionality.


Today I was working with a customer who needed to grow their hosted presence with us due to performance and scaling requirements.  OpsMgr PRO Tips alerts made us aware of certain things that got the customer and us working.  A machine was quickly deployed from a VMM library template to meet the sudden requirements.  That got me thinking about how OpsMgr and VMM could be used in a large virtualised (and even physical) application environment to scale out and in as required.  All of this is just ideas.  I’m sure it’s possible, I just haven’t taken things to this extreme.


Let’s take the above crude example.  There are a number of web servers.  They’re all set up as dumb appliances with no content.  All the content and web configurations are on a pair of fault tolerant content servers.  The web servers are load balanced, maybe using appliances or maybe by reverse proxies.  It’s possible to quickly deploy these web servers from VM templates.  That’s because the deployed machines all have DHCP addresses and they store no content or website configuration data.

The next tier in the application is typically the application server.  This design is also built to be able to scale out or in.  There is a transaction queuing server.  It receives a job and then dispatches that job to some processing servers.  These transaction servers are all pretty dumb.  They have an application and know to receive workloads from the queuing server.  Again, they’re built from an image and have DHCP addresses.

All VM templates are stored in the VMM library.

All of this is monitored using Operations Manager.  Custom management packs have been written and distributed application monitoring is configured.  For example, average CPU and memory utilisation is  monitored across the web farm.  An alert will be triggered if this gets too high.  A low water mark is also configured to detect when demand is low.

The web site is monitored using a captured web/user perspective transaction.  Response times are monitored and this causes alerts if they exceed pre-agreed thresholds. 

The Queuing server’s queue is also monitored.  It should never exceed a certain level, i.e. a level that means there is more work than there are transaction servers to process it.  A low water mark is also configured, e.g. there is less work than there are transaction servers.

So now OpsMgr knows when we have more work than resources, and when we have more resources than we have work for.  This means we only need a mechanism to add VM’s when required and to remove VM’s when required.  And don’t forget those hosts!  You’ll need to be able to deploy hosts.  I’ll come back to that one later.

Deploying VM’s can be automated.  We know that we can save the PowerShell script that VMM generates when we create a VM into the library.  Do that and you have your VM.  You can even use the GUIRunOnce option to append customisation scripts, e.g. naming of servers, installation of updates/software, etc.  Now you just need a trigger.  We have one.

When OpsMgr fires an alert it is possible to associate a recovery task with the alert.  For example, the average CPU/Memory across the web farm is too high.  Or maybe the response time across the farm is too slow.  Simple – the associated response is to run a PowerShell script to deploy a new web server.  10 minutes later and the web server is operational.  We already know it’s set to use DHCP so that’s networking sorted.  The configuration and the web content are stored off of the web server so that’s that sorted.  The load balancing needs to be updated – I’d guess some amendment to the end of the PowerShell script could take care of that.
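To give an idea of the shape of it (and no more than that), the recovery task could run a script along these lines.  I’m assuming the VMM 2008 R2 PowerShell snap-in; the server name, template name and naming scheme are made up, so verify the cmdlet parameters against the VMM documentation before trusting it:

```powershell
# Load the VMM snap-in and connect to a (hypothetical) VMM server.
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "vmm01.demo.local" | Out-Null

# Grab the web server template from the library.
$Template = Get-Template | Where-Object { $_.Name -eq "WebServerTemplate" }

# Keep it simple and take the first host - a real script would check host ratings.
$VMHost = Get-VMHost | Select-Object -First 1

# Deploy a uniquely named VM.  DHCP and the off-box content do the rest.
$Name = "Web-{0:yyyyMMddHHmmss}" -f (Get-Date)
New-VM -Template $Template -Name $Name -VMHost $VMHost -Path $VMHost.VMPaths[0] -RunAsynchronously
```

The load balancer amendment I mentioned would be tacked onto the end of a script like this.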

The same goes for the queuing server.  Once the workloads exceed the processing power a new VM can be deployed within a few minutes and start taking on tasks.  They’re just dumb VM’s.  Again, the script would need to authorise the VM with the queuing process.

That’s the high water mark.  We know every business has highs and lows.  Do we want to waste Hyper-V host resources on idle VM’s?  Nope!  So when those low water marks are hit we need to remove VM’s.  That one’s a little more complex.  The PowerShell script here will probably need to be aware of the right VM to remove.  I’d think about this idea: the deploy script would update a file or a database table somewhere.  Think of it like a buffer.  The oldest VM should then be the first one removed.  Why?  Because we Windows admins prefer newly built machines – they tend to be less faulty than ones that have been around a while.
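As a rough sketch only (the VM naming scheme is invented and the AddedTime property is an assumption to check against the VMM snap-in docs), the removal script could be as crude as retiring the oldest matching VM:

```powershell
# Connect to the (hypothetical) VMM server.
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "vmm01.demo.local" | Out-Null

# First in, first out: the oldest web VM is the one to retire.
$Oldest = Get-VM | Where-Object { $_.Name -like "Web-*" } |
    Sort-Object AddedTime | Select-Object -First 1

if ($Oldest -ne $null) {
    # De-authorise it with the load balancer here, before touching the VM.
    Shutdown-VM -VM $Oldest
    Remove-VM -VM $Oldest -Confirm:$false
}
```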

With all that in place you can deploy VM’s to meet demands and remove VM’s when they are redundant to free up physical resources for other applications.

What about when you run out of Hyper-V server resources?  The most basic thing you need to do here is know that you need to buy hardware.  Few of us have it sitting around and we run on budgets and on JIT (just in time) principles.  Again, you’d need to do some clever management pack authoring (way beyond me to be honest) to detect how full your Hyper-V cluster was.  When you get to a trigger point, e.g. starting to work on your second last host, you get an alert.  The resolution is buy a server and rack it.  You can then use whatever build mechanism you want to deploy the host.  The next bit might be an option if you do have servers sitting around and can trigger it using Wake-On-LAN.

ConfigMgr will run a job to deploy an operating system to the idle server.  It’s just a plain Windows Server installation image.  Thanks to task sequences and some basic Server Manager PowerShell cmdlets, you can install the Hyper-V role and the Failover Clustering feature after the image deployment.  A few reboots happen.  You can then add it to the Hyper-V cluster.  You can approach this one from other angles, e.g. add the host into VMM which triggers a Hyper-V installation.
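The role installation bit is pleasantly short on W2008 R2.  A task sequence step could run something like the below – the cluster name is made up and, as always, test it first:

```powershell
# Enable the Hyper-V role and the Failover Clustering feature on the fresh build.
Import-Module ServerManager
Add-WindowsFeature Hyper-V, Failover-Clustering -Restart

# A later task sequence step (after the reboots) would join the node to the cluster.
Import-Module FailoverClusters
Add-ClusterNode -Cluster "HVCluster1" -Name $env:COMPUTERNAME
```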

Now that is optimisation and dynamic IT!  All that’s left is for the robots to rise – there’s barely a human to be seen in the process once it’s all implemented.  I guess your role would be to work on the next generation of betas and release candidates so you can upgrade all of this when the time comes.

I’ve not read much about Opalis (recently acquired by Microsoft) but I reckon it could play a big role in this sort of deployment.  Microsoft customers who are using Server Management Suite licenses (SMSE/SMSD) will be able to use Opalis.  Integration packs for the other System Center products are on the way in Q3.


… we knew you so little!

Microsoft today announced the discontinuation of EBS as a product.  Sales will stop on June 30th.

It could be argued that EBS was a flop, unlike its “little” sibling SBS, which is a raging success.  EBS seemed like a stretch looking back on it.  It tried to do things that medium/large enterprise administrators hate.  It tried to squeeze as much functionality as possible into a 3 or 4 server package.  Many of the potential companies who were in the target market had already invested in multi server architectures for one reason or another.  They preferred to pick and choose the components and deploy them as and when required.  The all-in-one-package bundle isn’t as suitable for these medium sized companies as it is for the SBS customer.

The real target market was always going to be small and the complexity in building this package was huge, especially when you consider all of the different product groups and time lines involved.


TechNet Wiki – Hyper-V

I talked about the TechNet Wiki recently which was announced by Keith Combs.  I won’t hold it against him for being a Cowboys fan ;-)  Ben Armstrong just blogged about the Hyper-V part of the wiki and you can see what he’s said there.  So I guess that means it’s “live” in some way, shape or form.  If you feel like you can document some facet of Hyper-V better than what has been done previously, or if you know of some tricks/work arounds, then please add them.

You can find the wiki here.  I’m not a big fan of the landing page because I’ve not really found a way to get into the wiki from it.  Maybe I’m dumb :-)


Folks, this summer the following products will reach end of life, i.e. no support of any kind:

  • XP SP2 – upgrade to a newer service pack
  • Vista RTM – upgrade to a newer service pack
  • Windows 2000 – upgrade to Windows XP SP3 or later, Windows Vista SP1 or later, or Windows 7
  • Windows 2000 Server – upgrade to Windows Server 2003/2003 R2/2008 or migrate to Windows Server 2008 R2.

Go to the Microsoft product life cycle site for precise details.

For the server replacement, I’d strongly recommend you look at moving to an x64 server operating system.  Making the jump now will ease future upgrades.  A few notes:

  • Microsoft hates upgrades because they are messy.  Problems are inherited/created.
  • You cannot upgrade from x86 to x64 or vice versa.
  • You cannot upgrade from a full installation to a core installation.
  • You need the correct licensing for the server and the CAL’s.
  • Check application compatibility.
  • Test, test, test and verify with application/hardware vendors before making changes.

Redstone is one of Ireland’s leading enterprise hardware providers – I’ll be open and admit that I’m a (happy) blade and storage customer.  They are running this event today in cooperation with HP Ireland.  The goodie bag will in no way influence me :)

Today’s event will focus on Data Protector, HP’s backup solution, and how it can be used in a virtualised environment.  The majority of the attendees are using EVA/VMware.  About 1/4 are using Hyper-V.  A couple are using Xen and a couple are using an XP SAN.  No one here is using LeftHand.  About 1/5 are using Data Protector for their backups.

  • Virtualisation solves some problems but complicates backups.
  • We need to reduce backup costs – storage amounts.
  • We need to be able to reliably restore business critical data and secure sensitive data.

A common problem is that people rush head first into virtualisation without considering the strategy for backup.


  • VM level backup: The argument by the HP speaker is that this is resource intensive.
  • Host level backup: This “doesn’t” impact the performance of the host. Hmm.  There is an issue with recovered data consistency, e.g. is there Volume Shadow Copy integration to Windows VM’s?  SQL and Exchange don’t support this.

The speaker says Data Protector allows you to take both approaches to meet suitable requirements for each VM.

Data Protector 6.11 has VMware VCB and Hyper-V support.  The core product has a license, and then there is the traditional bolt-on license approach.  Virtualisation requires an “Online Backup” license.  The Zero Downtime Backup license allows integration into the snapshot features of your HP storage array.

Note: that’s probably the approach you’d go with for backup of a Hyper-V CSV due to the CSV coordinator/redirected I/O issue with host level backups – assuming this is supported by Data Protector.

For storage I/O intensive applications, Data Protector can take advantage of the ability to snapshot the targeted LUN’s.  You identify a LUN to backup, the SAN creates a copy, Data Protector backups up the copy while the primary continues to be used by the application/users.  This can be a partial copy for normal backup/recovery to save storage space/costs on the SAN.  You can do a full copy of the LUN for “instant recovery”, i.e. Data Protector restores file(s) from the copy of the LUN.  This requires additional per TB licensing.  The partial copy cannot do “instant recovery” because it links back to the original storage and isn’t completely independent.  There’s a cost for these two solutions so you save it for the mission critical, storage performance sensitive data/applications.  You can do this on a replicated partner SAN to do backups in your DR site instead of in the production site.  These solutions require the VSS integrations for the storage arrays.  Note that this isn’t for VM snapshots.

Zero Downtime Backup and Instant Recovery can be done in VMware if the VM uses raw device mapping (pass through disks).

Hyper-V Backup Methods

  • In VM agent
  • VSS system provider snapshots
  • VSS hardware provider snapshots
  • Full restore of VM
  • Partial restore of files
  • Offline backups for VM’s
  • Zero downtime backup
  • Instant recovery

I would guess the last two require passthrough disks.  Might be a solution for SQL/Exchange VM’s.

Really, you will end up with a combination of backup methods across the data centre, depending on VM’s, applications, and backup/recovery times/impacts.

After coffee, we had some demos of VMware backups that didn’t go so well for the HP speaker.

In summary, Data Protector gives you some HP storage integrated backup options.  Be careful and ensure that servers, OS’s, and applications support the backup type being used.

Although HP and Microsoft have announced their “Forefront” virtualisation alliance, there’s still a lot of catch up going on with regards to Hyper-V knowledge and sharing.  Thanks to Redstone for organising this up in their scenic office in the Wicklow mountains – not exactly a bad place to be just after sunrise.


Microsoft is holding an event on March 18th at 9am PST (GMT-8) focusing on desktop virtualisation.  You can find more details and mark it in your calendar by visiting the official site.  As Jeff Wettlaufer put it:

“Looking at desktop virtualization including VDI? Thinking about migrating to Windows 7? Want savings, but unsure of the tradeoffs? Have more questions than answers on the topic?”.

Hopefully this session will answer those questions for you.


Microsoft published a guide for implementing System Center Operations Manager 2007 R2:

“This guidance provides information on the implementation of System Center Operations Manager (SCOM) 2007 for the monitoring and management of Windows servers. It provides the information necessary to create an Operations Manager 2007 design, the procedures for installing and configuring the Operations Manager 2007 server roles and agents, and guidance for managing an Operations Manager 2007 solution”.


Microsoft did a webcast on March 1st aimed at VMware administrators/engineers/consultants who are interested in, or will be working with Hyper-V.

The fan-boys will be thinking negative thoughts and wishing me ill will now :)

Realistically, you need to start thinking of hardware virtualisation as being like hardware.  Some companies like HP, some like Dell, and some like Fujitsu – who really likes IBM?  I’m kidding; I don’t really care who likes IBM hardware.

This means that although a company may have a preference, they will have variations depending on circumstances.  For example, we’re told that VMware has a presence in every single Fortune 100 in the USA.  But do you think none of them are either using or considering Hyper-V as well?  There may be features that ESX offers that they use, but Hyper-V offers virtualisation at a much lower price.  Bundle in System Center and you have a complete management solution rather than a point one.  With VMM you can manage both ESX (and ESXi) and Hyper-V.  Only the biggest of fan-boys will rule out Hyper-V making its way into some VMware sites to work alongside it, just like you find a mix of server vendor types in some computer rooms.

The services industry is another interesting one.  This time last year, I could really only think of one, maybe two, services companies in Ireland that I would call if I was in need of Hyper-V consulting skills.  Lots of them went to events, but they were all sticking to their VMware guns.  It was probably a combination of internal evaluations and customer decision making that drove this.  But since last summer, things shifted slightly.  Hyper-V is mentioned more as a skills requirement.  And thanks to the HP/Microsoft virtualisation alliance, HP resellers are starting to gather skills.  One of the major players in the Irish enterprise hardware space was laughing at Hyper-V a year ago.  Then they started to lose big virtualisation bids to the few companies going in with Hyper-V solutions.  CSV and Live Migration changed everything.  Customers now were happy to get the core features at a fraction of the price.

If you are a VMware person, give the webcast a watch.  Most of the criticisms of Hyper-V by fan-boys are usually based on lack of knowledge, e.g. the famous “9 things” post that was widely slammed for being ill-informed.


Microsoft published a bunch of guides for engineers and administrators who work with ConfigMgr 2007:

  • System Center Configuration Manager 2007 Deployment Guide: This guidance provides information on how to design and deploy a Configuration Manager infrastructure within a healthcare organization. It allows the healthcare organization to be confident that the Configuration Manager infrastructure being designed and deployed is using current best practice.
  • System Center Configuration Manager 2007 Operating System Deployment Guide: This guidance helps healthcare organizations when implementing and using the operating system deployment feature of Configuration Manager. This guidance provides the information required to quickly become familiar with the operating system deployment feature and understand the appropriate decisions that need to be made in order to deploy and use the solution. It also provides step-by-step guidance showing how to install and configure the required components, and also how to use the most common features.
  • System Center Configuration Manager 2007 Software Distribution Guide: This guidance provides the information required to quickly become familiar with the software distribution feature and understand the appropriate decisions that need to be made in order to deploy and use the solution. It also provides step-by-step guidance showing how to create the objects required within Configuration Manager to perform the software distribution.
  • System Center Configuration Manager Software Update Management Guide: This guidance provides the information required to quickly become familiar with the software update feature, and understand the appropriate decisions that need to be made in order to deploy and use the solution. It also provides step-by-step guidance showing how to install and configure the required components, and how to use the most common features.



TechNet 2.0 Goes Live

Keith Combs just tweeted that TechNet V2.0 is live.  It’s got a whole new look to it. 


Microsoft has published guidance on how to size your OpsMgr 2007 R2 installations:

“The Operations Manager 2007 R2 Sizing Helper is an interactive document designed to assist you with planning & sizing deployments of Operations Manager 2007 R2. It helps you plan the correct amount of infrastructure needed for a new OpsMgr R2 deployment, removing the uncertainties in making IT hardware purchases and optimizes cost. A typical recommendation will include the recommended hardware specification for each server role, topology diagram and storage requirement. The Operations Manager Sizing Helper is most useful when used with the Operations Manager 2007 R2 Design Guide”.


Microsoft has published an Active Directory design guide:

“This guidance provides general recommendations for the design, deployment and management of an Active Directory environment in a healthcare organization according to current best practices. The purpose of this guidance is to accelerate Active Directory design and deployment in a healthcare organization, and provide a framework for a more consistent network operating environment”.


I saw this one last night for myself, and I’ve just seen a week-old post by Mike Briggs on the subject.  When you deploy KB978560 to your VMM 2008 R2 server, it will require an update to the host agents.  You’ll see a yellow exclamation mark icon appear on your hosts.  When you check their status, you’ll see that you must take manual action to resolve the issue.  Simply right-click on the managed hosts, update the agent, and provide any required credentials.  It takes a minute or two, and then your “issue” is resolved.

Be sure to put the hosts into maintenance mode in OpsMgr if you’re using it.  Otherwise, you’ll get a bunch of alerts for every host you upgrade.


Patrick Lownds, a fellow virtualisation MVP over in the UK, has provided a couple of useful links if you are running Hyper-V on HP equipment.  The first is a post on best practice guidance for running Hyper-V on an HP EVA SAN.  There is a whitepaper that goes through HP’s recommendations on this.  It was interesting to see that fixed VHDs got 7% more IOPS at 7% lower latency than dynamic VHDs.

The PRO Tips for HP are also available.  They’re not easy to find, but Patrick provided me with a link.  The idea here is that the HP SIM agents (which you should be installing, even if you don’t use HP’s or other management software) detect hardware issues.  OpsMgr then picks up the alert and notifies VMM via the HP PRO Tips.  VMM can then take action, e.g. migrating VMs from one host to another in the cluster.


KB976002 describes which operating systems will receive a choice of Internet browser and how this process will work.  This will bring Microsoft into compliance with the much discussed demands of the European Union on this subject.  The affected operating systems are:

  • Windows XP Service Pack 2 and Windows XP Service Pack 3
  • All editions of Windows Vista
  • All editions of Windows 7
  • Future versions of the Windows client operating system that are released within the duration of the agreement with the European Commission

Some more information on the process can be found on Stealth Puppy.  I’ve not seen the update yet but it appears to be delivered by Windows Update.  If you don’t have Windows Update enabled then I guess you don’t get a choice.

If you are running tightly controlled corporate PCs, then you’ll be glad to hear that you can prevent the update from being deployed via WSUS/ConfigMgr/etc.  According to KB2019411, you can also use the registry (and therefore Group Policy) to prevent the update from executing:

  • Key: HKLM\Software\BrowserChoice
  • Value: Enable (REG_DWORD)
  • Possible settings: Enabled = 1, Disabled = 0
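As a sketch, the setting above could be pushed out as a small .reg file (this just restates the key/value from KB2019411; I haven’t verified it against the shipped update myself):

```reg
Windows Registry Editor Version 5.00

; Prevent the Browser Choice update from executing (per KB2019411)
; Enable = 0 disables the browser choice screen; Enable = 1 allows it
[HKEY_LOCAL_MACHINE\Software\BrowserChoice]
"Enable"=dword:00000000
```

The same value could equally be set via a Group Policy registry setting or a startup script, whichever fits your environment.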