Microsoft has posted my Windows Server 2012 R2 Hyper-V session on the Microsoft Ignite schedule builder.


Note that it should read “Windows Server 2012 R2”.

Currently, the day/time is January 1st at 12am. Yup, there will be fireworks and some auld lang syne. Please ignore the day/time and add the session to your builder if you are interested in the content. Hopefully a day/time will be fixed soon.


There is a term that I’ve heard for a while when talking to Microsoft program managers, and it has started to be used publicly by Microsoft staff. I read it in a post by Ben Armstrong:

If you are already on 10049 and have not yet enabled Hyper-V, you can either follow the above steps, or hang tight while we work on the next flight!

Rick Claus also used the term in the latest episode of the Ignite Countdown show.

Like all cloud services, in case you don’t know, this is what we’re doing with regards to flighting new things into it.

Microsoft’s Gabe Aul also explained the term in a Blogging Windows post on March 18th:

… we’ll have some weeks where we expect builds to flow out (we call them “flighting windows”) and some where we’ll hold back

And the term was also used by Aul when he explained the frequency of builds for Windows Insiders:

… we’d have a candidate build, and we’d flight that out broadly within MS to make sure we could find any gotchas …

So what are they talking about? You’ve probably heard that Windows 10, when it RTMs, isn’t “finished”. In fact, it’ll probably never be a finished product in the view of Microsoft until they release Windows 11 (if there is one). Microsoft will be updating this OS on a regular basis, adding new functionality. I know we’ve heard that sort of thing before, but it’s real this time. Windows Insiders are seeing it now, and the reality is that Microsoft’s development process was changed quite a bit after Windows 8.1 to make this possible. We know from TEE14 that the same happened to Windows Server to make it work more seamlessly with Azure.

This approach is taken from cloud computing and lightweight phone/tablet OSs:

  • You release a block of code that is developed and tested to a stable point.
  • There is a stack rank of additional features and changes that you wanted to implement but didn’t have the person-hours to complete.
  • You get feedback and that modifies the stack rank.
  • The market changes and more features are added to the stack rank.
  • You code/test some new stuff over a short period and release it.

This release is a flight and the process is flighting. It’s just another way of saying “release”. I guess “release” in a dev’s mind is a big irregular event, whereas a flight is something that happens on a regular basis.

In the Microsoft world, we see flights all the time with Azure and quite frequently with SaaS such as O365 and Intune. Windows is moving this way too. The result is that you get regular improvements of the product instead of a big release every 1, 2, or 3 years. Microsoft can be more responsive to feedback and change. Consumers will love this. Businesses will get control over the updates, but I suspect, as we saw with the April 2014 update (AKA “Update 1”) that came into force in August 2014, there will be a support baseline update every now and then to ease the difficulty for Microsoft of supporting Windows.


It’s April Fool’s Day, and the new pricing system for Azure Backup comes into force today. Make of that what you want :D

I am not a fan of the new pricing system. I am all for costs coming down, but I can say from 8 months of selling Azure, complex pricing BLOCKS sales efforts by Microsoft partners. The new system isn’t just “price per GB” but it also includes the abstract notion of an “instance”.  A new blog post by Microsoft attempts to explain clearly what an instance is.

I’ve read it. I think I understand it. I know that no MSFT partner sales person will read it, our customers will call me, and when I explain it to them, I know that a sale will not happen. I’ve seen that trend with Azure too often (all but a handful of occasions) to know it’s not a once-off.

Anyway … enjoy the post by Microsoft.


The details of my session have been confirmed. The session is called “The Hidden Treasures of Windows Server 2012 R2 Hyper-V”, and the description is:

It’s one thing to hear about and see a great demo of a Hyper-V feature. But how do you put them into practice? This session takes you through some of those lesser-known elements of Hyper-V that have made for great demonstrations, introduces you to some of the lesser-known features, and shows you best practices, how to increase serviceability & uptime, and design/usage tips for making the most of your investment in Hyper-V.

Basically, there’s lots of stuff in Hyper-V that many folks don’t know exists. These features can make administration easier, reduce the time to get things done, and even give you more time at home. These are the hidden treasures of Hyper-V, and are there for everyone from the small biz to the large enterprise.

I went with WS2012 R2 because:

  • That’s the Hyper-V that you can use in production now.
  • We’re a long way from the release of vNext.
  • There’s lots of value there that most aren’t aware of.
  • Plenty of excellent MSFT folks will be talking about vNext.

The session isn’t on the catalogue yet but I expect it to be there soon.


Welcome to the Azure Times! Or so it seems. Lots of Azure developments since I posted one of these news aggregations.

Windows Client



Office 365


Anyone working in “cloud computing” in Ireland had heard that the Irish government had launched a process to deploy a “private cloud” that would be engineered by external service providers, but owned and located by the Irish state. It sounded like the project from hell/heaven, with a list of pre-approved cloud vendors/services.

The Irish Times reports that this project has been cancelled, and instead, they’re going with a shared computing model based on a single Government-owned cloud.

In my opinion, this is the way forward. Now I wonder if Microsoft will pitch CPS at this :)


I’ve voted on a number of feedback items in Azure, mainly in backup, and I’m delighted to see that feedback having an impact.

I was presenting last month on Azure to partners in Northern Ireland when I was able to talk about an email I had received that morning that announced new features (seeding backup by disk, increased retention, and complex retention policies) that had been based on feedback.

Today, I got an email to confirm that another voted item, the ability to back up running VMs in Azure using Azure Backup, had been announced – I’m actually playing with it right now.


Feedback via this forum works. It is public and measured, and it’s much more effective than complaining to your local Microsoft reps (some of whom are less effective than others). So give Microsoft the feedback! Don’t just say “I want X”. Instead, say “I want X because it will allow Y and Z”; a full scenario description is what the program managers need to understand the request.

My tip: partners working with Open licensing need a centralized admin portal.


I really enjoyed presenting today on the next version of Hyper-V with Rick Claus (Microsoft) and Andrew Syrewicze (Hyper-V MVP). We had some tech glitches at the start and during the session, which always makes a session memorable :)

We ran out of time at the end. Andy was the moderator but his ISP crapped out, so we didn’t get a chance to do Q&A properly.

If you have any questions then please either hit us on Twitter or post a comment below.

Thank you to Altaro for hosting this webinar! Make sure to check out their excellent backup products, which also feature a free version.


Nothing will make a Hyper-V admin bald faster than storage issues. Whether it’s ODX on HP 3par or networking issues caused by Emulex, even if the blip is transient, it will crash your VMs. This all changes in vNext.

The next version of Hyper-V is more tolerant of storage issues. A VM will enter a paused state when the hypervisor detects an underlying storage issue. This will protect the VM from an unnecessary stoppage in the case of a transient issue. If the storage goes offline just for a few seconds, then the VM goes into a pause state for a few seconds, and there are no stoppages, reboots, or database repairs.


I was forwarded an email today from a VMware distributor that informs VMware authorised partners that their prices are going up.

No customer buys software directly from the big software vendors. Typically the path is either:

  • Manufacturer > Distributor > Reseller > Customer
  • Manufacturer > Large account reseller > Large customer

Each link in the chain (or channel) makes a small percentage. There is a “price list” at the top of the chain, but that is often discounted. Discounts are applied to large deals, and that discount can vary depending on sales targets for the product, what is included in the deal (adding more can sometimes reduce the original price), the time in the sales cycle and the size of the deal. In the case of VMware, few ever pay the prices listed on their website.
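To make the mechanics concrete, here’s a toy sketch of how a top-of-chain discount and per-link margins determine what the customer pays. All the percentages are invented for illustration; they are not VMware’s (or anyone’s) real figures.

```python
# Toy model of a software sales channel. All percentages are invented
# for illustration; real discounts and margins vary per deal.
def channel_price(list_price, vendor_discount, link_margins):
    """Price the end customer pays after the vendor discounts the list
    price and each channel link (distributor, reseller) adds a margin."""
    price = list_price * (1 - vendor_discount)  # discounted price into distribution
    for margin in link_margins:                 # distributor, then reseller
        price *= (1 + margin)
    return price

# EUR 1,000 list price, 20% deal discount, distributor adds 5%, reseller adds 10%
print(round(channel_price(1000, 0.20, [0.05, 0.10]), 2))
```

Shrink the vendor discount while holding the street price steady, and the vendor’s take goes up while the links in between get squeezed – which is exactly the complaint in the email.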

This is the email sent out to VMware authorised partners:


VMware are reducing those discounts, giving VMware more earnings and reducing the profitability of VMware software to partners.

Do note that any reseller that has a business plan to make profit from licensing needs to sell A LOT of licenses. Real profits for resellers come in services, not in s/w or tin.


Dynamic Memory was added in W2008 R2 SP1 to allow Hyper-V to manage the assignment of memory to virtual machines with the feature enabled, based on the guest OS’s demands. Today in WS2012 R2, a VM boots up with a start-up amount of RAM, can grow, based on in-guest pressure and host availability, up to the maximum amount allowed by the host administrator, and shrink to a minimum amount.

But what if we need more flexibility? Not all workloads are suitable for Dynamic Memory. And in my estimation, only about half of those I encounter are using the feature.

The next version of Hyper-V includes hot memory resizing. This allows you to add and remove memory from a running virtual machine. The operation is done using normal add/remove administration tools. Some notes:

  • At this time you need a vNext guest OS
  • You cannot add more memory than is available on the host
  • Hyper-V cannot remove memory that is being used – you are warned about this if you try, and any free memory will be de-allocated.

This new feature will save a lot of downtime for virtualised services when administrators/operators need to change the memory for a production VM. I also wonder if it might lead to a new way of implementing Dynamic Memory.


Richard Campbell, the host of the RunAs Radio podcast, saw a tweet from me talking about Azure AD, and thought he should ask me back on to have a chat. I had been working on developing training materials on RemoteApp. The use-case that caught my interest was the ability to use RemoteApp to remove the effect of latency. We talk for half an hour about this.


The “design” I talk about in the podcast (recorded a few weeks ago) works, and I’ve presented using it. I’ve written some posts on Petri.com about my experiences:

In the design, virtual DCs, file server, and application servers run as VMs in an Azure network. RemoteApp publishes applications on another network. A VNET to VNET VPN connects the server and RA networks, enabling the RA session hosts to join the domain. Users log into RemoteApp, and then it’s all normal RDS at that point:

  • GPO applies
  • Login script runs
  • Published applications have fast access to application servers
  • Users save data in the company’s Azure VMs

It’s a nice solution!


A Career Of T-Shirts

I was doing a major clean-out of the darker recesses of our house recently and found many nerd-shirts, most of which were thrown out. They brought back a lot of memories.


My first job out of college had me working as a UNIX developer … can you believe that!?!?! The project ended, and some of my Linux-to-Windows porting work led to me being transferred to our budding Microsoft consulting team. And it was there that I got training and certification from Citrix on WinFrame (now XenApp). That was the start of my journey to here.


I spent most of my early days working with my employer’s brand of Intel servers (fridge-sized machines with 12 x 9 GB SCSI drives) and our storage system in the lab, setting up proof of concepts and demo labs.


I left there after 4 years to spread my wings. That was the start of my many years of working with HP hardware.


I lost my job a week after 9/11. The consulting company’s directors decided to re-launch as a “dot com”. I realized that my skills had not been developed and I struggled to find work. I was unemployed, and spent just about every waking hour getting a W2000 MCSE. A few weeks after that, I was employed again.


I -hated- that job. Actually, hate might not be a strong enough word. I was doing field engineering. It was an awful experience. And I moved on a few months later.


After some time contracting I got a job working for a German (but Irish headquartered) finance company, merging 9 international offices and upgrading them from Windows NT 4.0/Office 97 to Windows XP with a W2003 forest. I -loved- that job; the responsibility of designing and “owning” the global infrastructure (eventually 17 locations) was a rush. This was when I started working with virtualisation in work (Virtual Server and Virtual PC) and with pre-System Center (SMS and MOM) products, and was the start of my path to here.


Unfortunately, the directors (who ended up being chased by German prosecutors) decided to move IT to Stuttgart, while making us redundant. That ended up backfiring big-time – we were state of the art and the German consultants both hadn’t a clue and were extremely expensive. I wore the above t-shirt for my exit interview. There’s a story behind that which I won’t tell, but the German HR executive looked like she had shat herself when I walked into the room :)


Oh yes, not only did I start off by programming on UNIX, but I worked happily on VMware before I switched to Hyper-V.


The work that I was doing at the time led to me being of interest to the local Microsoft office. I started to get involved in the local community, I had been blogging for a couple of years, and I was asked to present at the Irish launch of W2008.


My community work grew and eventually led to me being awarded MVP status in … SCCM :) A year later, I was switched to the Hyper-V expertise, reflecting what I was then working with and writing about.


Books were written, travel to events was done, and I staffed a booth at TechEd in Berlin. I wore one of the blue “plastic” Microsoft shirts. Just 5 minutes in one of these and you had black armpits all the way down to your waist.


A couple of years ago I signed up with the Petri IT Knowledgebase to write about Microsoft virtualization and then my role expanded to write op-eds and other article types on other things. It’s been fun to branch out a little, and reach a bigger audience. Now the site has added Paul Thurrott (under his own URL) and more staff to cover other areas, and that audience is growing.


Last year was a big one, career-wise. I recommended some new brands to distribute. One of those was DataON, and that has become quite a business for us, not just in Ireland, but across Europe. I took part in the TechEd North America Speaker Idol … and won a speaking slot at Ignite this year in Chicago. And I was awarded a speaking role at TechEd Europe, got one of the larger rooms, and was rated the 6th most effective speaker overall! And then I popped the question and got engaged at Christmas, finishing 2014 on a real high.


Have you ever built a Hyper-V virtual machine with 2 or more NICs, each on different networks, and struggled with assigning IP stacks in the guest OS? I sure have. When I was writing materials with virtual SOFS clusters, I often had 3-6 NICs per VM, each requiring an IPv4 stack for a different network. I needed to ensure that VMs were aligned and able to talk on those networks.

With modern physical hardware, we get a feature called Consistent Device Naming (CDN). This allows the BIOS to name the NIC and that’s how the NIC appears in the physical WS2012 (or later) install. Instead of the random “Local Area Connection” or “Ethernet” name, you get a predictable “Slot 1 1” or similar, based on the physical build of the server.

With Windows Server vNext, we are getting something similar, but not identical. This is not vCDN as some commentators have called it because it does require some work in the guest OS to enable the feature. Here’s how it works (all via PowerShell):

  1. You create a vNIC for a VM, and label that vNIC in the VM settings (you can actually do that now on WS2012 or later, as readers of WS2012 Hyper-V Installation and Configuration Guide might know!).
  2. Run a single cmdlet in the guest OS to instruct Windows Server vNext to name the connection after the adapter.

Armed with this feature, the days of disconnecting virtual NICs in Hyper-V Manager to rename them in the guest OS are numbered. Thankfully!


I am happy to say that I will be speaking at Microsoft Ignite, running in Chicago from the 4th until the 8th of May, 2015.

I am not 100% sure yet, but it looks like I’ll be presenting the same session as I did in Barcelona. I’ll have to change it up a little :) And I do not know the day/time/room yet.

I’m looking forward to seeing Chicago. I’ve been through O’Hare a lot, only getting as far as the inter-terminal shuttle. I might even try a deep dish pizza, even if Jon Stewart thinks that it’s the work of the devil.

Hopefully I’ll see you at my session,




Microsoft has corrected and changed the description of the new pricing for Azure Online Backup that comes into effect on April 1st. This is after the owners of the website royally screwed the pooch in February with a confusing and incorrect posting.

NOTE: I am redacting this post because no one is able to explain what a “protected instance” is. Until then, while Azure Backup is great technically, and could be cheap, I have no idea how much it will cost.

As with all posts regarding licensing or pricing on this site, I will not be answering questions. Ask your reseller, distributor or LSP – they’re the people you are paying so they are the people who can do the work.

With the new pricing you no longer pay (using North Europe pricing in Euros) €0.149 per GB stored in Azure per month. Instead the pricing is broken into 2 pieces:


Think of an instance as the block of data that has to be protected.

The charge per instance depends on the size of the instance. Sigh! I do not know if that size is based on the data protected or the total amount of disk space in the instance. Another sigh!

  • Less than 50 GB: €3.7235 per instance
  • 50 GB to 500 GB: €7.447 per instance
  • Larger than 500 GB: Multiples of the 50-500 GB charge


Azure Online Backup will use Block Blob Storage. You can use either LRS (3 copies in 1 data center) or GRS (3 copies in 1 data center, and 3 async copies in another region) at a higher cost.



The end result is that for most customers, the pricing will come way down.

1 File Server with 30 GB on LRS Storage

  • 1 instance: €3.7235
  • Storage: 30 GB * €0.0179 (LRS) = €0.54

Total = €4.26

4 Machines with 80 GB each on LRS Storage

  • 4 instances: €7.447 * 4 = €29.788
  • Storage: 80 GB * 4 * €0.0179 (LRS) = €5.73

Total = €35.52

1 Machine with 1400 GB on GRS Storage

  • 3 instances (3 * 50-500 GB): €7.447 * 3 = €22.341
  • Storage: 1400 GB * €0.0358 (GRS) = €50.12

Total = €72.46
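If I’ve understood the model correctly, the examples above can be checked with a few lines of code. The tier charges and per-GB rates are the North Europe figures quoted in this post; treat them as illustrative rather than authoritative.

```python
# Sketch of the new Azure Backup pricing model (North Europe, EUR).
# Rates are the figures quoted above; actual Azure pricing may differ.
import math

LRS_PER_GB = 0.0179   # locally redundant storage, per GB per month
GRS_PER_GB = 0.0358   # geo-redundant storage, per GB per month
TIER_SMALL = 3.7235   # instance smaller than 50 GB
TIER_LARGE = 7.447    # instance of 50-500 GB

def instance_charge(size_gb):
    if size_gb < 50:
        return TIER_SMALL
    # instances larger than 500 GB pay multiples of the 50-500 GB charge
    return TIER_LARGE * math.ceil(size_gb / 500)

def monthly_cost(size_gb, count=1, grs=False):
    rate = GRS_PER_GB if grs else LRS_PER_GB
    return count * (instance_charge(size_gb) + size_gb * rate)

print(round(monthly_cost(30), 2))              # 1 file server with 30 GB, LRS
print(round(monthly_cost(80, count=4), 2))     # 4 machines with 80 GB each, LRS
print(round(monthly_cost(1400, grs=True), 2))  # 1 machine with 1400 GB, GRS
```

Note that the 1400 GB machine counts as 3 instances (multiples of the 50-500 GB band), which is where most of the confusion around “protected instances” comes from.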


As I have told some people in Redmond, the added complexity to Azure Online Backup pricing is indicative of everything that is wrong with Azure pricing. The only blocker I’m seeing in Azure sales is that sales people cannot get their heads around the wildly varied and complicated pricing. I really don’t care what AWS does – I don’t work with AWS and what they do to limit their own sales is their issue. Microsoft needs to fix the pricing structure of Azure to grow it the way they want, and need, to.


You might have noticed a trend that there are a lot of features in the next version of Hyper-V that increase uptime. Some of this is by avoiding unnecessary failovers. Some of this is by reducing the need to shutdown a VM/service in order to do engineering. This is one of the latter.

Most of the time, we only need single-homed (single vNIC) VMs. But there are times where we want to assign a VM to multiple networks. If we want to do this right now on WS2012 R2, we have to shut down the VM, add the NIC, and start it up again.

With Hyper-V vNext we will be able to hot-add and hot-remove vNICs, and this much-requested feature will save administrators some time and grief from service owners.


One of the bedrocks of virtualization or a cloud is the storage that the virtual machines (and services) are placed on. Guaranteeing performance of storage is tricky – some niche storage manufacturers such as Tintrí (Irish for lightning) charge a premium for their products because they handle this via black box intelligent management.

In the Microsoft cloud, we have started to move towards software-defined storage based on the Scale-Out File Server (SOFS) with SMB 3.0 connectivity. This is based on commodity hardware, and with WS2012 R2, we currently have a very basic form of storage performance management. We can set:

  • Maximum IOPS per VHD/X: to cap storage performance
  • Minimum IOPS per VHD/X: not enforced, purely informational

This all changes with vNext where we get distributed storage QoS for SOFS deployments. No, you do not get this new feature with legacy storage system deployments.

A policy manager runs on the SOFS. Here you can set storage rules for:

  • Tenants
  • Virtual machines
  • Virtual hard disks

Using a new protocol, MS-SQOS, the SOFS passes storage rule information back to the relevant hosts. This is where rate limiters will enforce the rules according to the policies, set once, on the SOFS. No matter which host you move the VM to, the same rules apply.


The result is that you can:

  • Guarantee performance: Important in a service-centric world
  • Limit damage: Cap those bad boys that want everything to themselves
  • Create a price banding system: Similar to Azure, you can set price bands where there are different storage performance capabilities
  • Offer fairly balanced performance: Every machine gets a fair share of storage bandwidth

At this point, all management is via PowerShell, but we’ll have to wait and see what System Center brings forth for the larger installs.
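None of this is the MS-SQOS wire protocol, but the “maximum IOPS” cap described above can be pictured as a per-window rate limiter running on each host. Here’s a toy sketch; the class name and the one-second window are my own invention, not how Hyper-V implements it.

```python
# Toy illustration (NOT the real MS-SQOS mechanism) of a per-VHD
# maximum-IOPS rate limiter enforced on the host.
class IopsLimiter:
    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.issued = 0          # IOs issued in the current 1-second window

    def try_issue(self, n=1):
        """Allow n IOs only if the per-second cap is not exceeded."""
        if self.issued + n > self.max_iops:
            return False         # the host delays the IO to the next window
        self.issued += n
        return True

    def next_window(self):
        self.issued = 0          # a timer resets the window every second

limiter = IopsLimiter(max_iops=500)
allowed = sum(limiter.try_issue() for _ in range(600))
print(allowed)  # only 500 of the 600 attempted IOs pass in one window
```

The point of doing this on the host, driven by policies held centrally on the SOFS, is that the cap follows the VM wherever it is live migrated.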


I just had a conversation with a customer about a Hyper-V/System Center deployment that they are planning. They have multiple branch offices and they want to deploy 2 VMs to each, and manage the hosts (hardware included) and guest OS/services with System Center. The problem was the cost of System Center – not a new story for SMEs, even larger ones, thanks to the death blow dealt by MSFT to System Center sales in the SME space in 2012.

This customer was looking at purchasing 1 Standard SML per site. The lack of density was increasing costs – using a centralized deployment with Datacenter SMLs would have been more cost effective. But they needed VMs for each site.

But I knew a trick:

Customers can use the license mobility benefits under Software Assurance to assign their System Center 2012 license to a Windows Server instance running on Azure. A System Center Standard license can be used to manage 2 VM instances; a System Center Datacenter license can be used to manage 8 VM instances.

What if the customer did this:

  • Deployed the VMs in Azure instead of on-premises Hyper-V
  • Shared the services via RemoteApp
  • Managed the guest OS and services using Datacenter SMLs, thus getting the cost/density benefits of the DC license.

As it turns out, the solution wasn’t going to work because the regional sites suffer from Irish rural broadband – that is, it’s too poor to download at more than a few Mbps, let alone upload.

But this is something to keep in mind for getting density benefits from a DC SML!
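As a back-of-envelope sketch of that density argument: the VM counts per SML come from the license mobility text quoted above (Standard manages 2 VM instances, Datacenter manages 8); the 10-site figure is hypothetical.

```python
# Back-of-envelope SML density comparison. VM-per-license counts come
# from the license mobility terms quoted above; the site count is made up.
import math

def standard_smls_per_site(sites):
    """1 Standard SML per site, each covering that site's 2 VMs."""
    return sites

def datacenter_smls_pooled(sites, vms_per_site=2, vms_per_dc_sml=8):
    """Centralise the VMs (e.g. in Azure) and pool them under DC SMLs."""
    return math.ceil(sites * vms_per_site / vms_per_dc_sml)

# 10 branch offices, 2 VMs each
print(standard_smls_per_site(10), datacenter_smls_pooled(10))  # → 10 3
```

Whether 3 Datacenter SMLs beat 10 Standard SMLs on price depends on your agreement, but the density win is what makes the centralised option worth costing out.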


Most of us have dealt with some piece of infrastructure that is flapping, be it a switch port that’s causing issues, or a driver that’s causing a server to bug-check. These are disruptive issues. Cluster Compute Resiliency is a feature that prevents unwanted failovers when a host is having a transient issue. But what if that transient issue is repetitive? For example, what if a cluster keeps going into network-isolation and the VMs are therefore going offline too often?

If a clustered host goes into isolation too many times within a set time frame then the cluster will place this host into quarantine. The cluster will move virtual machines from a quarantined host, ideally using your pre-defined migration method (defaulting to Live Migration, but allowing you to set Quick Migration for a priority of VM or selected VMs).

The cluster will not place further VMs onto the quarantined host and this gives administrators time to fix whatever the root cause is of the transient issues.
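The quarantine decision described above boils down to counting isolation events in a sliding time window. Here’s a conceptual sketch; the class name, threshold, and window length are hypothetical, not the cluster’s actual defaults.

```python
# Conceptual model of host quarantine: quarantine a host that is
# isolated too often within a sliding window. Threshold and window
# values are hypothetical, not the cluster's real defaults.
from collections import deque

class QuarantineMonitor:
    def __init__(self, max_isolations=3, window_seconds=3600):
        self.max_isolations = max_isolations
        self.window = window_seconds
        self.events = deque()    # timestamps of recent isolation events

    def record_isolation(self, now):
        """Return True if this isolation event should trigger quarantine."""
        self.events.append(now)
        # drop events that have fallen out of the sliding window
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.max_isolations

mon = QuarantineMonitor()
print(mon.record_isolation(0))    # first isolation: no quarantine
print(mon.record_isolation(100))  # second: still no quarantine
print(mon.record_isolation(200))  # third within the window: quarantine
```

Once quarantined, the host is drained of VMs and excluded from placement until an administrator deals with the root cause.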

This feature, along with Cluster Compute Resiliency, is what I would call a “maturity feature”. They’re the sorts of features that make life easier … and might lead to fewer calls at 3am when a host misbehaves because the cluster is doing the remediation for you.


Quite a bit of stuff to read since my last aggregation post on the 3rd.

Windows Server


Windows Client


Office 365




This post is dedicated to the person that refuses to upgrade from Windows Server 2003. I’m not targeting service providers and those who want to upgrade but face continued resistance. But if you are part of the problem, then please feel free to be offended. Please read it before you hurt your tired fingers writing a response.

I’m not going to pussy-foot around the issue. I couldn’t give a flying f**k if your delicate little feelings are dented. You are what’s wrong in our industry and I’ll welcome your departure.

Yes. You are professionally negligent. You’ve decided to put your customers, stockholders, and bosses at legal risk because you’re lazy.

You know that support is ending on July 14th 2015 for Windows Server 2003, Windows Server 2003 R2, SBS 2003 and SBS 2003 R2, but still you plan on not upgrading. Why?

You say that it still works? Sure, and so did this:


 Photo of Windows Server 2003 administrator telling the world that they won’t upgrade

You think you’ll still get security fixes? Microsoft is STOPPING support, just like they did for XP. Were you right then? No, because you are an idiot. So you work for some government agency and you’ll reach a deal with Microsoft? On behalf of the tax payers of your state, let me thank you for being a total bollocks – we’ll be paying at least $1 million for year one of support, and that doubles each year. We’ll be landed with more debt because of your incompetent work-shy habits.

You think third parties like some yellow-pack anti-malware or some dodgy pay-per-fix third party will secure you? Let me give you my professional assessment of that premise: HAHAHAHAAHAHAHAHAH!

Maybe other vendors will continue supporting their software on W2003? That’s about as useful as a deity offering extended support for the extracted failed kidney of a donor patient. If Microsoft isn’t supporting W2003, etc, then how exactly is Honest Bob’s Backup going to support it for you? Who are they going to call when there’s a problem that they need assistance on? Are you really that naive?

Even regulators recognise that “end of support” is a terminal condition. VISA will be terminating business with anyone still using W2003 as part of the payment operation. You won’t be able to maintain PCI compliance. Insurance companies will see W2003 as a business risk that is outside the scope of the policy. And hackers will have an easy route to attack your network.

“Oh poor me – I have an LOB app that can’t be replaced and only runs on W2003”. Well; why don’t you upgrade everything else and isolate the crap out of that service? Allegedly, there is an organ rattling inside that skull of yours so you might want to shake the dust off and engage it!

I have zero sympathy for your excuses. I know some of you will protest my comments. Your excuses, not reasons, only highlight your negligence. You’ve had a decade and 4 opportunities to upgrade your server OS. You can switch to OPEX cloud systems (big 3 or local) to minimise costs. You could have up-skilled and deployed services that are included in the cost of licensing WS2012 R2 instead of spending your stockholders’ or taxpayers’ funds on 3rd party solutions. Yeah, I don’t have many good things to say to you, the objector, because, to be quite honest, there is little good to be said about you as an IT professional.

This post was written by Aidan Finn and has no association with my employers, or any other firm I have associated with. If you’re upset, then go cry in a dark room where you won’t annoy anyone else.


I will be doing a webinar with Microsoft’s Rick Claus for Altaro on 26th of March, talking about some of the features in the next version of Windows Server Hyper-V. It should be a good one!


Register here.


Here’s the latest in the Microsoft world!


System Center


Office 365



One of the biggest blockers, in my personal opinion, to Azure IaaS adoption in the SME space is understanding how to price solutions. I don’t get questions about technology, features, or trust; instead, I get questions such as “how much will this cost me?”. Microsoft does not help themselves with a very complex pricing model – please don’t try to bring up AWS – Microsoft doesn’t sell AWS so I don’t get why they are relevant!

So I’ve started producing some videos for my employers. This one focuses on pricing solutions based on Azure virtual machines.
