Thanks (I think!!!) to John at MicroWarehouse (my employer) for sticking this on the company website:


I think he even Photoshop-slimmed me :)

Here are the details of both my sessions:

The Hidden Treasures of Windows Server 2012 R2 Hyper-V

  • When: 5:00PM – 6:15PM, Tuesday, May 5th
  • Where: E451A
  • Session code: BRK3506

My first session is a 75-minute, level 300 session focusing on lesser-known features of the version of Hyper-V that you can deploy now, and it leaves you in the best position to upgrade to vNext. Don’t worry if you’ve seen my TEE14 session; this one is 50% different, with some very useful stuff that I’ve never presented on or blogged about before.

It’s one thing to hear about and see a great demo of a Hyper-V feature. But how do you put them into practice? This session takes you through some of those lesser-known elements of Hyper-V that have made for great demonstrations, introduces you to some of the lesser-known features, and shows you best practices, how to increase serviceability and uptime, and design/usage tips for making the most of your investment in Hyper-V.


End-to-End Azure Site Recovery Solutions for Small & Medium Enterprises

  • When: 12:05PM – 12:25PM, Thursday, May 7th
  • Where: EXPO: Lounge C Theater
  • Session Code: THR0903

My second session is 20 minutes on Azure DR solutions for SMEs in the community theatre. I’ve done lots of lab and proof-of-concept work with ASR in the SME space, and this presentation focuses on the stuff that no one talks about – it’s easy to replicate VMs, but what about establishing services, accessing failed-over VMs, and more?

In this session I will share some tips and lessons that I have learned from working with Azure Site Recovery services to provide a complete disaster recovery solution in Azure for Hyper-V virtual machines in a small/medium enterprise.


I’ve been really busy preparing training, delivering training, working on customer sites, and prepping my two sessions for Ignite. Here’s the roundup of recent Microsoft news for infrastructure IT pros:


Windows Server

Windows 10


Office 365




Here’s a reminder of the webinar by StarWind that I am co-presenting with Max Kolomyeytsev. We’ll be talking about offloading storage operations to a SAN using ODX for Windows Server & Hyper-V and VAAI for vSphere. It’s a great piece of functionality, and there are some things to know before using it. The session starts tomorrow at 19:00 UK/IE time, 20:00 CET, and 14:00 EST. Hopefully we’ll see you there!

Register here.


I recently co-presented a webinar by Altaro with Rick Claus (Microsoft) and Andrew Syrewicze (MVP) on what’s coming in the next version of Windows Server Hyper-V. Altaro has a recording of the webinar online. That page will be updated soon with a written Q&A from the session; we had A LOT of questions, and Altaro asked me to write out responses, which I did last Friday night. You can also download a PDF copy of the slides from the session.

Thank you to everyone that joined us. We had a great number of people tuned in – I was stunned when the folks at Altaro broke down the numbers. Hopefully, I’ll see some of you tomorrow night in the webinar I am co-presenting for StarWind on using ODX or VAAI to enhance storage performance for Hyper-V or vSphere respectively.


I already have a session called The Hidden Treasures of Windows Server 2012 R2 Hyper-V at Microsoft Ignite. And this week I found out that I was awarded a second session. This one will be a community/theatre session run at lunch time. It is called End-to-End Azure Site Recovery Solutions for Small & Medium Enterprises (session code THR0903):

In this session I will share some tips and lessons that I have learned from working with Azure Site Recovery services to provide a complete disaster recovery solution in Azure for Hyper-V virtual machines in a small/medium enterprise.

Personally, along with Azure AD, I think ASR is a hot “on ramp” feature of Azure because it extends existing investments and offers an affordable solution to an old business problem. I’ve been working with Azure for DR purposes in the SME space for a while and I’ve picked up quite a few tips from the ASR/HVR team, and I’ve learned a few things while working on customer site, all of which I aim to share in this session.

This community session is on the Session Builder now, starting at 12:05 on Thursday May 7th in Expo:Lounge C Theater. It’s a short session so I won’t cut into your lunch break too much, and this is a session that will prep you for a nice post-conference briefing with your boss & colleagues upon your return on the following Monday :)


Another thank you, this time to the folks that answered this second survey, which focused on Windows Server application servers, no matter whether they were physical or virtual, on Hyper-V or anything else.

In this survey I asked:

What percentage of your APPLICATION servers run with MinShell or Core UI? Consultants: Please answer with the most common customer scenario.

  • 0% – All of my servers have a FULL UI
  • 1-20%
  • 20-40%
  • 40-60% – Around half of my servers have MinShell or Core UI
  • 60-80%
  • 80-100% – All of my servers have MinShell or Core UI

In other words, I wanted to know what the market penetration of non-Full UI installations of Windows Server was like. I had a gut feeling, but I wanted to know for sure.

The Sample

I was worried about survey fatigue, and sure enough we had a drop from the amazing 425 responses of the previous survey. But we did have 242 responses:


Once again, we saw a great breakdown from all around the world with the USA representing 25% of the responses.

Once again I recognize that the sample is skewed. Anyone, like you, who reads a blog like this, follows influencers on social media, or regularly attends TechNet/Ignite/community IT pro events is not a regular IT pro. You are more educated and are not 100% representative of the wider audience. I suspect that more of you are using non-Full UI options (Hyper-V Server, MinShell or Core) than in the wider market.

The Results

Here we go:


So the vast majority of people are not using any installations of MinShell or Core for their application servers. Nearly 15% have a few Core or MinShell installations, and then we get into tiny percentages for the rest of the market.


We can see quite clearly that, despite the evangelizing by Microsoft, the market prefers to deploy valuable servers with a UI that allows management and troubleshooting – not to mention support by Microsoft.

Is there a regional skewing of the data? Yes, to some extent. The USA (25% of responses) has opted to deploy a Full UI slightly less than the rest of the world:


You can see the difference when we compare this to a selection of EU countries including: Great Britain, Germany, Austria, Ireland, The Netherlands, Sweden, Belgium, Denmark, Norway, Slovenia, France and Poland (53% of the survey).


FYI, the 4 responses that indicated that 80-100% of application servers were running MinShell or Core UI came from:

  • USA (2)
  • Germany (2)

My Opinion

I am slightly less hardline about Full vs Core/MinShell when it comes to application servers than I am with Hyper-V hosts. But I am not in complete agreement with the Microsoft mantra of Core, Core, Core. I know that when it comes to most LOB apps, even large enterprises have loads of those awful single- or dual-server installations that right-minded admins dislike – if that’s what devs deploy then there’s little we can do about it. And those are exactly the machines that become sacred cows.

However, in large scale-out apps where servers can be stateless, I can see the benefits of using Core/MinShell … to a limited extent. To be honest, I think Nano would be better when it eventually makes it to a non-infrastructure role.

Your Opinion

What do you think? Post your comments below.


And we’re back with a follow-up survey. The last time I asked you about your Hyper-V hosts and the results were very interesting. Now I want to know about your Windows Server application servers, be they physical, on VMware, Hyper-V, Azure, AWS, or any other platform. Note: I do not care about any hosts this time – just the application servers that are running Windows Server. Here is the survey:


As before, I’ll run the survey for a few days and then post the results.

Please share this post with colleagues and on social media so we can get a nice big sample from around the world.



Thank you to the 424 (!) people who answered the survey that I started late on Friday afternoon and finished today (Tuesday morning). I asked one question:

What kind of UI installation do you use on Hyper-V hosts?

  • The FREE Hyper-V Server 2012 R2
  • Full UI
  • MinShell
  • Core

Before I get to the results …

The Survey

Some other MVPs and I used to do a much bigger annual survey. The work required of us was massive, and the number of questions put people off. I kept this one very simple: there were no “whys” or further breakdowns of information. That led to a bigger sample size.

The Sample

We got a pretty big sample from all around the world, with results from the EU, USA and Canada, Eastern Europe, Asia, Africa, the South Pacific, and South America. That’s amazing! Thank you to everyone who helped spread the word. We got a great sample in a very short period of time.


However (there’s always one of these with surveys!), I recognize that the sample is skewed. Anyone, like you, who reads a blog like this, follows influencers on social media, or regularly attends TechNet/Ignite/community IT pro events is not a regular IT pro. You are more educated and are not 100% representative of the wider audience. I suspect that more of you are using non-Full UI options (Hyper-V Server, MinShell or Core) than in the wider market.

Also, some of you who answered this question are consultants or have more complex deployments with a mixture of installations. I asked you to submit your most common answer. So a consultant that selects X might have 15 customers with X, 5 with Y and 2 with Z.

The Results

So, here are the results:



70% of the overall sample chose the full UI for the management OS of their Hyper-V hosts. If we discount the choice of Hyper-V Server (they went that way for specific economic reasons and had no choice of UI) then the result changes.

Of those who had a choice of UI when deploying their hosts, 79% went with the Full UI, 5.5% went with MinShell, and 15% went with Server Core. These numbers aren’t much different to what we saw with W2008 R2, with the addition of MinShell taking share from Server Core. Despite everything Microsoft says, customers have chosen easier management and troubleshooting by leaving the UI on their hosts.


Is there a specific country bias? The biggest response came from the USA (111):

  • Core: 19.79%
  • MinShell: 4.17%
  • Full UI: 76.04%

In the USA, we find more people than average (but still a small minority) using Core and MinShell. Next I compared this to Great Britain, Germany, Austria, Ireland, The Netherlands, Sweden, Belgium, Denmark, Norway, Slovenia, France and Poland (not an entire European sample but a pretty large one from the top 20 responding countries, coming in at a total of 196 responses):

  • Core: 13.78%
  • MinShell: 4.08%
  • Full UI: 82.14%

It is very clear. The market has spoken and the market has said:

  • We like that we have the option to deploy Core or MinShell
  • But most of us want a Full UI

Those of you who selected Hyper-V Server did not waste your time. There are very specific and useful scenarios for this freely licensed product. And Microsoft loves to hear that their work in maintaining this SKU has value in the market. To be honest, I expect this number (10.59%) to gradually grow over time as those without Software Assurance choose to opt into new Hyper-V features without upgrading their guest OS licensing.

My Opinion

I have had one opinion on this matter since I first tried a Core install for Hyper-V during the beta of Windows Server 2008: I would only ever deploy a Full UI. If (and it’s a huge IF) I managed a HUGE cloud with HA infrastructure, then I would deploy Nano Server on vNext. But in every other scenario, I would always choose a Full UI.

The arguments for Core are:

  • Smaller installation: Who cares if it’s 6 GB or 16 GB? I can’t buy SD cards that small anymore, let alone hard disks!!!
  • Smaller attack footprint: You deserve all the bad that can happen if you read email or browse from your hosts.
  • Fewer patches: Only people who don’t work in the real world count patches. We in the real world count reboots, and there are no reductions. To be honest, this is irrelevant with Cluster Aware Updating (CAU).
  • More CPU: I’ve yet to see a host in person where CPU is over 33% average utilisation.
  • Less RAM: A few MB saved on a host with at least 64 GB of RAM (it’s rare that I see one that small anymore) isn’t going to be much of a benefit.
  • You should use PowerShell: Try using 3rd party management or troubleshooting isolated hosts with PowerShell. Even Microsoft support cannot do this.
  • Use System Center: Oh, child! You don’t get out much.
  • It stops admins from doing X: You’ve got other problems that need to be solved.
  • You can add the UI back: Whoever says this has not patched a Core install over several months and then actually tried to re-add the UI – it is not reliable.

In my experience, and that of most people, servers are not cattle; they are not pets either; no – they are sacred cows (thank you for finding a good ending to that phrase, Didier). We cannot afford to just rebuild servers when things go wrong. They do need to be rescued, and trouble needs to be fixed. Right now, the vast majority of problems I hear about are network card driver and firmware related. Try solving those with PowerShell or remote management. You need to be on the machine to solve these issues, and for that you need a full UI. The unreliable HCL for Windows Server has led to awful customer experiences with Broadcom (VMQ enabled and faulty) and Emulex NICs (taking nearly 12 months to acknowledge the VMQ issue on FCoE NICs).

Owning a host is like owning a car. Those who live in the mainstream have a better experience. Things work better. Those who try to find cheaper alternatives, dare to be different, find other sources … they’re the ones who call for roadside assistance more. I see this even in the Hyper-V MVP community … those who dare to be on the ragged edge of everything are the ones having all the issues. Those who stay a little more mainstream, even with the latest tech, are the ones who have a reliable infrastructure and can spend more time focusing on getting more value out of their systems.

Another survey will be coming soon. Please feel free to comment your opinions on the above and what you might like to see in a survey. Remember, surveys need closed answers with few options. Open questions are 100% useless in a survey.

What about Application Servers?

That’s the subject of my next survey.

Using This Data

Please feel free to use the results of the survey if:

  • You link back to this post
  • You may use 1 small quote from this post

Lots of folks that are using the Windows Server Technical Preview (from October 2014) are facing a ticking time bomb. The preview is set to expire on April 14th (tomorrow). Microsoft released a hotfix that will extend the life of the preview until the next preview is released in May.

Lots of folks have reported that this hotfix didn’t fix their issue. According to Microsoft:

  • If you are running Datacenter edition with a GUI then you need to activate the install with the key from here.
  • Sometimes you will need to run SLMGR /ato to reactivate the installation.
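In practice the fix boils down to two commands at an elevated command prompt – a quick sketch, where the product key below is a placeholder for the real key from the linked page:

```
REM Install the preview product key (placeholder - substitute the key from the linked page)
slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

REM Force reactivation against Microsoft's activation service
slmgr /ato
```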

My session now has a time slot: you can come listen to what you can do now with Hyper-V on Tuesday, May 5th, from 5:00 PM until 6:15 PM.



I have a one question survey for you:


If you are a consultant or have multiple answers then please select the most commonly deployed option. Don’t select your preferred option, but what is really used most often.

Please share this survey on Twitter, Facebook, LinkedIn, whatever, to get as big a sample as we can. You’ll see the running results after you vote.


I recently blogged on Petri.com how you can configure backup of Azure virtual machines. This is a superb addition to Azure, making it ready, in my opinion, for production VM hosting.

The Way it Was

Before the addition of this feature, there was no way to back up a running Azure virtual machine as a complete VM. There were some bad hacks:

  • Storage snapshots: You could shut down a VM and snapshot the storage account. This sucked. I’m pretty sure it wasn’t supported.
  • In-VM backup: You could deploy an agent into a VM and back up files and folders. This sucked too. Microsoft tried to push DPM sales with this, requiring one Datacenter SML for every 8 VMs.

What we needed was what we could do on-premises with Hyper-V or vSphere: a per-VM mechanism for backing up an entire VM, with the ability to quickly restore that VM complete with OS, applications, and data.

And that’s what Azure Backup for VMs gives us.

The Way it is Now

Now we can:

  • Discover VMs
  • Register VMs
  • Protect VMs with policy, with up to 1 backup per day and up to 4 weeks retention.

The backup of Azure VMs is managed from the Azure portal. You get logs as well. There is no need to install or manage anything in the guest OS. A backup extension is automatically added to the VM when you protect it.

The entire VM is backed up and can be restored. Note that in terms of pricing:

  • Each VM is an instance
  • The size of the instance is the size of the virtual disk, not the size of the contents. So a 127 GB VM with 50 GB of contents is 127 GB, falling into the 50-500 GB instance bracket. This is different to Hyper-V, where it is the physical size that is counted (including checkpoints).

If you want granular backup then you can also deploy the Azure Backup agent into the guest OS. Note that this requires another instance, and you will only be able to back up files and folders with this additional backup, which is managed from the MARS agent in the guest OS.

Note: I have talked to one of the Azure Backup PMs and he told me that there is no support for VM Generation ID. That means that you should not, ever, in any scenario, restore a virtual domain controller if there is more than one DC (the one you want to restore) in that forest.


I decided, after experimenting with Azure websites, that I wanted to retain 100% control over my website hosting. My site (WordPress) is hosted in an A2 VM and I run MySQL in the VM. This gives me the flexibility to add more sites to the VM and re-use MySQL. I don’t have any of the limitations that the ClearDB MySQL hosting has in Azure.

I configured a daily backup to run and to retain 4 weeks of data. The first backup ran last night with no issues:

I have also installed the Azure Backup agent into the guest OS. There I run a script to export MySQL to a file, and I back up this file and the IIS website folder. So in the event of a screw-up, I have the ability to restore:

  • Individual website files
  • The MySQL databases
  • The entire VM
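For what it’s worth, the MySQL export step can be as simple as a scheduled mysqldump command – a rough sketch, where the account, password, and path are placeholders rather than my actual setup:

```
REM Dump all MySQL databases to a single file that the file/folder backup then protects
mysqldump --user=backupuser --password=NotMyRealPassword --all-databases > C:\Backups\mysql-export.sql
```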

It’s a simple enough operation (PowerShell) to move a virtual machine to a different subnet within a virtual network. But what if you want to move the virtual machine to a different virtual network? That’s a bit more complex to do because you cannot just lift the VM to another network.

Instead you will have to:

1: Delete the virtual machine, choosing to keep the attached disks. Doesn’t Azure Backup of Azure Virtual Machines sound like a good idea right now? Go do that first :)


2: Create a new virtual machine from the existing disks. The above deletion process keeps the original disks and deletes the metadata of the VM. You are now going to create new metadata using the remaining disks. This is like moving the hard drives from one broken server to a replacement server. Go into the wizard and, instead of selecting a template, choose your old disk. Make sure you know which one it is first – the name of the old VM (FS1 in this case) is usually in the file name.


3: Complete the wizard and select the new virtual network. If this is a new network/application then you probably will have to create a new cloud service too.

4: Attach any data disks. If the old VM had any data disks then they’ll need to be reattached. Shut down the VM and attach the disks.
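If you’d rather script the whole move, it looks roughly like this with the (classic) Azure PowerShell cmdlets – treat this as a sketch, because the cloud service, VM, disk, and network names are placeholders:

```powershell
# 1: Delete the VM; Remove-AzureVM deletes the metadata but leaves the disks behind
Remove-AzureVM -ServiceName "OldCloudService" -Name "FS1"

# 2: Build new VM metadata from the surviving OS disk
$vm = New-AzureVMConfig -Name "FS1" -InstanceSize Small -DiskName "FS1-OSDisk"

# 3: Create the VM in the target virtual network (a new cloud service if needed)
New-AzureVM -ServiceName "NewCloudService" -VMs $vm -VNetName "NewVNet"

# 4: Reattach any data disks and push the update
Get-AzureVM -ServiceName "NewCloudService" -Name "FS1" |
    Add-AzureDataDisk -Import -DiskName "FS1-DataDisk" -LUN 0 |
    Update-AzureVM
```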



I use the Azure “Ibiza” and management portals for most of my Azure admin, but there are times when PowerShell makes more sense:

  • The feature is only available via PowerShell
  • I need to do a lot and don’t want to be doing progress bar admin

Today, I had an issue where no matter what cmdlet I ran, I got this error:

Your Azure credentials have not been set up or have expired

Very annoying. Some googling led me to a solution:

  1. Remove-AzureAccount -Name <my UPN>
  2. Add-AzureAccount

I logged back into Azure via that last cmdlet and everything was fixed.


Microsoft made two significant announcements yesterday, further innovating their platform for cloud deployments.

Hyper-V Containers

Last year Microsoft announced a partnership with Docker, a leader in application containerization. The concept is similar to Server App-V, the now deprecated service virtualization solution from Microsoft. Instead of having 1 OS per app, containers allow you to deploy multiple applications per OS. The OS is shared, and sets of binaries and libraries are shared between similar/common apps.

Hypervisor versus application containers

These containers can be deployed on a physical machine’s OS or within the guest OS of a virtual machine. Right now, you can deploy Docker app containers onto Ubuntu VMs in Azure, and they can be managed from Windows.

Why would you do this? Because app containers are FAST to deploy. Mark Russinovich demonstrated a WordPress install being deployed in a second at TechEd last year. That’s incredible! How long does it take you to deploy a VM? File copies are quick enough, especially over SMB 3.0 with SMB Direct and Multichannel, but the OS specialisation and updates take quite a while, even with enhancements. And Azure is actually quite slow, compared to a modern Hyper-V install, at deploying VMs.

Microsoft use the phrase “at the speed of business” when discussing containers. They want devs and devops to be able to deploy applications quickly, without the need to wait for an OS. And it doesn’t hurt, either, that there are fewer OSs to manage, patch, and break.

Microsoft also announced, as part of their partnership with Docker, that Windows Server vNext would offer Windows Server Containers. This is a form of app container that is native to Windows Server, all manageable via the Microsoft and Docker open source stack.

But there is a problem with containers; they share a common OS, and sets of libraries and binaries. Anyone who understands virtualization will know that this creates a vulnerability gateway … a means to a “breakout”. If one application container is successfully compromised then the OS is vulnerable. And that is a nice foothold for any attacker, especially when you are talking about publicly facing containers, such as those that might be in a public cloud.

And this is why Microsoft has offered a second container option in Windows Server vNext, based on the security boundaries of their hypervisor, Hyper-V.

Windows Server vNext offers Windows Containers and Hyper-V Containers

Hyper-V provides secure isolation for running each container, using the security of the hypervisor to create a boundary between each container. How this is accomplished has not been discussed publicly yet. We do know that Hyper-V containers will share the same management as Windows Server containers and that applications will be compatible with both.

Nano Server

It’s been a little while since a Microsoft employee leaked some details of Nano Server. There was a lot of speculation about Nano, most of which was wrong. Nano is a result of Microsoft’s, and their customers’, experiences in cloud computing:

  • Infrastructure and compute
  • Application hosting

Customers in these true cloud scenarios need a smaller operating system, and this is what Nano gives them. The OS goes beyond Server Core. It’s not just Windows without the UI; it is Windows without the I (interface). There is no logon prompt and no remote desktop. This is a headless server installation option that requires remote management via:

  • WMI
  • PowerShell
  • Desired State Configuration (DSC) – you deploy the OS and it configures itself from a template you host
  • RSAT (probably)
  • System Center (probably)

Microsoft also removed:

  • 32-bit support (WOW64), so Nano will run only 64-bit code
  • MSI meaning that you need a new way to deploy applications … hmm … where did we hear about that very recently *cough*
  • A number of default Server Core components

Nano is a stripped-down OS, truly incapable of doing anything until you add the functionality.

The intended scenarios for Nano usage are in the cloud:

  • Hyper-V compute and storage (Scale-Out File Server)
  • “Born-in-the-cloud” applications, such as Windows Server containers and Hyper-V containers

In theory, a stripped down OS should speed up deployment, make install footprints smaller (we need non-OEM SD card installation support, Microsoft), reduce reboot times, reduce patching (pointless if I reboot just once per month), and reduce the number of bugs and zero day vulnerabilities.

Nano Server sounds exciting, right? But is it another Server Core? Core was exciting back in W2008. A lot of us tried it, and today, Core is used in a teeny tiny number of installs, despite some folks in Redmond thinking that (a) it’s the best install type and (b) it’s what customers are doing. They were and still are wrong. Core was a failure because:

  • Admins were not prepared to use it
  • There is a need for on-console access

We have the ability to add/remove a UI in WS2012, but that system breaks once you have applied all of your updates. Not good.

As for troubleshooting, Microsoft says to treat your servers like cattle, not like pets. Hah! How many of you have all your applications running across dozens of load balanced servers? Even big enterprise deploys applications the same way as an SME: on one to a handful of valuable machines that cannot be lost. How can you really troubleshoot headless machines that are having networking issues?

On the compute/storage stack, almost every issue I see on Windows Server and Hyper-V is related to failures in certified drivers and firmware, e.g. Emulex VMQ. Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I were to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.


In my opinion, the entry of containers into Windows Server and Hyper-V is a huge deal for larger customers and cloud service providers. This is true innovation. As for Nano, I can see the potential for cloud-scale deployments, but I cannot trust the troubleshooting-incapable installation option until Microsoft gives the OEMs a serious beating around the head and turns off hardware offloads by default.


I would like to welcome a new sponsor on my site: 3CX (@3cx).

Who are 3CX?

3CX VoIP Phone System for Windows is an IP PBX / SIP proxy that completely replaces a traditional proprietary phone system. It uses standard SIP software or hardware phones, supports VoIP providers / SIP Trunks & phone lines and offers numerous benefits over a traditional PBX. The commercial editions offer enterprise grade support as well as a number of business features. A FREE edition is available. A demo license key allowing you to try all commercial features for two simultaneous lines will be sent to your email address.



There is also a nice list of awards on their banner page. And yes, there is support for Hyper-V :)

Please pay 3CX a visit to check out what they might be able to do for you.


There’s a lot of stuff happening now. The Windows Server vNext Preview expires on April 15th and Microsoft is promising a fix … the next preview isn’t out until May (maybe when Ignite is on?). There are rumours of Windows vNext vNext. And there’s talk of open sourcing Windows – which I would hate. Here’s the rest of what’s going on:


Windows Server

Windows Client



On April 21st at 2 PM ET (USA) / 7 PM UK/IE, I will be co-hosting a StarWind Software webinar with Max Kolomyeytsev. I will be talking about using ODX in a Hyper-V scenario, and Max will talk about the equivalent (VAAI) from the vSphere perspective.


Register here.


Microsoft has posted my Windows Server 2012 R2 Hyper-V session on the Microsoft Ignite schedule builder.


Note that it should read “Windows Server 2012 R2”.

Currently, the day/time is January 1st at 12am. Yup, there will be fireworks and some auld lang syne. Please ignore the day/time and add the session to your builder if you are interested in the content. Hopefully a day/time will be fixed soon.

My session is on Tuesday May 5th at 5:00 pm – 6:15 pm.


There is a term that I’ve heard for a while when talking to Microsoft program managers, and it has started to be used publicly by Microsoft staff. I read it on a post by Ben Armstrong:

If you are already on 10049 and have not yet enabled Hyper-V, you can either follow the above steps, or hang tight while we work on the next flight!

Rick Claus also used the term in the latest episode of the Ignite Countdown show.

Like all cloud services, in case you don’t know, this is what we’re doing with regards to flighting new things into it.

Microsoft’s Gabe Aul also explained the term in a Blogging Windows post on March 18th:

… we’ll have some weeks where we expect builds to flow out (we call them “flighting windows”) and some where we’ll hold back

And the term was also used by Aul when he explained the frequency of builds for Windows Insiders:

… we’d have a candidate build, and we’d flight that out broadly within MS to make sure we could find any gotchas …

So what are they talking about? You’ve probably heard that Windows 10, when it RTMs, isn’t “finished”. In fact, it’ll probably never be a finished product in the view of Microsoft until they release Windows 11 (if there is one). Microsoft will be updating this OS on a regular basis, adding new functionality. I know we’ve heard that sort of thing before, but it’s real this time. Windows Insiders are seeing it now, and the reality is that Microsoft’s development process was changed quite a bit after Windows 8.1 to make this possible. We know from TEE14 that the same happened to Windows Server to make it work more seamlessly with Azure.

This approach is taken from cloud computing and lightweight phone/tablet OSs:

  • You release a block of code that is developed and tested to a stable point.
  • There is a stack rank of additional features and changes that you wanted to implement but didn’t have the person-hours to complete.
  • You get feedback, and that modifies the stack rank.
  • The market changes, and more features are added to the stack rank.
  • You code/test some new stuff over a short period and release it.

This release is a flight, and the process is flighting. It’s just another way of saying “release”. I guess “release” in a dev’s mind is a big irregular event, whereas a flight is something that happens on a regular basis.

In the Microsoft world, we see flights all the time with Azure, and quite frequently with SaaS such as O365 and Intune. Windows is moving this way too. The result is that you get regular improvements to the product instead of a big release every 1, 2, or 3 years, and Microsoft can be more responsive to feedback and change. Consumers will love this. Businesses will get control over the updates but, as we saw with the April 2014 update (AKA “Update 1”) that came into force in August 2014, I suspect there will be a support baseline update every now and then to ease the difficulty of supporting Windows for Microsoft.


It’s April Fool’s Day, and the new pricing system for Azure Backup comes into force today. Make of that what you want :D

I am not a fan of the new pricing system. I am all for costs coming down, but I can say from 8 months of selling Azure that complex pricing BLOCKS sales efforts by Microsoft partners. The new system isn’t just “price per GB”; it also includes the abstract notion of an “instance”. A new blog post by Microsoft attempts to explain clearly what an instance is.
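To see why “per GB plus per instance” is harder to quote than a flat per-GB rate, here’s a hypothetical Python sketch of that kind of pricing model. The tier boundaries and rates below are invented for illustration and are not Microsoft’s actual Azure Backup prices.

```python
# Hypothetical sketch of "per instance + per GB" pricing.
# All tier boundaries and rates here are made up for illustration only.

def monthly_backup_cost(instances_gb, rate_per_gb=0.20):
    """instances_gb: list of protected-data sizes (GB), one per instance."""
    total = 0.0
    for size in instances_gb:
        # Tiered flat fee per protected instance (illustrative numbers).
        if size <= 50:
            instance_fee = 5.0
        elif size <= 500:
            instance_fee = 10.0
        else:
            # Large instances billed as multiple 500 GB increments.
            instance_fee = 10.0 * -(-size // 500)  # ceiling division
        total += instance_fee + size * rate_per_gb
    return total

# Two small servers and one large one:
print(monthly_backup_cost([40, 200, 700]))
```

Even in this toy version, quoting a customer requires knowing how many instances they have and which size band each falls into, which is exactly the kind of pre-sales homework that a flat per-GB rate avoids.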

I’ve read it. I think I understand it. I also know that no MSFT partner salesperson will read it, that our customers will call me, and that when I explain it to them, a sale will not happen. I’ve seen that trend with Azure too often (on all but a handful of occasions) to believe it’s a once-off.

Anyway … enjoy the post by Microsoft.


The details of my session have been confirmed. The session is called “The Hidden Treasures of Windows Server 2012 R2 Hyper-V”, and the description is:

It’s one thing to hear about and see a great demo of a Hyper-V feature. But how do you put them into practice? This session takes you through some of those lesser-known elements of Hyper-V that have made for great demonstrations, introduces you to some of the lesser-known features, and shows you best practices, how to increase serviceability & uptime, and design/usage tips for making the most of your investment in Hyper-V.

Basically, there’s lots of stuff in Hyper-V that many folks don’t know exists. These features can make administration easier, reduce the time to get things done, and even give you more time at home. These are the hidden treasures of Hyper-V, and are there for everyone from the small biz to the large enterprise.

I went with WS2012 R2 because:

  • That’s the Hyper-V that you can use in production now.
  • We’re a long way from the release of vNext.
  • There’s lots of value there that most aren’t aware of.
  • Plenty of excellent MSFT folks will be talking about vNext.

The session isn’t on the catalogue yet but I expect it to be there soon.


Welcome to the Azure Times! Or so it seems. Lots of Azure developments since I posted one of these news aggregations.

Anyone working in “cloud computing” in Ireland has heard that the Irish government launched a process to deploy a “private cloud” that would be engineered by external service providers, but owned by and located in the Irish state. It sounded like the project from hell/heaven, complete with a list of pre-approved cloud vendors/services.

The Irish Times reports that this project has been cancelled, and instead, they’re going with a shared computing model based on a single Government-owned cloud.

In my opinion, this is the way forward. Now I wonder if Microsoft will pitch CPS at this :)


I’ve voted on a number of feedback items in Azure, mainly in backup, and I’m delighted to see that feedback having an impact.

I was presenting on Azure to partners in Northern Ireland last month when I was able to talk about an email I had received that morning, announcing new features (seeding backup by disk, increased retention, and complex retention policies) that were based on feedback.

Today, I got an email confirming that another voted item, the ability to back up running VMs in Azure using Azure Backup, had been announced – I’m actually playing with it right now.


Feedback via this forum works. It is public and measured, and it’s much more effective than complaining to your local Microsoft reps (some of whom are less effective than others). So give Microsoft the feedback! Don’t just say “I want X”. Instead, say “I want X because it will allow Y and Z”; a full scenario description is what the program managers need to understand the request.

My tip: partners working with Open licensing need a centralized admin portal.
