My Take On Windows Nano Server & Hyper-V Containers

Microsoft made two significant announcements yesterday that further advance their platform for cloud deployments.

Hyper-V Containers

Last year Microsoft announced a partnership with Docker, a leader in application containerization. The concept is similar to Server App-V, the now-deprecated application virtualization solution from Microsoft. Instead of having one OS per app, containers allow you to deploy multiple applications per OS. The OS is shared, and sets of binaries and libraries are shared between similar/common apps.

Hypervisor versus application containers

These containers can be deployed on a physical machine's OS or within the guest OS of a virtual machine. Right now, you can deploy Docker app containers onto Ubuntu VMs in Azure, managed from Windows.

Why would you do this? Because app containers are FAST to deploy. Mark Russinovich demonstrated a WordPress install being deployed in a second at TechEd last year. That’s incredible! How long does it take you to deploy a VM? File copies are quick enough, especially over SMB 3.0 with SMB Direct and Multichannel, but the OS specialisation and updates take quite a while, even with enhancements. And compared to a modern Hyper-V install, Azure is actually quite slow at deploying VMs.
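
For context, here is roughly what that kind of deployment looks like with the standard Docker client (the container names, port, and password below are my own examples; wordpress and mysql are the public images on Docker Hub):

    # Pull the images once; this is the slow part, and it is cached:
    docker pull mysql
    docker pull wordpress

    # Starting containers from the cached images takes about a second each,
    # because Docker only creates a thin writable layer - no OS install,
    # no specialisation, no patching:
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=example mysql
    docker run -d --name blog --link db:mysql -p 8080:80 wordpress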

Microsoft use the phrase “at the speed of business” when discussing containers. They want devs and devops to be able to deploy applications quickly, without the need to wait for an OS. And it doesn’t hurt, either, that there are fewer OSs to manage, patch, and break.

Microsoft also announced, as part of the Docker partnership, that Windows Server vNext would offer Windows Server Containers: app containers native to Windows Server, all manageable via the Microsoft and Docker open-source stack.

But there is a problem with containers: they share a common OS and sets of libraries and binaries. Anyone who understands virtualization will know that this creates a vulnerability gateway … a means to a “breakout”. If one application container is successfully compromised, then the OS is vulnerable. And that is a nice foothold for any attacker, especially when you are talking about publicly facing containers, such as those that might be in a public cloud.

And this is why Microsoft has offered a second container option in Windows Server vNext, based on the security boundaries of their hypervisor, Hyper-V.

Windows Server vNext offers Windows Containers and Hyper-V Containers

Hyper-V provides secure isolation for running each container, using the security of the hypervisor to create a boundary between each container. How this is accomplished has not been discussed publicly yet. We do know that Hyper-V containers will share the same management as Windows Server containers and that applications will be compatible with both.

Nano Server

It’s been a little while since a Microsoft employee leaked some details of Nano Server. There was a lot of speculation about Nano, most of which was wrong. Nano is a result of Microsoft’s, and their customers’, experiences in cloud computing:

  • Infrastructure and compute
  • Application hosting

Customers in these true cloud scenarios need a smaller operating system, and that is what Nano gives them. The OS goes beyond Server Core. It’s not just Windows without the UI; it is Windows without the I (interface). There is no logon prompt and no remote desktop. This is a headless server installation option that requires remote management via the following (a rough sketch of that management experience follows the list):

  • WMI
  • PowerShell
  • Desired State Configuration (DSC) – you deploy the OS and it configures itself from a template you host
  • RSAT (probably)
  • System Center (probably)
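
To make that concrete, here is a rough sketch of day-to-day management of a headless box. The computer names (nano01, nano02) are invented for illustration, and exactly which cmdlets Nano will support is my assumption based on what Microsoft has announced so far:

    # There is no console, so interactive work happens over PowerShell remoting:
    Enter-PSSession -ComputerName nano01

    # Or fan a command out to many headless servers at once:
    Invoke-Command -ComputerName nano01, nano02 -ScriptBlock {
        Get-Service | Where-Object Status -eq 'Stopped'
    }

    # With DSC, you declare the desired state once and the node keeps
    # itself configured - e.g. ensuring a service stays running:
    Configuration NanoBaseline {
        Node 'nano01' {
            Service WinRM {
                Name  = 'WinRM'
                State = 'Running'
            }
        }
    }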

Microsoft also removed:

  • 32-bit support (WOW64), so Nano will run only 64-bit code
  • MSI, meaning that you need a new way to deploy applications … hmm … where did we hear about that very recently *cough*
  • A number of default Server Core components

Nano is a stripped-down OS, truly incapable of doing anything until you add the functionality you need.

The intended scenarios for Nano usage are in the cloud:

  • Hyper-V compute and storage (Scale-Out File Server)
  • “Born-in-the-cloud” applications, such as Windows Server containers and Hyper-V containers

In theory, a stripped-down OS should speed up deployment, make install footprints smaller (we need non-OEM SD card installation support, Microsoft), reduce reboot times, reduce patching (a benefit that means little if, like me, you reboot just once per month anyway), and reduce the number of bugs and zero-day vulnerabilities.

Nano Server sounds exciting, right? But is it another Server Core? Core was exciting back in W2008. A lot of us tried it, and today, Core is used in a teeny tiny number of installs, despite some folks in Redmond thinking that (a) it’s the best install type and (b) it’s what customers are doing. They were and still are wrong. Core was a failure because:

  • Admins were not prepared to use it
  • Admins need on-console access

We have the ability to add/remove a UI in WS2012, but that conversion breaks once you have applied all your updates. Not good.
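
For anyone who hasn’t tried the conversion, it is just a feature operation; the failure people hit is the add-back step after patching, because the restored binaries must match the updated OS. A minimal sketch (the install.wim path and image index are examples):

    # Remove the GUI layers, dropping to Server Core:
    Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

    # Add the GUI back later - this is the step that can fail on a
    # patched system; pointing -Source at matching media can help:
    Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra `
        -Source wim:D:\sources\install.wim:4 -Restart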

As for troubleshooting, Microsoft says to treat your servers like cattle, not like pets. Hah! How many of you have all your applications running across dozens of load-balanced servers? Even big enterprises deploy applications the same way as an SME: on one to a handful of valuable machines that cannot be lost. How can you really troubleshoot headless machines that are having networking issues?

On the compute/storage stack, almost every issue I see on Windows Server and Hyper-V is related to failures in certified drivers and firmware, e.g. Emulex VMQ. Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I were to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.
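
Checking and disabling an offload like VMQ is trivial today, which is exactly why it should ship off by default until proven. A quick sketch (the adapter name is an example):

    # See which adapters have VMQ enabled:
    Get-NetAdapterVmq

    # Disable it on a suspect NIC (e.g. the Emulex adapters mentioned
    # above) until the driver/firmware combination has been proven:
    Disable-NetAdapterVmq -Name 'Ethernet 1'

    # The same verify-then-trust approach applies to other offloads:
    Get-NetAdapterRss
    Disable-NetAdapterRss -Name 'Ethernet 1'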

Summary

In my opinion, the entry of containers into Windows Server and Hyper-V is a huge deal for larger customers and cloud service providers. This is true innovation. As for Nano, I can see the potential for cloud-scale deployments, but I cannot trust the troubleshooting-incapable installation option until Microsoft gives the OEMs a serious beating around the head and turns off hardware offloads by default.

13 thoughts on “My Take On Windows Nano Server & Hyper-V Containers”

  1. Aidan,
    Can you expound on this statement more? My shop hasn’t tried pulling the UI yet, and I’d like to better understand what you mean.

    “We have the ability add/remove a UI in WS2012 but that system is broken when you do all your updates. Not good.”

    1. There is an issue when going from Core back to Full after patching has been done on WS2012. I have not seen it personally because I NEVER deploy Core, and I tell my customers to never deploy Core.

  2. Great post! I am actually a bit surprised by the statements on how enterprises are not using Core (much) or load-balancing, and still have those (love/hate) “valuable servers”. As always, very refreshing to get a real-world take on the hype. Thanks!

  3. I have a question: at this moment you can’t put a .NET or any other Windows app into a Docker container?

  4. Great post Aidan. I was quite excited about the initial buzz around Nano (and to some degree still am), but you’ve grounded me. I agree, most of the issues where you need to get knee-deep and dirty require console access. That said, Nano does not strike me as a deployment solution for a small to mid-size datacentre scenario. It’s going to be more focused on larger scale-out cloud environments that have proper Dev/QA/Prod processes in place.

    I would not be surprised to see them bring a lightweight GUI to Nano, similar to how VMware brought back SSH to ESXi early in its development iterations. Two things will almost always be necessary during the life cycle of a server: remote text access and local console access.

    The lightweight nature of Nano is _definitely_ appealing, much more so than Core, but it will need a GUI (similar to that on ESXi?) for real production-grade use.

  5. Aidan,

    While I agree with a lot of your posts and thought processes, I have to disagree with your anti-Core opinion. Yes, there is a learning curve for using Core, and it is not a fit for all use cases. We have been using it primarily for Hyper-V hosts and SQL hosts, and as far as I’m aware, we have not run into any issues.

    As for swapping back between Core and GUI, I’ve never tried it, so I can’t speak to it. I generally just go with Core and leave it at Core.

    I am anxiously waiting for Nano Server to reach a functional state so that we can begin testing in house. I still haven’t decided if our Hyper-V hosts will continue to be Core or move to Nano in the future; only testing will tell. I do agree that the lack of ability to troubleshoot a Nano install locally is troublesome.

  6. Aidan, I suspect the reason you cannot remove or add the GUI back properly once you have performed updates is that you have not configured Group Policy so that Features on Demand can use Windows Update. By default, Windows does not use Windows Update for this, so FOD cannot grab the components you’re lacking due to updates, because it’s not allowed to talk to Windows Update.

    1. It’s not impacted me at all because I do not use Core. But people I trust, who are REALLY good, have seen this issue, and have tried to work through it with MSFT with no joy.

  7. Back when I seriously considered Core, I ran into this issue. The resolution I saw involved determining all the updates required and installing them. Extremely time-consuming. In our enterprise, full GUI is 99%, with the other 1% being Minimal GUI.

  8. It seems that despite the feedback, MSFT still doesn’t get it. Witness: the “official” RSAT-equivalent tool pack for Nano only runs on an Azure instance.

    So, for those of us who, primarily for security reasons, own our own cloud and are completely self-contained, the only response is, “Nope.”

    Sad. I think the Hyper-V Nano is particularly promising, but we are not allowed to use any outside-hosted services so this just ain’t happening.

    Thanks again for the excellent post!
