I really enjoyed presenting today on the next version of Hyper-V with Rick Claus (Microsoft) and Andrew Syrewicze (Hyper-V MVP). We had some tech glitches at the start and during the session, which always makes a session memorable :)

We ran out of time at the end. Andy was the moderator but his ISP crapped out, so we didn’t get a chance to do Q&A properly.

If you have any questions then please either hit us on Twitter or post a comment below.

Thank you to Altaro for hosting this webinar! Make sure to check out their excellent backup products, which also feature a free version.


Nothing will make a Hyper-V admin bald faster than storage issues. Whether it’s ODX on HP 3PAR or networking issues caused by Emulex, even if the blip is transient, it will crash your VMs. This all changes in vNext.

The next version of Hyper-V is more tolerant of storage issues. A VM will enter a paused state when the hypervisor detects an underlying storage issue. This protects the VM from an unnecessary stoppage in the case of a transient issue. If the storage goes offline for just a few seconds, then the VM pauses for just those few seconds, and there are no stoppages, reboots, or database repairs.


I was forwarded an email today from a VMware distributor that informs VMware authorised partners that their prices are going up.

No customer buys software directly from the big software vendors. Typically the path is either:

  • Manufacturer > Distributor > Reseller > Customer
  • Manufacturer > Large account reseller > Large customer

Each link in the chain (or channel) makes a small percentage. There is a “price list” at the top of the chain, but that is often discounted. Discounts are applied to large deals, and that discount can vary depending on sales targets for the product, what is included in the deal (adding more can sometimes reduce the original price), the time in the sales cycle and the size of the deal. In the case of VMware, few ever pay the prices listed on their website.

This is the email sent out to VMware authorised partners:


VMware are reducing those discounts, giving VMware more earnings and reducing the profitability of VMware software to partners.

Do note that any reseller with a business plan to make a profit from licensing needs to sell A LOT of licenses. Real profits for resellers come from services, not from s/w or tin.


Dynamic Memory was added in W2008 R2 SP1 to allow Hyper-V to manage the assignment of memory to enabled virtual machines based on the guest OS’s demands. Today in WS2012 R2, a VM boots up with a start-up amount of RAM; it can grow, based on in-guest pressure and host availability, up to the maximum amount allowed by the host administrator, and shrink to a minimum amount.

But what if we need more flexibility? Not all workloads are suitable for Dynamic Memory. And in my estimation, only about half of those I encounter are using the feature.

The next version of Hyper-V includes hot memory resizing. This allows you to add and remove memory from a running virtual machine. The operation is done using normal add/remove administration tools. Some notes:

  • At this time you need a vNext guest OS
  • You cannot add more memory than is available on the host
  • Hyper-V cannot remove memory that is being used – you are warned about this if you try, and any free memory will be de-allocated.

This new feature will save a lot of downtime for virtualised services when administrators/operators need to change the memory for a production VM. I also wonder if it might lead to a new way of implementing Dynamic Memory.


Richard Campbell, the host of the RunAs Radio podcast, saw a tweet from me talking about Azure AD, and thought he should ask me back on to have a chat. I had been working on developing training materials on RemoteApp. The use-case that caught my interest was the ability to use RemoteApp to remove the effect of latency. We talk for half an hour about this.


The “design” I talk about in the podcast (recorded a few weeks ago) works, and I’ve presented using it. I’ve written some posts on Petri.com about my experiences:

In the design, virtual DCs, file server, and application servers run as VMs in an Azure network. RemoteApp publishes applications on another network. A VNET to VNET VPN connects the server and RA networks, enabling the RA session hosts to join the domain. Users log into RemoteApp, and then it’s all normal RDS at that point:

  • GPO applies
  • Login script runs
  • Published applications have fast access to application servers
  • Users save data in the company’s Azure VMs

It’s a nice solution!


A Career Of T-Shirts

I was doing a major clean-out of the darker recesses of our house recently and found many nerd-shirts, most of which were thrown out. They brought back a lot of memories.


My first job out of college had me working as a UNIX developer … can you believe that!?!?! The project ended, and some of my Linux-to-Windows porting work led to me being transferred to our budding Microsoft consulting team. And it was there that I got training and certification from Citrix on WinFrame (now XenApp). That was the start of my journey to here.


I spent most of my early days working with my employer’s brand of Intel servers (fridge-sized machines with 12 x 9 GB SCSI drives) and our storage system in the lab, setting up proof of concepts and demo labs.


I left there after 4 years to spread my wings. That was the start of my many years of working with HP hardware.


I lost my job a week after 9/11. The consulting company’s directors decided to re-launch as a “dot com”. I realized that my skills had not been developed and I struggled to find work. I was unemployed, and spent just about every waking hour getting a W2000 MCSE. A few weeks after that, I was employed again.


I -hated- that job. Actually, hate might not be a strong enough word. I was doing field engineering. It was an awful experience. And I moved on a few months later.


After some time contracting I got a job working for a German (but Irish headquartered) finance company, merging 9 international offices and upgrading them from Windows NT 4.0/Office 97 to Windows XP with a W2003 forest. I -loved- that job; the responsibility of designing and “owning” the global infrastructure (eventually 17 locations) was a rush. This was when I started working with virtualisation in work (Virtual Server and Virtual PC) and with pre-System Center (SMS and MOM) products, and was the start of my path to here.


Unfortunately, the directors (who ended up being chased by German prosecutors) decided to move IT to Stuttgart, while making us redundant. That ended up backfiring big-time – we were state of the art and the German consultants both hadn’t a clue and were extremely expensive. I wore the above t-shirt for my exit interview. There’s a story behind that which I won’t tell, but the German HR executive looked like she had shat herself when I walked into the room :)


Oh yes, not only did I start off by programming on UNIX, but I worked happily on VMware before I switched to Hyper-V.


The work that I was doing at the time led to me being of interest to the local Microsoft office. I started to get involved in the local community, I had been blogging for a couple of years, and I was asked to present at the Irish launch of W2008.


My community work grew and eventually led to me being awarded MVP status in … SCCM :) A year later, I was switched to the Hyper-V expertise, reflecting what I was then working with and writing about.


Books were written, travel to events was done, and I staffed a booth at TechEd in Berlin. I wore one of the blue “plastic” Microsoft shirts. Just 5 minutes in one of these and you had black armpits all the way down to your waist.


A couple of years ago I signed up with the Petri IT Knowledgebase to write about Microsoft virtualization and then my role expanded to write op-eds and other article types on other things. It’s been fun to branch out a little, and reach a bigger audience. Now the site has added Paul Thurrott (under his own URL) and more staff to cover other areas, and that audience is growing.


Last year was a big one, career-wise. I recommended some new brands to distribute. One of those was DataON, and that has become quite a business for us, not just in Ireland, but across Europe. I took part in the TechEd North America Speaker Idol … and won a speaking slot at Ignite this year in Chicago. And I was awarded a speaking role at TechEd Europe, got one of the larger rooms, and was rated the 6th most effective speaker overall! And then I popped the question and got engaged at Christmas, finishing 2014 on a real high.


Have you ever built a Hyper-V virtual machine with 2 or more NICs, each on different networks, and struggled with assigning IP stacks in the guest OS? I sure have. When I was writing materials with virtual SOFS clusters, I often had 3-6 NICs per VM, each requiring an IPv4 stack for a different network. I needed to ensure that VMs were aligned and able to talk on those networks.

With modern physical hardware, we get a feature called Consistent Device Naming (CDN). This allows the BIOS to name the NIC and that’s how the NIC appears in the physical WS2012 (or later) install. Instead of the random “Local Area Connection” or “Ethernet” name, you get a predictable “Slot 1 1” or similar, based on the physical build of the server.

With Windows Server vNext, we are getting something similar, but not identical. This is not vCDN as some commentators have called it because it does require some work in the guest OS to enable the feature. Here’s how it works (all via PowerShell):

  1. You create a vNIC for a VM, and label that vNIC in the VM settings (you can actually do that now on WS2012 or later, as readers of WS2012 Hyper-V Installation and Configuration Guide might know!).
  2. Run a single cmdlet in the guest OS to instruct Windows Server vNext to name the connection after the adapter.

Armed with this feature, the days of disconnecting virtual NICs in Hyper-V Manager to rename them in the guest OS are numbered. Thankfully!


I am happy to say that I will be speaking at Microsoft Ignite, running in Chicago from the 4th until the 8th of May, 2015.

I am not 100% sure yet, but it looks like I’ll be presenting the same session as I did in Barcelona. I’ll have to change it up a little :) And I do not know the day/time/room yet.

I’m looking forward to seeing Chicago. I’ve been through O’Hare a lot, only getting as far as the inter-terminal shuttle. I might even try a deep dish pizza, even if Jon Stewart thinks that it’s the work of the devil.

Hopefully I’ll see you at my session,




Microsoft has corrected and changed the description of the new pricing for Azure Online Backup that comes into effect on April 1st. This is after the owners of the website royally screwed the pooch in February with a confusing and incorrect posting.

NOTE: I am redacting this post because no one is able to explain what a “protected instance” is. Until then, while Azure Backup is great technically, and could be cheap, I have no idea how much it will cost.

As with all posts regarding licensing or pricing on this site, I will not be answering questions. Ask your reseller, distributor or LSP – they’re the people you are paying so they are the people who can do the work.

With the new pricing you no longer pay (using North Europe pricing in Euros) €0.149 per GB stored in Azure per month. Instead the pricing is broken into 2 pieces:


Think of an instance as the block of data that has to be protected.

The charge per instance depends on the size of the protected instance. Sigh! I do not know if this is based on the data protected or the total amount of disk space in the instance. Another sigh!

  • Less than 50 GB: €3.7235 per instance
  • 50 GB to 500 GB: €7.447 per instance
  • Larger than 500 GB: Multiples of the 50-500 GB charge


Azure Online Backup will use Block Blob Storage. You can use either LRS (3 copies in 1 data center) or GRS (3 copies in 1 data center, and 3 async copies in another region) at a higher cost.



The end result is that for most customers, the pricing will come way down.

1 File Server with 30 GB on LRS Storage

  • 1 instance: €3.7235
  • Storage: 30 GB * €0.0179 (LRS) = €0.54

Total = €4.26

4 Machines with 80 GB each on LRS Storage

  • 4 instances: €7.447 * 4 = €29.788
  • Storage: 80 GB * 4 * €0.0179 (LRS) = €5.73

Total = €35.52

1 Machine with 1400 GB on GRS Storage

  • 3 instances (3 * 50-500 GB): €7.447 * 3 = €22.341
  • Storage: 1400 GB * €0.0358 (GRS) = €50.12

Total = €72.46
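To sanity-check these figures, here is a minimal Python sketch of the tiered model described above. Note that the metering rules are still unclear, so this assumes the per-instance charge is based on protected data, and the function names are mine:

```python
import math

# North Europe storage rates quoted above (EUR per GB/month).
LRS_RATE = 0.0179   # locally redundant: 3 copies in 1 data centre
GRS_RATE = 0.0358   # geo-redundant: 3 local copies + 3 async copies elsewhere

def instance_charge(size_gb: float) -> float:
    """Per-instance monthly charge, tiered by protected size."""
    if size_gb < 50:
        return 3.7235
    # 50-500 GB is one unit; larger instances pay multiples of that charge.
    return 7.447 * math.ceil(size_gb / 500)

def monthly_cost(size_gb: float, storage_rate: float, machines: int = 1) -> float:
    """Total monthly cost for identical machines: instance charge + storage."""
    per_machine = instance_charge(size_gb) + size_gb * storage_rate
    return round(per_machine * machines, 2)

print(monthly_cost(30, LRS_RATE))               # 1 file server, 30 GB, LRS
print(monthly_cost(80, LRS_RATE, machines=4))   # 4 machines, 80 GB each, LRS
print(monthly_cost(1400, GRS_RATE))             # 1 machine, 1400 GB, GRS
```

This ignores the 5 GB free allowance and transaction charges, which the announcement does not price per scenario.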


As I have told some people in Redmond, the added complexity to Azure Online Backup pricing is indicative of everything that is wrong with Azure pricing. The only blocker I’m seeing in Azure sales is that sales people cannot get their heads around the wildly varied and complicated pricing. I really don’t care what AWS does – I don’t work with AWS and what they do to limit their own sales is their issue. Microsoft needs to fix the pricing structure of Azure to grow it the way they want, and need, to.


You might have noticed a trend that there are a lot of features in the next version of Hyper-V that increase uptime. Some of this is by avoiding unnecessary failovers. Some of this is by reducing the need to shut down a VM/service in order to do engineering. This is one of the latter.

Most of the time, we only need single-homed (single vNIC) VMs. But there are times where we want to assign a VM to multiple networks. If we want to do this right now on WS2012 R2, we have to shut down the VM, add the NIC, and start it up again.

With Hyper-V vNext we will be able to hot-add and hot-remove vNICs, and this much-requested feature will save administrators some time and grief from service owners.


One of the bedrocks of virtualization or a cloud is the storage that the virtual machines (and services) are placed on. Guaranteeing storage performance is tricky – some niche storage manufacturers such as Tintrí (Irish for lightning) charge a premium for their products because they handle this via black-box intelligent management.

In the Microsoft cloud, we have started to move towards software-defined storage based on the Scale-Out File Server (SOFS) with SMB 3.0 connectivity. This is based on commodity hardware, and with WS2012 R2, we currently have a very basic form of storage performance management. We can set:

  • Maximum IOPS per VHD/X: to cap storage performance
  • Minimum IOPS per VHD/X: not enforced, purely informational

This all changes with vNext where we get distributed storage QoS for SOFS deployments. No, you do not get this new feature with legacy storage system deployments.

A policy manager runs on the SOFS. Here you can set storage rules for:

  • Tenants
  • Virtual machines
  • Virtual hard disks

Using a new protocol, MS-SQOS, the SOFS passes storage rule information back to the relevant hosts. This is where rate limiters will enforce the rules according to the policies, set once, on the SOFS. No matter which host you move the VM to, the same rules apply.


The result is that you can:

  • Guarantee performance: Important in a service-centric world
  • Limit damage: Cap those bad boys that want everything to themselves
  • Create a price banding system: Similar to Azure, you can set price bands where there are different storage performance capabilities
  • Offer fairly balanced performance: Every machine gets a fair share of storage bandwidth

At this point, all management is via PowerShell, but we’ll have to wait and see what System Center brings forth for the larger installs.
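The host-side rate-limiter idea is easy to illustrate with a token bucket. To be clear, this is purely a conceptual sketch of the technique – Microsoft has not published how the MS-SQOS rate limiters inside Hyper-V are actually implemented:

```python
import time

class IopsLimiter:
    """Conceptual token-bucket limiter: one token = permission for one I/O.

    Illustrative only; not the real Hyper-V/MS-SQOS implementation.
    """

    def __init__(self, max_iops: int):
        self.max_iops = max_iops
        self.tokens = float(max_iops)   # start with a full bucket
        self.last = time.monotonic()

    def allow_io(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at max_iops.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over the cap: the I/O would be queued or delayed

# A VHD/X capped at 500 IOPS: a burst of 1000 I/O requests in well under a
# second only gets roughly 500 of them through.
limiter = IopsLimiter(max_iops=500)
allowed = sum(1 for _ in range(1000) if limiter.allow_io())
print(allowed)
```

The appeal of keeping the policy on the SOFS and the limiter on the host is that the policy travels with the VM: whichever host runs the VM applies the same numbers.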


I just had a conversation with a customer about a Hyper-V/System Center deployment that they are planning. They have multiple branch offices and they want to deploy 2 VMs to each, and manage the hosts (hardware included) and guest OS/services with System Center. The problem was: the cost of System Center – not a new story for SMEs, even larger ones, thanks to the death blow served by MSFT to System Center sales in the SME space in 2012.

This customer was looking at purchasing 1 Standard SML per site. The lack of density was increasing costs – using a centralized deployment with Datacenter SMLs would have been more cost effective. But they needed VMs for each site.

But I knew a trick:

Customers can use the license mobility benefits under Software Assurance to assign their System Center 2012 license to a Windows Server instance running on Azure. A System Center Standard license can be used to manage 2 VM instances; a System Center Datacenter license can be used to manage 8 VM instances.

What if the customer did this:

  • Deployed the VMs in Azure instead of on-premises Hyper-V
  • Shared the services via RemoteApp
  • Managed the guest OS and services using Datacenter SMLs, thus getting the cost/density benefits of the DC license.

As it turns out, the solution wasn’t going to work because the regional sites suffer from Irish rural broadband – that is, it sucks too much to download more than a few MB, let alone upload.

But this is something to keep in mind for getting density benefits from a DC SML!
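The density math is simple to sketch. A hedged illustration, assuming the management ratios quoted above (Standard manages 2 VM instances, Datacenter manages 8); the function names and scenario figures are mine:

```python
import math

def standard_smls(vms_per_site: int, sites: int) -> int:
    # Per-site Standard licensing: each site needs its own SML(s),
    # each covering up to 2 managed VM instances.
    return sites * math.ceil(vms_per_site / 2)

def datacenter_smls(total_vms: int) -> int:
    # Centralised VMs (e.g. pooled in Azure) can share Datacenter SMLs,
    # each covering up to 8 managed VM instances.
    return math.ceil(total_vms / 8)

# Hypothetical branch-office scenario: 10 sites, 2 VMs each.
sites, vms_per_site = 10, 2
print(standard_smls(vms_per_site, sites))       # 10 Standard SMLs, one per site
print(datacenter_smls(sites * vms_per_site))    # 3 Datacenter SMLs for the pool
```

The point is the density: distributed VMs waste license capacity, while pooling them lets you pack 8 instances into each Datacenter SML.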


Most of us have dealt with some piece of infrastructure that is flapping, be it a switch port that’s causing issues, or a driver that’s causing a server to bug-check. These are disruptive issues. Cluster Compute Resiliency is a feature that prevents unwanted failovers when a host is having a transient issue. But what if that transient issue is repetitive? For example, what if a host keeps going into network isolation and its VMs are therefore going offline too often?

If a clustered host goes into isolation too many times within a set time frame, then the cluster will place the host into quarantine. The cluster will move virtual machines off a quarantined host, ideally using your pre-defined migration method (defaulting to Live Migration, but allowing you to set Quick Migration for a priority class of VMs or for selected VMs).

The cluster will not place further VMs onto the quarantined host, and this gives administrators time to fix the root cause of the transient issues.

This feature, along with Cluster Compute Resiliency, is what I would call a “maturity feature”. They’re the sorts of features that make life easier … and might lead to fewer calls at 3am when a host misbehaves, because the cluster is doing the remediation for you.
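The quarantine decision itself is essentially a sliding-window count of isolation events. Here is a conceptual sketch; the threshold and window values are hypothetical, since the post does not state what the real cluster defaults will be:

```python
import collections

class HostIsolationTracker:
    """Conceptual sketch of the quarantine decision described above.

    Hypothetical defaults: quarantine a host on its 3rd isolation
    event within a 1-hour window.
    """

    def __init__(self, threshold: int = 3, window_s: float = 3600.0):
        self.threshold = threshold
        self.window_s = window_s
        self.events = collections.deque()   # timestamps of isolation events

    def record_isolation(self, now: float) -> bool:
        """Record an isolation event; return True if the host should be quarantined."""
        self.events.append(now)
        # Drop events that have aged out of the sliding window.
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) >= self.threshold

tracker = HostIsolationTracker()
print(tracker.record_isolation(0))      # False: first blip, tolerated
print(tracker.record_isolation(600))    # False: second blip, still tolerated
print(tracker.record_isolation(1200))   # True: third blip within the hour -> quarantine
```

Once `record_isolation` returns True, the cluster (conceptually) drains the host of VMs and stops placing new ones on it, as described above.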


Quite a bit of stuff to read since my last aggregation post on the 3rd.

Windows Server


Windows Client


Office 365




This post is dedicated to the person that refuses to upgrade from Windows Server 2003. I’m not targeting service providers and those who want to upgrade but face continued resistance. But if you are part of the problem, then please feel free to be offended. Please read it before you hurt your tired fingers writing a response.

I’m not going to pussy-foot around the issue. I couldn’t give a flying f**k if your delicate little feelings are dented. You are what’s wrong in our industry and I’ll welcome your departure.

Yes. You are professionally negligent. You’ve decided to put your customers, stockholders, and bosses at legal risk because you’re lazy.

You know that support is ending on July 14th 2015 for Windows Server 2003, Windows Server 2003 R2, SBS 2003 and SBS 2003 R2, but still you plan on not upgrading. Why?

You say that it still works? Sure, and so did this:


 Photo of Windows Server 2003 administrator telling the world that they won’t upgrade

You think you’ll still get security fixes? Microsoft is STOPPING support, just like they did for XP. Were you right then? No, because you are an idiot. So you work for some government agency and you’ll reach a deal with Microsoft? On behalf of the tax payers of your state, let me thank you for being a total bollocks – we’ll be paying at least $1 million for year one of support, and that doubles each year. We’ll be landed with more debt because of your incompetent, work-shy habits.

You think third parties like some yellow-pack anti-malware or some dodgy pay-per-fix third party will secure you? Let me give you my professional assessment of that premise: HAHAHAHAAHAHAHAHAH!

Maybe other vendors will continue supporting their software on W2003? That’s about as useful as a deity offering extended support for the extracted failed kidney of a donor patient. If Microsoft isn’t supporting W2003, etc, then how exactly is Honest Bob’s Backup going to support it for you? Who are they going to call when there’s a problem that they need assistance on? Are you really that naive?

Even regulators recognise that “end of support” is a terminal condition. VISA will be terminating business with anyone still using W2003 as part of the payment operation. You won’t be able to maintain PCI compliance. Insurance companies will see W2003 as a business risk that is outside the scope of the policy. And hackers will have an easy route to attack your network.

“Oh poor me – I have an LOB app that can’t be replaced and only runs on W2003”. Well; why don’t you upgrade everything else and isolate the crap out of that service? Allegedly, there is an organ rattling inside that skull of yours so you might want to shake the dust off and engage it!

I have zero sympathy for your excuses. I know some of you will protest my comments. Your excuses, not reasons, only highlight your negligence. You’ve had a decade and 4 opportunities to upgrade your server OS. You can switch to OPEX cloud systems (big 3 or local) to minimise costs. You could have up-skilled and deployed services that are included in the cost of licensing WS2012 R2 instead of spending your stockholders’ or taxpayers’ funds on 3rd party solutions. Yeah, I don’t have many good things to say to you, the objector, because, to be quite honest, there is little good to be said about you as an IT professional.

This post was written by Aidan Finn and has no association with my employers, or any other firm I have associated with. If you’re upset, then go cry in a dark room where you won’t annoy anyone else.


I will be doing a webinar with Microsoft’s Rick Claus for Altaro on 26th of March, talking about some of the features in the next version of Windows Server Hyper-V. It should be a good one!


Register here.


Here’s the latest in the Microsoft world!


System Center


Office 365



One of the biggest blockers, in my personal opinion, to Azure IaaS adoption in the SME space is understanding how to price solutions. I don’t get blocked on technology, features, or trust; instead, I get questions such as “how much will this cost me?”. Microsoft does not help themselves with a very complex pricing model – please don’t try to bring up AWS – Microsoft doesn’t sell AWS so I don’t get why they are relevant!

So I’ve started producing some videos for my employers. This one focuses on pricing solutions based on Azure virtual machines.


Microsoft sent out emails last night to inform Azure customers that the pricing of Azure Online Backup is changing.

Currently, you get 5 GB free and then pay €0.149/month (rounded to €0.15) in North Europe for each additional 1 GB.

On April 1st, the pricing structure changes:


So, 5 GB free. Then for each machine you back up, you pay at least €7.447, with an additional charge of €7.447 for each additional 500 GB protected on that machine. And that DOES NOT COVER the cost of storage consumed in Azure. You have to pay for that too (GB/month and transactions).

So how much will that be? I have no frickin’ idea. There is no indication what kind of storage or what resiliency is required.

It might be Block Blobs running at €0.0179/GB (LRS) or €0.0358/GB (GRS). But who knows because Microsoft didn’t bother documenting it!

That leads me to an issue. The biggest blocker I’ve seen in the adoption of Azure in the SME space is not cost, technical complexity, or trust. The biggest issue is that few people understand how to price a solution in Azure. If you’re deploying a VM you need the VM/hour cost, storage space, storage transactions, egress data, and probably a gateway. Is there a single place that says all that on the Azure portal? No. What Microsoft has is isolated islands of incomplete information on the Azure website, and a blizzard of pricing in their Excel-based pricing “tools”.

If Microsoft is serious about Azure adoption, then they need to get real about helping customers understand how to price solutions. Azure Online Backup was the tool I was starting to get traction with in the SME/partner space. I can see this new announcement introducing uncertainty. This needs to be changed … fast … and not put through the Sinofskian feedback model.

Grade: F. Must try harder.


Earlier today I produced a video for my employers to discuss the role of Microsoft Azure infrastructure-as-a-service (IaaS) in the SME/SMB market. In the video I talk a little about what Azure is, the economic sense of a service like Azure for these businesses, how the Open licensing scheme works, and then I talk about 3 of the core services and some of the scenarios that apply.


In today’s cloudy link aggregation I have news on Windows Server (2003 end of life to Azure), Private Cloud bugs, Azure, and Office 365.

Windows Server

System Center


Office 365


I HATE auto-playing video adverts. They’re loud, they interrupt what I do want to watch & listen to, and they are usually inappropriate. And worse: they are appearing EVERYWHERE.

I use the Chrome browser for my general stuff (IE for the Microsoft stuff). Thankfully, it’s not too hard to selectively disable video on those sites that cause offense, such as The Verge, CIO.com, and TheJournal.ie.

In Chrome, open the content settings by browsing to chrome://chrome/settings/content.

Scroll down the Content Settings dialog until you find Plug-Ins. I like to let plug-ins run automatically and manage the painful exceptions. Click Manage Exceptions.


Enter in the URL of the site that you are browsing that is running the offending advert (plug-in). You can use wildcards here, such as [*.]cio.com. You can allow, block (totally) or ask (block but allow you to start) any plug-ins on that site.


Back in the Content Settings dialog you also have the option to manage particular plug-ins. Maybe something is installed that you’d like to block. You can do that there by disabling the plug-in. You can also allow some plug-ins to always run.

But I’m of the preference of punishing those sites that put this shit on my screen and speakers, like The Verge, CIO.com and TheJournal.ie.

And for those of you who want to block video ads in IE or Firefox


Here is the latest news in the world of Microsoft infrastructure:


Windows Server

System Center


Office 365



Two years ago, if you’d asked me which direction I would expand into from Hyper-V, it wouldn’t have been into Azure. But, things change. Back in 2007, I believe that I blogged that I wouldn’t work with Hyper-V and would be sticking with VMware. Then a year later I’m working with Hyper-V, blogging about it, and eventually evangelizing about it too!

But what got me to change my mind about switching to Hyper-V? It was System Center. I was a fan of System Center and I saw the potential of Microsoft’s big-picture thinking for the data centre. How times have changed. In recent years, I have moved more and more away from using System Center. While I still love the potential power of the suite, it has become less and less relevant for me and my customers. Microsoft saw to that back in 2012 when they changed the licensing of System Center. Other things, such as the increased complexity of installation and maintenance (hiding necessary upgrade steps while pushing automated upgrades via Windows Update), make owning System Center more complex than it should be. And meanwhile, the Windows Server group has made the automation of System Center less necessary by giving us PowerShell. The market for System Center has shrunk to a relatively small number of very large sales. And that doesn’t include my market here in Ireland.

Unlike many of my fellow MVPs, who are gravitating to the small amount of, but highly profitable, System Center work that is out there, I’m moving in my own direction. The writing is on the wall. The cloud is here, real, and relevant to businesses of ALL sizes. I’ve been adding Microsoft Azure IaaS to my arsenal of Hyper-V, clustering, and Windows Server storage/networking skills over the past year or so. Once again, it appears that I’m swimming in a small pool but I’ve been there before; I swam in the Hyper-V puddle that became an ocean.

There’s so much to Azure and it’s growing and evolving at an incredible pace. It’s not an alien technology: Azure is based on Hyper-V (WS2012 to be exact). But Azure complements on-premises deployments too. Need off-site backup? Want an affordable DR site? Need burst compute/storage capacity? Azure does all that … and much more … with or without System Center, for SMEs, large enterprises, and hosting companies.

I’ve been running a lot of Microsoft partner training locally since last August. I’ve been doing quite a bit of Azure writing for Petri.com. Expect to see some of that appear here too. Oh, before you ask: yes, I will still centre on Hyper-V and I’ll continue to talk about the new stuff when the time is right :)


I am not writing a WS2012 R2 Hyper-V book, but some of my Hyper-V MVP colleagues have been busy writing. I haven’t read these books, but the authors are more than qualified and greatly respected in the Hyper-V MVP community.

Hyper-V Security

By Eric Siron & Andy Syrewicze

Available on Amazon.com and Amazon.co.uk


Keeping systems safe and secure is a new challenge for Hyper-V Administrators. As critical data and systems are transitioned from traditional hardware installations into hypervisor guests, it becomes essential to know how to defend your virtual operating systems from intruders and hackers.

Hyper-V Security is a rapid guide on how to defend your virtual environment from attack.

This book takes you step by step through your architecture, showing you practical security solutions to apply in every area. After the basics, you’ll learn methods to secure your hosts, delegate security through the web portal, and reduce malware threats.

Hyper-V Best Practices

By Benedict Berger

Available on Amazon.com and Amazon.co.uk


Hyper-V Server and Windows Server 2012 R2 with Hyper-V provide best in class virtualization capabilities. Hyper-V is a Windows-based, very cost-effective virtualization solution with easy-to-use and well-known administrative consoles.

With an example-oriented approach, this book covers all the different guides and suggestions to configure Hyper-V and provides readers with real-world proven solutions. After applying the concepts shown in this book, your Hyper-V setup will run on a stable and validated platform.

The book begins with setting up single and multiple High Availability systems. It then takes you through all the typical infrastructure components such as storage and network, and its necessary processes such as backup and disaster recovery for optimal configuration. The book does not only show you what to do and how to plan the different scenarios, but it also provides in-depth configuration options. These scalable and automated configurations are then optimized via performance tuning and central management.
