2013
06.04

I’m going to do my best (no guarantees – I only have one body and one pair of ears/eyes, and NDA stuff is hard to track!) to update this page with a listing of each new WS2012 R2 Hyper-V and Hyper-V Server 2012 R2 (and related) feature as it is revealed by Microsoft, starting with TechEd North America 2013.  Note that the features of WS2012 can be found here.

This list was last updated on 05/September/2013.

 

3rd party Software Defined Networking is supported by the extensibility of the virtual switch.
Automatic Guest Activation Customers running WS2012 R2 Datacenter can automatically activate their WS2012 R2 guests without using KMS. Works with OEM and volume licenses. Great for multi-tenant clouds.
Azure Compatibility Azure is running the same Hyper-V as on-premises deployments, giving you VM mobility from private cloud, to hosted cloud, to Microsoft Azure.
Built-In NVGRE Gateway A multi-tenant aware NVGRE gateway role is available in WS2012 R2. Offers site-site VPN, NAT for Internet access, and VM Network to physical network gateway.
Clustering: Configurable GUM Mode Global Update Manager (GUM) is responsible for synchronizing cluster resource updates.  With Hyper-V enabled, all nodes must receive and process an update before it is committed to avoid inconsistencies.
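If you want to check or change the GUM mode from PowerShell, it is a cluster common property. The property name and the mode values below are as I understand them for WS2012 R2, so treat this as a sketch:

```powershell
# View the current Global Update Manager mode (WS2012 R2 cluster property)
(Get-Cluster).DatabaseReadWriteMode

# 0 = All (write) and Local (read)      - the mode used when Hyper-V is enabled
# 1 = Majority (write) and Local (read)
# 2 = Majority (write) and Majority (read)
(Get-Cluster).DatabaseReadWriteMode = 0
```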
Clustering: Larger CSV Cache Percentage WS2012 allows a maximum of 20% RAM to be allocated to CSV Cache.  This is 80% in WS2012 R2.
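Configuring the cache is a one-liner. In WS2012 R2 the property is (as far as I can tell) simply BlockCacheSize, set in MB, and there is no longer a separate enable flag per volume:

```powershell
# Allocate 2 GB of host RAM to the CSV Cache (value is in MB)
(Get-Cluster).BlockCacheSize = 2048
```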
Clustering: CSV Load Balancing CSV ownership (coordinators) will be automatically load balanced across nodes in the cluster.
Clustering: CSV & ReFS ReFS is supported on CSV.  Probably still not preferable over NTFS for most deployments, but it is CHKDSK free!
Clustering: Dynamic Witness The votes of cluster nodes are automatically changed as required by the cluster configuration.  Enabled by default.  This can be used to break 50/50 votes when a witness fails.
Clustering: Hyper-V Cluster Heartbeat Clusters running Hyper-V have a longer heartbeat to avoid needless VM failovers on latent/contended networks. SameSubnetThreshold is 10 (normally 5) and CrossSubnetThreshold is 20 (normally 5).
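You can verify the relaxed heartbeat settings on your own cluster with the cluster common properties:

```powershell
# Inspect (and, if needed, tune) the heartbeat delay/threshold values
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, `
    CrossSubnetDelay, CrossSubnetThreshold
```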
Clustering: Improved logging Much more information is recorded during host add/remove operations.
Clustering: Pause action Pausing a node will no longer use Quick Migration for “low” priority VMs by default; Live Migration is used, as most people expect. You can raise the threshold to force Quick Migration if you want to.
Clustering: Proactive Server Service Health Detection The health of a destination host will be verified before moving a VM to another host.
Clustering: Protected Networks Virtual NICs are marked as being on protected networks by default. If a virtual NIC’s virtual switch becomes disconnected, then the cluster will Live Migrate that VM to another host with a healthy, identical virtual switch.
Clustering: Virtual Machine Drain on Host Shutdown Shutting down a host will cause all virtual machines to Live Migrate to other hosts in the cluster.
Compressed Live Migration Using only idle CPU resources on the host, Hyper-V can compress Live Migration to make it quicker. Could provide up to 2x migrations on 1 GbE networks.
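Compression is actually the default performance option in WS2012 R2, but you can set it explicitly on the host:

```powershell
# Choose the Live Migration performance option: TCPIP, Compression, or SMB
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption
```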
Cross-Version Live Migration You can perform a Live Migration from WS2012 to WS2012 R2. This is one-way, and enables zero-downtime upgrades from a WS2012 host/cluster to a WS2012 R2 host/cluster.
Dynamic Mode NIC Teaming A new load distribution mode, in addition to Hyper-V Port and Address Hashing. Uses “flowlets” to give fine-grained distribution of inbound and outbound traffic.
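Creating a team with the new mode looks like this – the team and NIC names are just placeholders for your own:

```powershell
# Switch-independent team using the new Dynamic load balancing algorithm
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
```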
Enhanced Session Mode The old Connect limited you to KVM access to a VM. Now Connect can use Remote Desktop routed via the Hyper-V stack, even without a network connection to the VM. Copy/paste and USB redirection are supported. Disabled on servers and enabled in Client Hyper-V by default.
Generation 2 VM A G2 virtual machine is a VM with no legacy “hardware”. It uses UEFI boot, has no emulated devices, boots from SCSI, and can PXE boot from a synthetic NIC. You cannot convert a G1 VM to G2 (because of UEFI, I am guessing).
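Because the generation is fixed at creation time, you choose it up front with New-VM. The names, paths, and sizes here are purely illustrative:

```powershell
# Create a Generation 2 VM - the -Generation value cannot be changed later
New-VM -Name "GEN2VM01" -Generation 2 -MemoryStartupBytes 1GB `
    -NewVHDPath "D:\VMs\GEN2VM01.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "External1"
```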
HNV Diagnostics A new PoSH cmdlet enables an operator to diagnose VM connectivity in a VM Network without network access to that VM.
HNV: Dynamic Learning of CAs Hyper-V Network Virtualization can learn the IPs of VM Network VMs. Enables guest DHCP and guest clustering in the VM Network.
HNV: NIC Teaming Inbound and outbound traffic can traverse more than one team member in a NIC team for link aggregation.
HNV: NVGRE Task Offloads A new type of physical NIC will offload NVGRE de- and encapsulation from the host processor.
HNV: Virtual Switch extensions The HNV filter has been included in the Hyper-V Virtual Switch. This enables 3rd party extensions to work with HNV CAs and PAs.
Hyper-V Replica Extended Replication You can configure a VM in Site A to replicate to Site B, and then replicate it from Site B to Site C.
Hyper-V Replica Finer Grained Interval controls You can change the replication interval from the default 5 minutes to every 30 seconds or every 15 minutes.
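The interval is a parameter when you enable replication. Something like the following, with placeholder VM/host names, and assuming Kerberos authentication over port 80:

```powershell
# ReplicationFrequencySec accepts 30, 300 (the old default), or 900
Enable-VMReplication -VMName "VM01" -ReplicaServerName "ReplicaHost1" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ReplicationFrequencySec 30
```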
IPAM IP Address Management was extended in WS2012 R2 to do management of physical and virtual networking with built-in integration into SCVMM 2012 R2.
Linux Dynamic Memory All features of Dynamic Memory are supported on WS2012 R2 hosts with up-to-date Linux Integration Services.
Linux Kdump/kexec Allows you to create kernel dumps of Linux VMs.
Linux Live VM backup You can back up a running Linux VM with no pause, using a file system “freeze” to give file system consistency (Linux does not have VSS).
Linux Specification of Memory Mapped I/O (MMIO) gap Provides fine grained control over available RAM for virtual appliance manufacturers.
Linux Non-Maskable Interrupt (NMI) Allows delivery of manually triggered interrupts to Linux virtual machines running on Hyper-V.
Linux Video Driver A Synthetic Frame Buffer driver for Linux guest OSs will provide improved performance and mouse support.
Live Resizing of VHDX You can expand or shrink (if there’s un-partitioned space) a VHDX attached to a running VM. It must be SCSI attached.  This applies to Windows and Linux.
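The resize itself is the same Resize-VHD cmdlet as before; what’s new is that it now works against a SCSI-attached VHDX on a running VM. The path is a placeholder:

```powershell
# Expand a SCSI-attached VHDX while the VM is running
Resize-VHD -Path "C:\ClusterStorage\Volume1\VM01\Data.vhdx" -SizeBytes 200GB

# Shrinking works too, but only into unpartitioned space inside the disk
Resize-VHD -Path "C:\ClusterStorage\Volume1\VM01\Data.vhdx" -SizeBytes 120GB
```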
Live Virtual Machine Cloning You can clone a running virtual machine. Useful for testing and diagnostics.
Remote Live Monitoring Remote monitoring of VM network traffic made easier with Message Analyzer.
Service Provider Foundation (SPF) The SPF is used to provide an API in-front of SCVMM. It is required for the Windows Azure Pack. A hosting company can share their infrastructure with clients, who can interact with SPF via on-premise System Center – App Controller.
Shared VHDX Up to 8 VMs can share a VHDX (on shared storage like CSV/SMB) to create guest clusters. Appears like a shared SAS drive.
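Attaching the shared disk to each guest cluster node is done with a new switch on Add-VMHardDiskDrive. VM names and the path are placeholders:

```powershell
# Attach the same VHDX to each guest-cluster node, with
# persistent reservations enabled so it behaves like shared SAS
Add-VMHardDiskDrive -VMName "GuestNode1" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\SharedDisk.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "GuestNode2" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\SharedDisk.vhdx" -SupportPersistentReservations
```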
SMB Live Migration This feature uses SMB to perform Live Migration over 10 GbE or faster networks. It uses SMB Multichannel if there are multiple Live Migration networks. SMB Direct is used if RDMA is available.  SMB Multichannel gives the fastest VM movement possible, and SMB Direct offloads the work from the CPU. Now moving that 1 TB RAM VM doesn’t seem so scary!
SMB 3.0: Automatic rebalancing of Scale-Out File Server clients SMB clients of the scalable and continuously available active/active SOFS are rebalanced across nodes after the initial connection. Tracking is done per-share for better alignment of server/CSV ownership.
SMB 3.0: Bandwidth controls Previously, QoS saw all SMB 3.0 traffic as a single stream. New filters for the Default, LiveMigration, and VirtualMachine categories allow you to manage bandwidth over converged networks.
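The per-category limits come from a new feature that you install first; the 750 MB/sec figure below is just an example cap:

```powershell
# The SMB Bandwidth Limit feature must be installed before the cmdlet works
Add-WindowsFeature FS-SMBBW

# Cap Live Migration SMB traffic; Default and VirtualMachine
# are the other available categories
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB
```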
SMB 3.0: Improved RDMA performance Improves performance for small I/O workloads such as OLTP running in a VM. Very noticeable on 40/56 Gbps networks.
SMB 3.0: Multiple SMB instances on SOFS The Scale-Out File Server has an additional SMB instance for CSV management, improving scalability and overall reliability. Default instance handles SMB clients.
Storage Spaces: Tiered Storage You can mix 1 tier of SSD with 1 tier of HDD to get a blend of expensive extreme speed and economic capacity.  You define how much (if any) SSD and how much HDD a virtual disk will take from the pool.  Data is promoted/demoted in the tiers at 1am by default.  You can pin entire files to a tier.
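Roughly, building a tiered virtual disk and pinning a file looks like the following. Pool, tier, and file names plus the sizes are all placeholders, and I’d double-check the pinning parameter name against your build:

```powershell
# Define one SSD tier and one HDD tier in an existing pool
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Carve a mirrored virtual disk that takes 100 GB of SSD and 900 GB of HDD
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk1" `
    -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB -ResiliencySettingName Mirror

# Pin an entire file to the SSD tier (takes effect at the next optimization run)
Set-FileStorageTier -FilePath "E:\VMs\Hot.vhdx" -DesiredStorageTierFriendlyName "SSDTier"
```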
Storage Spaces: Parallelized Restore Instead of using slow hot-spare disks in a pool, you can use the cumulative write IOPS of the pool to restore virtual disk fault tolerance over the remaining healthy disks. The replacement disk is seen as new blank capacity.
Storage Spaces: Write-Back Cache Hyper-V is write-through, avoiding controller caches on writes.  With tiered storage, you get Write-Back Cache.  The SSD tier can absorb spikes in write activity.  Supported with CSV.
Storage QoS You can set an IOPS limit on individual virtual hard disks to avoid one disk consuming all resources, or to price-band your tenants. Minimum alerts will notify you if virtual hard disks cannot get enough storage bandwidth.
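The limits are set per virtual hard disk, in normalized IOPS (8 KB increments). VM name and controller placement below are placeholders:

```powershell
# Guarantee-ish 100 IOPS (minimum alerting) and cap at 1,000 IOPS
Set-VMHardDiskDrive -VMName "Tenant1VM" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 1000
```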
System Center alignment System Center and Windows Server were developed together and will be released very closely together.
Network Diagnostics New PowerShell tools for testing the networking of VMs, including Get-VMNetworkAdapter, Test-NetConnection, Test-VMNetworkAdapter, and Ping -P.
VDI & Deduplication Deduplication can be enabled in WS2012 R2 in VDI scenarios (only) where the VMs are stored on dedicated (only) WS2012 R2 storage servers.
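On the storage server, the new usage type is what enables dedup of open (running) VDI virtual disks. The drive letter is a placeholder:

```powershell
# The HyperV usage type (new in WS2012 R2) tunes dedup for live VDI VHDX files
Enable-DedupVolume -Volume "E:" -UsageType HyperV
Get-DedupStatus -Volume "E:"
```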
Virtual Machine Exports You can export a VM with its snapshots/checkpoints.
Virtual Switch Extended Port ACLs ACLs now include the socket port number.  You can now configure stateful, unidirectional rules with a timeout parameter. Extended port ACLs are compatible with Hyper-V Network Virtualization.
vRSS Virtual Receive Side Scaling leverages DVMQ on the host NIC to enable a VM to use more than 1 vCPU to process traffic. Improves network scalability of a VM.
Windows Azure Pack This was previously called Windows Azure Services for Windows Server, and is sometimes called “Katal”. This is based on the source code of the Azure IaaS portal, and allows companies (such as hosting companies) to provide a self-service portal (with additional cloud traits) for their cloud.

 


 

8 comments so far

  1. Most of the new Linux Hyper-V features have been sitting on Patchwork for a number of months.

    • But they require WS2012 R2 Hyper-V for support.

  2. Aiden,

    Great summary of all the new features! Very useful and helpful. Keep up the good work.

    Greetz,

    Peter

  3. Thank you!

  4. Thanks for the write-up Aidan, Great summary!

  5. Awesome!

  6. Aidan, I am getting into Hyper-V clustering more and more and can’t find any books or online tools that help me calculate capacity/scaling questions like “How many VMs per node?” or “How many nodes are typical for 200, 500, 1000, 5000 users?” “At what thresholds will I outgrow a SAS SAN and need to migrate to an iSCSI/FC alternative?” “What resource monitoring or tests can I run to be sure an app is going to be happy when converted to a virtual environment?” “How can I make sure one wild SharePoint query on a VM isn’t going to overwhelm its neighbors?”…etc.

    Does your book(s) cover these topics, or are they easy enough to answer here, or are there some online resources I can dive into?

    I’m not looking to hear something like, yes you can run 15 VMs on a 2 node sas cluster. Ideally the answer is more like, “Here are the numbers to watch, here are the areas to be most vigilant, here’s how we know we need more nodes, here’s why your storage isn’t performing as you had hoped…etc”

    • Microsoft Assessment and Planning Toolkit.
