A Deep Dive Into Hyper-V Networking

See-Mong Tan and Pankaj Garg are the speakers.

Apparently Windows Server 8 is the most cloud optimised operating system yet. I did not know that.

Customers want availability despite faults and predictability of performance when dealing with networking. Admins want scalability and density, versus customers wanting performance. Customers want specialisation with lots of choice, for firewalls, monitoring, and physical fabric integration.

Windows Server 8 gives us:
– Reliability
– Security
– Predictability
– Scalability
– Extensibility
– … all with manageability

Reliability:
Windows Server 8 gives us NIC teaming to protect against NIC or network path failure. Personal experience is that the latter is much more common, e.g. switch failure.

The LBFO provider sits on top of the bound physical NICs (using an IM MUX and virtual miniport). The Hyper-V Extensible Switch sits on top of that. You use the LBFO Admin GUI (via the LBFO Configuration DLL) to configure the team.

– Multiple modes: switch dependent and switch independent
– Hashing modes: port and 4-tuple
– Active/Active and Active/Passive (a PowerShell sketch follows this list)
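For reference, a minimal PowerShell sketch of creating such a team with the new in-box LBFO cmdlets; the team and adapter names here are made up for illustration:

# Switch-independent team with 4-tuple (transport ports) hashing; NIC names are hypothetical
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts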

Windows Server 8 provides security features to host multi-tenant workloads in a hybrid cloud. You run multiple virtual networks on a physical network. Each virtual network has the illusion that it is the only network on the physical fabric … just like a VM thinks it owns the entire piece of physical hardware – that’s the analogy that MSFT is using. You decouple the virtual or tenant networks from the physical network. This is where the IP address virtualisation appears to live too. Other features:

– Port ACLs: allow you to do ACLs on an IP range or MAC address … like firewall rules. You can also do metering with them (a PowerShell sketch follows this list).
– PVLAN: Bind VMs to one uplink
– DHCP Guard: Bans VMs from acting as DHCP servers – very useful in the cloud, where users have local admin rights … users are stupid and destructive.
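As a rough sketch of the first and last of those features, assuming a VM named VM1 and an arbitrary tenant IP range:

# Deny inbound traffic from a hypothetical remote IP range on VM1's vNIC (port ACL)
Add-VMNetworkAdapterAcl -VMName VM1 -RemoteIPAddress 192.168.1.0/24 -Direction Inbound -Action Deny
# Stop VM1 from answering DHCP requests (DHCP Guard)
Set-VMNetworkAdapter -VMName VM1 -DhcpGuard On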

QoS provides predictable performance in a multi-tenant environment. You can set maximum and minimum bandwidth, and express the limits as absolute values or as weights.

Demo of QoS maximum bandwidth:
He runs a PSH script to implement a bandwidth rate-limiting cap on some badly behaving VMs to limit their impact on the physical network: Set-VMNetworkAdapter -VMName VM1 -MaximumBandwidth 1250000.
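The minimum-bandwidth side can be expressed as a weight rather than an absolute cap; a hedged sketch, assuming VM1 and a switch named “External” created with weight-mode QoS:

# Give VM1 a relative share of bandwidth instead of an absolute limit
Set-VMNetworkAdapter -VMName VM1 -MinimumBandwidthWeight 50
# Default weight for flows with no explicit policy (switch name is hypothetical)
Set-VMSwitch -Name "External" -DefaultFlowMinimumBandwidthWeight 10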

Scalability:
Performance features mean more efficient cloud operations, and you also get reduced power usage.

SR-IOV
Single Root I/O Virtualisation is a PCI-SIG hardware technology. An SR-IOV NIC exposes Virtual Functions that can be assigned to a VM. Without it, the virtual switch does routing, VLAN filtering, and the data copy of incoming data to the VM, which then has to process the packet. Lots of CPU. SR-IOV bypasses the Hyper-V switch and sends the packet directly to the VM’s Virtual Function. This requires an SR-IOV NIC. You can Live Migrate a VM from a host with SR-IOV to a host without SR-IOV. Apparently, VMware cannot do this. SR-IOV is a property of the virtual switch, and a property of the VM vNIC (tick boxes). The VM actually uses the driver of the SR-IOV NIC. We are shown a demo of a Live Migration to a non-SR-IOV, non-clustered host, with no missed pings.
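A sketch of the SR-IOV configuration in PowerShell, assuming a host adapter named “NIC1” and a VM named VM1 (SR-IOV has to be requested when the switch is created):

# Create an external switch with SR-IOV enabled (adapter and switch names are hypothetical)
New-VMSwitch -Name "IovSwitch" -NetAdapterName "NIC1" -EnableIov $true
# Request a Virtual Function for VM1's vNIC; 0 disables SR-IOV, 1-100 expresses preference
Set-VMNetworkAdapter -VMName VM1 -IovWeight 100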

D-VMQ is Dynamic Virtual Machine Queue
If a CPU is busy processing VM network traffic then you can use this to dynamically spread that processing across more than one CPU. It will automatically scale the CPU usage up and down based on demand. Static VMQ is limiting at peak load. Without VMQ, processing is limited to a single CPU.
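There is a matching per-vNIC setting; a minimal sketch, assuming a VM named VM1:

# A weight of 0 disables VMQ for VM1's vNIC; 1-100 enables it and expresses preference
Set-VMNetworkAdapter -VMName VM1 -VmqWeight 100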

Receive Side Coalescing (RSC) allows a VM to receive large, coalesced packets, cutting down per-packet processing. IPsec Task Offload means a VM performs really well when running IPsec (a CPU eater). There’s a call to action for NIC and server vendors to support these features.
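For the IPsec Task Offload piece, the per-vNIC knob is the number of security associations the NIC may offload; a sketch, assuming VM1:

# Allow up to 512 IPsec security associations to be offloaded to hardware for VM1
Set-VMNetworkAdapter -VMName VM1 -IPsecOffloadMaximumSecurityAssociation 512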

Extensibility:
The idea here is that partners can develop those specialised features that MSFT cannot do.

Partners can extend the Hyper-V extensible switch with their own features. There’s a set of APIs for them to use. Switch vendors should extend to provide unified management of physical and virtual switches.
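Extensions are listed and toggled per virtual switch; a sketch, assuming a switch named “External” (the extension named here is one of the in-box ones; a vendor extension would appear the same way):

# List the extensions bound to the switch, then enable one by its display name
Get-VMSwitchExtension -VMSwitchName "External"
Enable-VMSwitchExtension -VMSwitchName "External" -Name "Microsoft Windows Filtering Platform"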

Manageability:
Features without management are useless. Windows Server 8 is designed to manage large clouds. Metering allows chargeback, e.g. on network usage. Metrics are stored with the VM and persist after a VM move or migration.
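A sketch of the metering flow, assuming a VM named VM1:

# Start collecting metrics for VM1, then read them back later for chargeback
Enable-VMResourceMetering -VMName VM1
Get-VM -Name VM1 | Measure-VM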

PowerShell for Hyper-V. Unified tracing for network troubleshooting: trace packets from the VM, to the switch, through the vendor extensions, and onto the network. Port Mirroring: a standard switch feature that redirects switch traffic to another port for analysis.
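Port mirroring is configured per vNIC; a minimal sketch, assuming VM1 is being monitored and a hypothetical MonitorVM runs the capture tool on the same switch:

# Copy VM1's traffic to the monitoring VM's vNIC
Set-VMNetworkAdapter -VMName VM1 -PortMirroring Source
Set-VMNetworkAdapter -VMName MonitorVM -PortMirroring Destination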

And this is where I need to wrap up … the session is about to end anyway.

2 thoughts on “A Deep Dive Into Hyper-V Networking”

  1. What roles need to be added to Windows 8 to accomplish live migration between two like servers? Is SR-IOV required to accomplish live migration between two servers? Can the two servers be on a non-domain network and accomplish live migration?

    1. Jake, all you need is Windows Server 8, Hyper-V enabled, domain joined, and a 1 GbE connection. Nothing beyond that to get it to work. What’s best practice? I don’t know that -yet-.
