2012.08.10

The Performance Tuning Guidelines for Windows Server 2012 document is available and I’m reviewing and commenting on notable text in it.

These are very advanced controls and should not be touched without reasonable consideration, planning, and understanding.  Don't go assuming anything, or playing with this stuff in a production environment.  Don't blame the settings if it all goes wrong; go look in a mirror instead.  OK, that's the formalities out of the way.

Microsoft says that:

The virtualization stack balances storage I/O streams from different virtual machines so that each virtual machine has similar I/O response times when the system’s I/O bandwidth is saturated

When they talk about "bandwidth" they mean the capacity to push data through, e.g. networking or storage throughput.

We can manipulate the balance of that throughput in congestion scenarios, giving contending virtual machines a better shot at getting their network or storage I/O through the stack.  In other words, some VMs are hogging network/storage I/O and you want to give everyone a slice so they can work too.

The registry controls for storage can be found at HKLM\System\CurrentControlSet\Services\StorVsp.  The registry controls for networking can be found at HKLM\System\CurrentControlSet\Services\VMSwitch.

There are three REG_DWORD registry values to control I/O balancing:

  • IOBalance_Enabled: Is the balancer enabled or disabled?  Enabled = 1 (or any non-zero value); disabled = 0.  It is enabled by default for storage I/O balancing.  It is disabled by default for network I/O balancing because the network function has a significant CPU overhead.
  • IOBalance_KeepHwBusyLatencyTarget_Microseconds: This setting is a latency target.  The default is 83 ms for storage and 2 ms for networking (note that the registry value itself is expressed in microseconds, so 83 ms = 83,000).  If VMs hit this level of latency then the balancer kicks in to give all VMs a fairer slice or quantum.  If 83 ms for storage or 2 ms for networking is too high a latency value for you to start balancing, then you can reduce the settings.  Be careful; some storage is designed to trade latency for massive throughput.  And reducing the value too much can reduce throughput while increasing balance between VMs: putting through fewer large blocks is faster than swapping between lots of small blocks.
  • IOBalance_AllowedPercentOverheadDueToFlowSwitching: This controls how much work the balancer issues from a virtual machine before switching to another virtual machine.  This setting is primarily for storage, where finely interleaving I/Os from different virtual machines can increase the number of disk seeks.  The default is 8 percent for both storage and networking.
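To make those three knobs concrete, here's a minimal sketch in Python using the standard winreg module, run on the Hyper-V host itself.  The key and value names come straight from the guidelines; the function names and the commented-out write example are just my illustration, not anything from Microsoft.  Note that the individual values may not exist until you create them, and writing under HKLM needs an elevated (admin) process.

import winreg

# Keys from the Performance Tuning Guidelines (relative to HKLM).
BALANCER_KEYS = {
    "storage (StorVsp)": r"System\CurrentControlSet\Services\StorVsp",
    "network (VMSwitch)": r"System\CurrentControlSet\Services\VMSwitch",
}

# The three documented REG_DWORD values.
VALUE_NAMES = (
    "IOBalance_Enabled",
    "IOBalance_KeepHwBusyLatencyTarget_Microseconds",
    "IOBalance_AllowedPercentOverheadDueToFlowSwitching",
)

def read_balancer_settings(subkey):
    """Return {value_name: data or None}; None means the value is not
    set explicitly and the built-in default applies."""
    settings = {}
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
        for name in VALUE_NAMES:
            try:
                data, _type = winreg.QueryValueEx(key, name)
                settings[name] = data
            except FileNotFoundError:
                settings[name] = None
    return settings

def set_balancer_value(subkey, name, data):
    """Write one REG_DWORD value; requires an elevated process."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, data)

if __name__ == "__main__":
    for label, subkey in BALANCER_KEYS.items():
        print(label, read_balancer_settings(subkey))
    # Example only: enable the network I/O balancer (remember the CPU cost).
    # set_balancer_value(BALANCER_KEYS["network (VMSwitch)"],
    #                    "IOBalance_Enabled", 1)

Returning None for an unset value is deliberate: it lets you tell "running on the default" apart from "someone explicitly set the default value", which matters when you're auditing a host before tuning.  You can do the same one-liner style with reg query / reg add from an elevated prompt if you'd rather not script it.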

Like I said, these are advanced controls.  Don't go screwing around unless you have identified that your channels are congested and need better balancing.  Don't go assuming anything – and certainly don't come a-calling on me if you have, cos I will tell you "I told you so".  And if you are using them, tune them like a racing car: understand, tweak setting 1, test & monitor, repeat until improved, and then move on to setting 2.
