2011.12.20

Any software designer or engineer needs to be aware of how Non-Uniform Memory Access (NUMA) impacts the performance of the services that run on their hardware.  This goes double for virtualisation administrators, and here's why.

NUMA is a hardware design feature that divides the CPUs and memory of a physical server into NUMA nodes.  You get the best performance when a process uses memory and CPU from within the same NUMA node.  When a process requires more memory but its current NUMA node is full, it gets memory from another NUMA node, and that comes at a performance cost to that process, and possibly to every other process on that physical server.
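
If you want to see that layout for yourself, here's a minimal sketch, assuming a Windows host (Server 2008 R2 or later) and Python with ctypes; GetNumaHighestNodeNumber and GetNumaAvailableMemoryNodeEx are documented kernel32 APIs that report the node count and the free memory in each node:

# Sketch: list the NUMA nodes on a Windows machine and the free memory
# in each one. Assumes Windows Server 2008 R2 or later and Python/ctypes.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

highest = wintypes.ULONG(0)
if not kernel32.GetNumaHighestNodeNumber(ctypes.byref(highest)):
    raise ctypes.WinError(ctypes.get_last_error())

for node in range(highest.value + 1):
    avail = ctypes.c_ulonglong(0)
    # Per-node free memory: a node that is nearly empty here is the
    # "full" situation that forces allocations onto a remote node.
    if kernel32.GetNumaAvailableMemoryNodeEx(wintypes.USHORT(node),
                                             ctypes.byref(avail)):
        print("NUMA node %d: %.1f GB available" % (node, avail.value / 2.0 ** 30))

A node reporting very little available memory is exactly the "full" scenario described above, where new allocations spill over to a remote node.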

And that's why virtualisation engineers need to be aware of this.  In Hyper-V we have Dynamic Memory.  VMware has technologies that do a similar thing (though they work differently) to add memory to a VM under the covers.  When there's contention in a NUMA node, a VM will be given additional memory from a different NUMA node, and performance will drop.

When I present this topic, NUMA causes a lot of confusion.  Microsoft gave us a rather badly outdated formula for calculating NUMA node sizes.  NUMA is actually a hardware characteristic (one that all OSes and hypervisors have to deal with), so the only really accurate way to determine NUMA node layouts is via PerfMon or by talking to the hardware vendor.  In the meantime, I stumbled across this fantastic article by Benjamin Athawes.  Benjamin explains NUMA superbly and talks about how to determine what's in your hardware.

In the Hyper-V world, we can disable NUMA node spanning in the host settings.  That's thanks to how Dynamic Memory works: there is no over-commitment that the hypervisor must live up to.  If we see lots of spanning that impacts performance (there's a sketch for detecting it after this list), then we have choices:

  • Reconsider the hardware spec to increase the size of the NUMA nodes: when there is a lot of consistent NUMA node spanning that is required to supply badly needed memory to VMs
  • Disable NUMA node spanning to prevent it: when the NUMA nodes are normally big enough, but VMs occasionally span NUMA nodes and performance suffers
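
One way to spot that spanning from the host is PerfMon's Remote Physical Pages counter.  Here's a rough sketch that samples it for every VM by shelling out to typeperf (a tool that ships with Windows); the counter path is my reading of the Hyper-V VM Vid Partition counter set, so confirm the exact name in PerfMon on your own host first:

# Sketch: take one sample of the Hyper-V "Remote Physical Pages" counter
# for every VM to spot NUMA node spanning. Assumes a Hyper-V host; the
# counter path below is my reading of the counter set - verify it in
# PerfMon before relying on it.
import subprocess

COUNTER = r"\Hyper-V VM Vid Partition(*)\Remote Physical Pages"

# typeperf ships with Windows; -sc 1 takes a single sample in CSV form.
output = subprocess.check_output(
    ["typeperf", COUNTER, "-sc", "1"], universal_newlines=True)
print(output)

If a VM consistently shows non-zero remote pages, it is drawing memory from another NUMA node and is a candidate for one of the two fixes above.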

In Windows 8 Hyper-V, guests will get a new feature where the guest OS can be NUMA aware.  That's really required because we're jumping to 32 vCPU support, which will likely span many NUMA nodes.  With this feature, guest OS processes and memory can be scheduled to take NUMA node placement into account.
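
To make that concrete, here's a hedged sketch of what a NUMA-aware application inside such a guest could do: ask Windows which NUMA node the current thread is running on, and schedule its work accordingly.  GetCurrentProcessorNumberEx and GetNumaProcessorNodeEx are documented kernel32 APIs (Windows 7/Server 2008 R2 and later); the Python/ctypes wrapper is just for illustration:

# Sketch: from inside a NUMA-aware guest, find out which NUMA node the
# current thread is executing on. Assumes Windows 7/Server 2008 R2 or
# later and Python/ctypes.
import ctypes
from ctypes import wintypes

class PROCESSOR_NUMBER(ctypes.Structure):
    # Layout per winnt.h: processor group, processor number, reserved.
    _fields_ = [("Group", wintypes.WORD),
                ("Number", wintypes.BYTE),
                ("Reserved", wintypes.BYTE)]

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

proc = PROCESSOR_NUMBER()
kernel32.GetCurrentProcessorNumberEx(ctypes.byref(proc))

node = wintypes.USHORT(0)
if not kernel32.GetNumaProcessorNodeEx(ctypes.byref(proc), ctypes.byref(node)):
    raise ctypes.WinError(ctypes.get_last_error())

print("Running on processor %d (group %d), NUMA node %d"
      % (proc.Number, proc.Group, node.value))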
