Comparing 3 CPU Types in Hyper-V Assessment Hardware Sizing

Measure twice and cut once.

I’m assisting with a very large Hyper-V sizing process at the moment.  It’s a rare one where CPU appears to be the bottleneck instead of RAM.  As such, I’m spending some time comparing the traits and sizing of different CPUs.  Before the real assessment starts, I’ve fired up a small lab just to do a few comparisons between:

  • 2 * AMD Opteron 6180 12 core CPUs
  • 2 * Intel Xeon X5690 6 core CPUs
  • 2 * Intel Xeon E7-4870 10 core CPUs

The positive for AMD is more cores (and therefore more logical processors) at a lower price.  The positive for Intel is Hyper-Threading: two threads of execution per physical core, each appearing to Hyper-V as a logical processor, but that comes at a higher cost.  Who wins?  I’ll let MAP 6.0 decide.
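Before MAP weighs in, here’s a quick back-of-the-envelope sketch (in Python) of the raw logical-processor count each two-socket host presents to Hyper-V.  The core and thread counts are the published specs for these parts; this deliberately says nothing about per-core performance, which is exactly what the assessment below measures.

```python
# Logical processors (LPs) seen by Hyper-V for each candidate two-socket host.
# Core and thread-per-core counts are the published specs for these parts.
cpus = {
    "AMD Opteron 6180":   {"sockets": 2, "cores": 12, "threads_per_core": 1},
    "Intel Xeon X5690":   {"sockets": 2, "cores": 6,  "threads_per_core": 2},
    "Intel Xeon E7-4870": {"sockets": 2, "cores": 10, "threads_per_core": 2},
}

for name, c in cpus.items():
    lps = c["sockets"] * c["cores"] * c["threads_per_core"]
    print(f"{name}: {lps} logical processors per host")
```

Interestingly, the AMD and 6 core Intel hosts both come out at 24 logical processors; the E7-4870 host presents 40.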

I came up with 3 server specifications, each using one of the above processor configurations.  I assessed 4 virtual machines and then ran the MAP 6.0 Server Consolidation Wizard to see how much of the host hardware would be utilised by the VMs.  The results were:

2 * AMD Opteron 6180 12 core CPUs

[Image: MAP 6.0 Server Consolidation Wizard results for the 2 * AMD Opteron 6180 host]

2 * Intel Xeon X5690 6 core CPUs

Not surprisingly, the 12 core AMD CPU beats the Intel 6 core CPU, but the margin is very small.  Those two threads of execution per core give each Intel core more BHP.

[Image: MAP 6.0 Server Consolidation Wizard results for the 2 * Intel Xeon X5690 host]

2 * Intel Xeon E7-4870 10 core CPUs

This is Intel’s latest CPU.  With it, the VMs use 2.25% less of the host CPU than with the AMD 12 core CPU, and 2.36% less than with the Intel 6 core CPU.

[Image: MAP 6.0 Server Consolidation Wizard results for the 2 * Intel Xeon E7-4870 host]
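For the curious, the kind of arithmetic behind a utilisation estimate like this can be sketched in a few lines.  To be clear: the per-VM demand figures and clock speed below are hypothetical placeholders, not numbers from the MAP runs above, and MAP’s real model is considerably more sophisticated.

```python
# Minimal sketch of a consolidation-style CPU utilisation estimate: express
# the summed CPU demand of the candidate VMs as a percentage of total host
# CPU capacity.  All MHz figures here are hypothetical placeholders.

vm_demand_mhz = [1200, 800, 1500, 600]  # measured average demand per VM

def host_cpu_utilisation(sockets, cores_per_socket, clock_mhz, demands):
    """Percentage of raw host CPU capacity consumed by the given VM demands."""
    capacity_mhz = sockets * cores_per_socket * clock_mhz
    return 100.0 * sum(demands) / capacity_mhz

# Example: a 2-socket, 10-core host at 2400 MHz per core (E7-4870-like).
print(f"{host_cpu_utilisation(2, 10, 2400, vm_demand_mhz):.2f}% of host CPU")
```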

I’m wondering if this CPU is going to have the same hardware microcode issues that were associated with Nehalem and Westmere CPUs when running Hyper-V.

Conclusions

I’m not recommending a CPU based on this tiny virtual lab.  What I aimed to illustrate is that the sizing feature of the assessment can be run with different hardware profiles to find the right host specification for your environment.  In my real world example, I’ll gather performance data for a week during what the customer believes will be a busy period, then run the sizing with several different host specifications, use application support statements (from the discovery) to rule out invalid candidates, and maybe even break Hyper-V up into several clusters with different hardware specs.

What you can learn from this post is that you shouldn’t assume anything.  If you must assume, then assume that you are wrong.

And remember, this is a software tool.  It will give us an estimate of physical host utilisation based on what was measured.  It won’t be perfect, but it’s better than the usual “we know your/our requirements”, “here’s the usual spec for this size of site”, or “wet finger in the air” approaches, because it is scientific.  Those approaches are no better than waiting to see whether a rodent sees its own shadow when it comes out of a hole.

Remember to add some spare host capacity (a rough sketch follows the list):

  • Host fault tolerance
  • Future growth & free space for spikes
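To make that concrete, here’s a minimal sketch of turning an aggregate demand figure into a host count with N+1 fault tolerance and some headroom for spikes and growth.  Every input below is a hypothetical placeholder, not customer data.

```python
import math

# Sketch: convert aggregate VM CPU demand into a host count, reserving
# headroom for spikes/growth and adding N+1 fault tolerance.
# All inputs are hypothetical placeholders.

total_vm_demand_mhz = 96_000  # aggregate measured VM CPU demand
host_capacity_mhz = 48_000    # e.g. 2 sockets x 10 cores x 2400 MHz
target_utilisation = 0.70     # keep ~30% free for spikes and growth
spare_hosts = 1               # N+1: survive one host failure

usable_mhz_per_host = host_capacity_mhz * target_utilisation
hosts_required = math.ceil(total_vm_demand_mhz / usable_mhz_per_host) + spare_hosts
print(f"Hosts required: {hosts_required}")  # -> Hosts required: 4
```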
