Created:
6/17/2009 2:43:28 AM

Author:
Przemek Radzikowski


Benchmarking Hyper-V on Windows Server 2008 R2 x64

This article presents relative benchmark results for Windows Server 2008 R2 x64 after the Hyper-V role is installed, and shows how guest virtual machines perform in relation to the host depending on the logical CPU combinations exposed to the virtual machine.


 

 

Introduction

Some time back, when this whole virtualization thing was just starting out, I ran a number of performance benchmarks comparing VMware ESX 2.5 and Virtual Server 2005.  Back then, Microsoft's virtualization effort was in its infancy, but it quickly became apparent that a fight was brewing on the horizon.

With Hyper-V mania in full swing and many projects under our belt, we find ourselves in a situation where we need to advise our clients on what type of configuration to choose in order to maximize virtual machine throughput and per host density.

These benchmarks help us to understand how the Windows Server 2008 R2 x64 Hyper-V role behaves under different logical CPU configurations and what effects these have on performance aspects of the virtual environment.

Why did I decide to run these Hyper-V performance benchmarks

As with most things, curiosity got the better of me.  I hadn't seen anything else of value on the Internet, and I'm always curious to see how the performance of the host machine is affected when a new service, or in this case a hypervisor, is installed.

It's been a long time since Microsoft released its first virtualization platform, Virtual PC; this was quickly followed by Virtual Server 2005 and then Virtual Server 2005 R2.  With each product release Microsoft moved closer to the virtualization mainstream, at a time when the industry was dominated by VMware.

Although VMware's market share has been eroded by newcomers in recent years, then as now, the costs associated with deploying virtualization throughout the enterprise proved prohibitive for most organizations.  Microsoft made the industry take notice when it announced that it would not charge for Virtual Server.  This move alone proved to be the starting point of its massive virtualization momentum.

Almost six years on, Microsoft released Hyper-V; it ships with Windows Server 2008 and Windows Server 2008 R2 x64.  More importantly, it uses a hypervisor-style interface between the host hardware and the virtual machines.  The hypervisor layer was what interested me most; in particular, the fact that when you enable the Hyper-V role within Windows Server 2008, the host OS also issues requests through the hypervisor in order to interact with the physical hardware.  In a way, the host OS acts as a pseudo virtual machine.

This concerned me quite a bit since I tend to run the server version (Windows Server 2008 R2 x64) on my laptop with the Hyper-V role enabled.  Benchmarking this type of a configuration would give me more information on the performance of the hypervisor itself in relation to the bare metal host machine.

In addition to my own curiosity, this is particularly useful knowledge for organizations which install server roles without understanding the full ramifications of their actions: performance degradation, increased memory usage and an increased attack surface.  All of these play a major role in making educated decisions about the deployment of infrastructure systems.

Hardware used for Hyper-V performance benchmark results

For benchmarking purposes it doesn't really matter what hardware is used, because in our case we are only interested in how the virtual machines perform in relation to the host.  However, listing the hardware used in this set of benchmarks may help some readers put a face to the story (so to speak).

System

  • Model: HP ProLiant ML350 G5
  • BIOS: HP D21 (11/02/2008)
  • Bus(es): X-Bus, PCI, PCIe, IMB, USB, i2c/SMBus
  • Multi-Processor (MP) Support: 2 Processor(s)
  • Multi-Processor Advanced PIC (APIC): Yes

Processor

  • 2x Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
  • Speed: 2.33GHz
  • Cores per Processor: 4
  • Threads per Core: 1
  • Type: Quad-Core
  • Integrated Data Cache: 4x 32kB, Synchronous, Write-Thru, 8-way, 64 byte line size
  • L2 On-board Cache: 2x 6MB, ECC, Synchronous, ATC, 24-way, 64 byte line size, 2 threads sharing

Memory

  • 11 GB DDR2

Storage Devices

  • HP LOGICAL VOLUME 146.8GB (RAID, 15000rpm, NCQ) 137GB (C:) (I:)
  • HP LOGICAL VOLUME 146.8GB (RAID, 15000rpm, NCQ) 137GB (V:)
  • HP Smart Array E200i Controller

Software used for Hyper-V performance benchmark results

Although there are many benchmarking packages out there which stress the system to a high degree, I was only interested in benchmarks that I would run on any native hardware to get a general picture.  After all, I’m going to be running applications which are supposed to operate in a Hyper-V virtualized environment just as they would on the bare hardware.

I was after a benchmarking product which would allow me to benchmark most of the important categories with an easy export mechanism such as XML.  Due to its simplicity and variety of tests, I decided to use SiSoftware Sandra 2009.

Understandably, some of you may think these results are skewed because the benchmarking software is designed for physical servers, and as such does not understand a virtualized environment or any special virtualization or Hyper-V considerations.  The truth, however, is that as long as we compare benchmarks within the same category - for instance, virtual machine against virtual machine - and only change the physical or logical characteristics of the configuration, the results remain valid.  And since we are only interested in a relative rather than a quantitative comparison, they are quite valid indeed.

What was benchmarked

The following table lists all the metrics which were collected from the host and virtual machines during the course of the tests.

Processor Arithmetic

  • Aggregate Arithmetic Performance - GOPS (Giga Operations Per Second)
  • Dhrystone ALU - GIPS (Giga Instructions Per Second)
  • Whetstone iSSE3 - GFLOPS (Giga Floating Point Operations Per Second)

Processor Multi-Media

  • Aggregate Multi-Media - MPixel/s (Mega Pixels Per Second)
  • Multi-Media Int x16 iSSE4.1 - MPixel/s
  • Multi-Media Float x8 iSSE2 - MPixel/s
  • Multi-Media Double x4 iSSE2 - MPixel/s

Cryptography

  • Cryptographic Bandwidth - MB/s (Megabytes Per Second)
  • AES256 CPU Cryptographic Bandwidth - MB/s
  • SHA256 CPU Hashing Bandwidth - MB/s
  • 4MB Blocks AES256 Encryption - MB/s
  • 4MB Blocks AES256 Decryption - MB/s
  • 4MB Blocks SHA256 Hashing - MB/s

Memory Bandwidth

  • Aggregate Memory Performance - GB/s (Gigabytes Per Second)
  • Int Buff'd iSSE2 Memory Bandwidth - GB/s
  • Float Buff'd iSSE2 Memory Bandwidth - GB/s

Memory Latency

  • Random - ns (Nanoseconds)
  • Linear - ns (Nanoseconds)

Cache and Memory

  • Cache/Memory Bandwidth - GB/s (Gigabytes Per Second)
  • Integrated Data Cache - GB/s
  • L2 On-board Cache - GB/s

File System

  • Drive Index - MB/s (Megabytes Per Second)
  • Buffered Read - MB/s
  • Sequential Read - MB/s
  • Random Read - MB/s
  • Buffered Write - MB/s
  • Sequential Write - MB/s
  • Random Write - MB/s
  • Random Access Time - ms (Milliseconds)

Understanding the Host and Virtual Machine Configurations

To properly test the various server and virtual machine configurations and give the results some meaning, I had to construct a system to identify each benchmark iteration by its hardware configuration.  The following table outlines the possible combination values for the tests.

 

Platform

  • BM - Bare Metal
  • BH - Bare Metal + Hyper-V Role Installed
  • HV - Hyper-V Virtual Machine
  • HI - Hyper-V Virtual Machine Without Integration Components
  • VS - Virtual Server 2005 R2 SP1 Virtual Machine

Processor Configuration

  • P1 - 1 Physical CPU
  • P2 - 2 Physical CPUs
  • L1 - 1 Logical CPU
  • L2 - 2 Logical CPUs
  • L4 - 4 Logical CPUs

As an example, a machine configuration named HV-P2-L1 refers to a Hyper-V Virtual Machine with Integration Components installed, running on a server with two Physical CPUs, and with one Logical CPU presented to the Virtual Machine.
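
This naming scheme is easy to decode programmatically. The sketch below is my own illustration of the convention, not part of the original test harness:

```python
# Decode a benchmark configuration name such as "HV-P2-L1".
# The platform keys mirror the naming table above; the parser is illustrative.

PLATFORMS = {
    "BM": "Bare Metal",
    "BH": "Bare Metal + Hyper-V Role Installed",
    "HV": "Hyper-V Virtual Machine",
    "HI": "Hyper-V Virtual Machine Without Integration Components",
    "VS": "Virtual Server 2005 R2 SP1 Virtual Machine",
}

def parse_config(name):
    """Split e.g. 'HV-P2-L1' into platform, physical and logical CPU counts."""
    platform, physical, logical = name.split("-")
    return {
        "platform": PLATFORMS[platform],
        "physical_cpus": int(physical[1:]),  # "P2" -> 2 physical CPUs
        "logical_cpus": int(logical[1:]),    # "L1" -> 1 logical CPU
    }

print(parse_config("HV-P2-L1"))
```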

Hyper-V Performance Benchmark Results

So now to the good part, the results of this benchmarking exercise.  To recap, we are interested in finding out how the performance of Windows Server 2008 R2 x64 changes under various operating conditions.  In particular, we are interested in the relative performance change on the host when we install the Hyper-V role and then how these results compare to the performance of virtual machines operating with different logical processor configurations.

Processor Arithmetic

First up is Processor Arithmetic Performance on Hyper-V, and judging by the sizeable drop in processor throughput after we installed the Hyper-V role, it's something to consider during server deployments.  In short, if you don't require Hyper-V, or you aren't going to run any virtual machines on the server, don't install the Hyper-V role.

Hyper-V Processor Arithmetic

[Figure 1 - Processor Arithmetic Performance - Higher values are better]

Keep in mind that once you install the Hyper-V role, the host Operating System also issues calls to the underlying hardware via the hypervisor; in effect it acts like a pseudo virtual machine itself.

An interesting point to notice, and this is a common theme throughout the remainder of the benchmarks, is that HV-P2-L4 yields the best performance of all the tested virtual machine combinations.  Still, a single virtual machine on the host runs at almost 50% of the native bare-metal host's performance.

Another interesting fact is that HV-P2-L1, HI-P2-L1 and VS-P2-L1 produce near-identical performance results, meaning that with a single logical processor exposed to the virtual machine we aren't seeing any increase in processing ability, even when the Hyper-V Integration Components are installed in the VM.
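
For readers unfamiliar with Dhrystone-style ALU tests, the idea is simply to time a tight integer-arithmetic loop and report operations per second. A toy Python version follows; interpreter overhead means the absolute numbers are nowhere near native GIPS, but the relative-comparison principle is the same one Sandra applies:

```python
import time

def integer_ops_per_second(iterations=5_000_000):
    """Time a tight integer-arithmetic loop and return iterations/second.

    A crude, Dhrystone-flavoured illustration only: in CPython the
    interpreter overhead dominates, so treat the result as relative.
    """
    acc = 0
    start = time.perf_counter()
    for i in range(iterations):
        acc = (acc + i) ^ (i << 1)  # add, shift, xor per iteration
    elapsed = time.perf_counter() - start
    return iterations / elapsed

print(f"{integer_ops_per_second() / 1e6:.1f} M iterations/second")
```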

Processor Multi-Media Performance

The Multi-Media performance seems to follow the pattern demonstrated in the Arithmetic Performance section above.  Again, increasing the virtual CPU count improved the multi-media performance.

Hyper-V Processor Multi-Media

[Figure 2 - Processor Multi-Media Performance - Higher values are better]

Cryptography Performance

Modern computers require encryption and decryption functionality more than ever before.  Think of every packet you send to a web server over SSL, or of an encrypting file system.  Just about every modern transaction requires some form of cryptography - for these reasons I've included the cryptography benchmarks.

Hyper-V Cryptography

[Figure 3 - Cryptography - Higher values are better]
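
The hashing side of these tests can be approximated with nothing more than the standard library. The sketch below is my own rough analogue of the "4MB Blocks SHA256 Hashing" metric, not Sandra's implementation:

```python
import hashlib
import os
import time

def sha256_bandwidth_mb_s(block_mb=4, blocks=16):
    """Hash `blocks` buffers of `block_mb` MB each and return MB/s,
    loosely mirroring the '4MB Blocks SHA256 Hashing' metric."""
    data = os.urandom(block_mb * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(blocks):
        hashlib.sha256(data).digest()
    elapsed = time.perf_counter() - start
    return (block_mb * blocks) / elapsed

print(f"SHA256 hashing: {sha256_bandwidth_mb_s():.0f} MB/s")
```

Running the same script on the bare-metal host and inside a guest gives exactly the kind of relative comparison the article relies on.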

Memory Bandwidth Performance

From the start it is quite clear that memory bandwidth is not an issue when the Hyper-V role is installed.  Both the bare metal and bare metal with Hyper-V configurations exhibit similar results.  Also of interest is the fact that logical CPU count in the virtual machines does not influence memory bandwidth performance significantly.

Comparing a Virtual Machine running on Virtual Server 2005 (VS-P2-L1) with Hyper-V (HV-P2-L1) we can see that performance difference is relatively negligible between the two products.  Certainly HI-P2-L1 suffers without the Integration Components being installed.

Hyper-V Memory Bandwidth

[Figure 4 - Memory Bandwidth - Higher values are better]

Memory Latency Performance

As with Memory Bandwidth, Memory Latency Performance on Hyper-V seems to be an almost insignificant factor in the virtual machine environment; this could be due to the way direct memory access is handled by the host hardware.  Again, HI-P2-L1 suffers from not having the Integration Components installed in the guest OS.

Hyper-V Memory Latency

[Figure 5 - Memory Latency - Lower values are better]
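
The Random versus Linear distinction simply measures how the access pattern affects latency. A crude Python illustration follows; object indirection and interpreter overhead dominate here, so it only hints at the cache effects a native benchmark measures:

```python
import random
import time

def access_time_ns(indices, data):
    """Average per-access time in nanoseconds when walking `data`
    in the order given by `indices`."""
    start = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    elapsed = time.perf_counter() - start
    return elapsed / len(indices) * 1e9

n = 1_000_000
data = list(range(n))

linear = list(range(n))      # sequential walk, cache-friendly
shuffled = linear[:]
random.shuffle(shuffled)     # random walk over the same elements

print(f"linear: {access_time_ns(linear, data):.1f} ns/access")
print(f"random: {access_time_ns(shuffled, data):.1f} ns/access")
```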

Cache & Memory Performance

This performance benchmark displays a similar picture to our other results, with only slight performance degradation after the Hyper-V role was installed.  Curiously, HI-P2-L1 does not seem to suffer from any cache related issues even though the Integration Components have not been installed.

Hyper-V Cache and Memory

[Figure 6 - Cache & Memory - Higher values are better]

File System Performance

One of the problems Virtual Server 2005 exhibited was extremely poor file system performance - this is certainly evident in the Buffered Read results.  Across the board we see similar File System results, with the exception of HI-P2-L1, which suffers from the lack of Integration Components, and VS-P2-L1, which shows its poor legacy performance.  Logical CPU count does not seem to play any significant part in virtual machine file system performance.

Hyper-V File System Performance

[Figure 7 - File System Performance - Higher values are better]
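
The sequential versus random read distinction can be sketched with plain file I/O. Note that the operating system's page cache will inflate both figures well beyond raw disk speed; this is my own illustration, not Sandra's drive-index methodology:

```python
import os
import random
import tempfile
import time

BLOCK = 64 * 1024   # 64 KB reads
COUNT = 256         # 16 MB scratch file

def read_throughput_mb_s(path, offsets, block=BLOCK):
    """Read `block`-byte chunks at each offset and return MB/s."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(block)
    elapsed = time.perf_counter() - start
    return len(offsets) * block / (1024 * 1024) / elapsed

# Build a scratch file to read back.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(BLOCK * COUNT))
    path = tmp.name

sequential = [i * BLOCK for i in range(COUNT)]
shuffled = sequential[:]
random.shuffle(shuffled)

print(f"sequential: {read_throughput_mb_s(path, sequential):.0f} MB/s")
print(f"random:     {read_throughput_mb_s(path, shuffled):.0f} MB/s")
os.unlink(path)
```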

Conclusion

There seems to be a linear relationship between the number of logical processors exposed to the Virtual Machine and its processing performance - quite a significant factor.  This indicates that if we want to squeeze as much processing power as possible out of a VM running on Hyper-V, we should match the number of logical processors in the VM to the number of physical cores on one CPU.

Our test hardware was fitted with two quad-core CPUs, giving us 8 cores; however, Hyper-V at present supports only 4 virtual CPUs per virtual machine.  It would nevertheless be an interesting test to see how well Hyper-V deals with virtual CPU counts matching the total physical core count across multiple CPUs.

To conclude this discussion, we thought it would be interesting to see overall system performance in relation to the bare metal hardware.  We did this by consolidating all the Hyper-V benchmark results into an Overall Benchmark Score representative of the raw results collected.

All results are relative to the bare metal performance of BM-P2-L8 at 100%.  Looking at the second configuration, BH-P2-L8, we can see that system performance dropped to 82% of the bare metal configuration as a result of installing the Hyper-V role.  This is an important point for organizations which install features and/or roles without understanding their impact on overall system performance.

Hyper-V Benchmark

[Figure 8 - Overall Benchmark Score - Higher values are better]

Also of interest is the fact that the Microsoft team has managed to squeeze out a massive performance improvement by moving from the Virtual Server legacy to the new hypervisor-based Hyper-V architecture; comparing VS-P2-L1 with HV-P2-L4 shows a staggering 32% improvement.

Many organizations seem to install the Hyper-V role simply because it's there, or because they expect to adopt virtualization at some point.  If you fall into this category, do consider that overall base system performance will drop by around 18% as a result - it's not a large performance drop, but it is worth knowing about.  So if you don't need the Hyper-V role, or any other role, don't enable it.

One last point to remember: irrespective of how many physical CPUs or cores you have, or how many logical processors are exposed to your virtual machines, always install the Hyper-V Integration Components; without them, the performance of your virtual machines suffers noticeably.
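
The overall-score normalization described above is straightforward to reproduce. The raw numbers below are placeholders chosen to illustrate the reported 82% result, not the article's actual data:

```python
# Normalise each configuration's overall score against bare metal
# (BM-P2-L8 = 100%). The raw scores are illustrative placeholders.

raw_scores = {
    "BM-P2-L8": 1000.0,  # bare metal baseline
    "BH-P2-L8": 820.0,   # Hyper-V role installed on the host
    "HV-P2-L4": 480.0,   # best-performing virtual machine
}

baseline = raw_scores["BM-P2-L8"]
relative = {name: round(score / baseline * 100)
            for name, score in raw_scores.items()}

for name, pct in relative.items():
    print(f"{name}: {pct}%")
```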

References

Benchmarking VMware ESX Server 2.5 vs. Microsoft Virtual Server 2005 Enterprise Edition
/articles/benchmarking-vmware-esx-server-25-vs-microsoft-virtual-server-2005-enterprise-edition.aspx

Benchmarking Microsoft Virtual Server 2005
/articles/benchmarking-microsoft-virtual-server-2005.aspx

Tools for Benchmarking
http://msdn.microsoft.com/en-us/library/cc768530%28BTS.10%29.aspx

Measuring Performance on Hyper-V
http://msdn.microsoft.com/en-us/library/cc768535%28BTS.10%29.aspx  

 

Updated: Monday, October 28, 2013





(c) Capitalhead Pty Ltd