ESXi disk I/O benchmark and improvements

Since the SSD arrived I have had concerns about the disk throughput; performance was just not right. Copying a virtual disk image took a hilarious amount of time due to the ~80kB/sec transfer speed. I spent last night benchmarking, and the results are already better, but still not perfect.
I quickly learned not to use the cp command in the ESXi shell to move vmdk files: it is implemented, but only for compatibility reasons. As soon as I switched to vmkfstools, the speed jumped to ~45MB/sec; a drastic improvement, but 360Mb/sec is still far from the 3Gb/sec promised by the SATA II interface.
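For reference, a clone with vmkfstools looks something like this; the datastore and VM names below are placeholders, not my actual layout:

    # Clone a vmdk with vmkfstools instead of cp;
    # -d thin makes the destination thin-provisioned (optional).
    vmkfstools -i /vmfs/volumes/datastore1/testvm/testvm.vmdk \
               /vmfs/volumes/datastore2/testvm/testvm.vmdk -d thin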
Let the fun begin.
 
The disk setup in the N40L Microserver is the following:
- Western Digital Caviar Black WD1002FAEX-00Z3A0 1TB in caddy #1
- Samsung HD103SI in caddy #2
- Seagate VB0250EAVER in caddy #3
- Samsung 750 EVO SSD in caddy #4
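To double-check how ESXi sees these drives, the shell can enumerate them (device names will of course differ per box):

    # List every storage device ESXi detected, with vendor, model and size.
    esxcli storage core device list
    # And the storage adapters they hang off:
    esxcli storage core adapter list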
 
I set up a 32-bit Windows 10 VM and uninstalled as many apps as I could with PowerShell. The next step was to disable all unnecessary services and visual elements. After installing VMware Tools, I replaced the SCSI controller with a paravirtual one and attached an empty 20GB disk from each datastore. Once all this was done, I only had to download CrystalDiskMark to start the benchmark. Once the results were logged, I went into the BIOS and re-adjusted all settings. After re-running the benchmarks with the drive write cache enabled, I decided not to run any further tests without it, as the difference was huge.
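The PowerShell part was essentially a one-liner plus a service sweep; a rough sketch of what I mean (on a disposable test VM removing everything removable is fine, otherwise filter the list first, and SysMain is just one example of a service to stop):

    # Remove every removable built-in app for the current user.
    Get-AppxPackage | Remove-AppxPackage -ErrorAction SilentlyContinue
    # Disable and stop a service, e.g. Superfetch (service name SysMain):
    Set-Service -Name SysMain -StartupType Disabled
    Stop-Service -Name SysMain -Force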
The last step was to download and install HP's own HPVSA driver and compare the results again.
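Driver installation happens from the ESXi shell; roughly like this, with the bundle filename being whatever version HP ships at the time (the path below is a placeholder):

    # Enter maintenance mode, install the HPVSA offline bundle, then reboot.
    esxcli system maintenanceMode set --enable true
    esxcli software vib install -d /vmfs/volumes/datastore1/hpvsa-offline-bundle.zip
    reboot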
 
Turning the write cache on brought a peak increase of 1200%, while HP's driver added another 2.62%; the maximum transfer speed grew beyond 2Gb/sec. Although I'm already happy with the results, I'll continue my investigation to utilize the promised SATA II speeds.
 
