Jan 19

Like me, you may have a few high-I/O Windows machines in your environment; for us it's mainly mail servers. After realising the performance benefits of the paravirtualised drivers in vSphere (along with them being supported for boot disks in Windows), I wanted to move a few servers over to these drivers.

On an existing machine you may not have separate boot and data VMDKs, and if you just change the adapter to PVSCSI you can look forward to a blue screen on boot, as Windows won't have loaded the driver it needs. The steps below get the driver in place first:

  1. Ensure you are running ESX 4.0 Update 1, as boot disks on the PVSCSI adapter are not supported in earlier releases (it may work, but if something goes wrong VMware won't want to know!)
  2. Make sure your VMware Tools installation is up to date.
  3. Add a new temporary data disk to your VM, and select the virtual device node as SCSI 1:0 rather than 0:1/2/3 etc.; this will add a new controller just for that disk.
  4. Change the adapter that is created from the default LSI Logic Parallel to ‘paravirtualised’ and boot the VM (if you have a lot of machines to do, there is a script sketch after this list).
  5. Once it's booted and the drivers have been installed, you can shut the VM down, remove the temp disk you created, and change the adapter for your main disk to paravirtualised.
  6. Boot the box and all should be well; if you have any problems you can change back to the LSI driver and try again 🙂
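
If you have more than a handful of machines to prepare, steps 3–4 can be scripted against the vSphere API. Below is a minimal sketch using pyVmomi; the vCenter address, credentials and VM name are placeholders for your own, and because the API lets you add the controller as a paravirtual one directly, the change-the-type step disappears. Treat it as a starting point rather than a drop-in tool:

```python
# Minimal sketch: add a paravirtual SCSI controller and a small temp disk at
# SCSI 1:0 via pyVmomi. Host, credentials and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Lab-only SSL context: skips certificate validation for self-signed certs.
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the VM by name (a simple linear search is fine for a one-off script).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "MAILSERVER01")
view.Destroy()

# New paravirtual controller on bus 1. Negative device keys are temporary
# placeholders that vSphere resolves when the reconfigure is applied.
ctrl = vim.vm.device.ParaVirtualSCSIController(
    key=-101, busNumber=1,
    sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing)
ctrl_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=ctrl)

# 1 GB thin-provisioned temp disk at SCSI 1:0 on the new controller.
disk = vim.vm.device.VirtualDisk(
    key=-102, controllerKey=-101, unitNumber=0, capacityInKB=1024 * 1024,
    backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        diskMode="persistent", thinProvisioned=True))
disk_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)

WaitForTask(vm.ReconfigVM_Task(
    spec=vim.vm.ConfigSpec(deviceChange=[ctrl_spec, disk_spec])))
Disconnect(si)
```

Both device changes go through in a single reconfigure task, so the controller and the disk appear together, just as they would when adding them in one pass through the GUI.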

Enjoy the increased performance!

*Please note these instructions are provided for use at your own risk; while they worked fine for me on quite a few machines, your setup may be different, so take a snapshot and make sure you have a backup!*


Jan 17

Recently, on a few of our more I/O-intensive Linux machines, we noticed dramatic iowait increases, especially during the backup window. While they are quite heavily used boxes, and the backup will always hit the disks hard, it was more than I would have expected!

While the server itself was still perfectly responsive, throughput did drop to around 7–8 MB/sec during these times, and the googling began!

Firstly we looked at our EqualLogic SANs, and ran some Iometer tests with the same average workload on a physical machine with a volume mounted straight from the SAN, and on a VM. The directly connected volume performed as expected, while the VM started to struggle slightly; not miles off what you would expect, but the difference was there! Using esxtop we were able to check the latency on the requests and the size of the disk queue, and all looked normal, with around 2–3 ms latency. While we checked a few other bits and bobs, I came across a few recent posts on the paravirtualised drivers available for ESX, which have only recently become properly supported.

Over the last week we have started using the PVSCSI driver available in vSphere, which has significantly improved performance over the old LSI Logic Parallel adapter. While it isn't yet supported for boot disks on RHEL, it is on Windows Server 2003 and 2008, and I would definitely recommend trying this adapter out for any machines with a more significant I/O demand.
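
For the swap itself (step 5 of the Windows walkthrough above), the same pyVmomi approach works: remove the old LSI Logic Parallel controller, add a paravirtual one on the same bus, and re-point the disks at it, all in one reconfigure. A minimal sketch, reusing the connection and `vm` lookup from the earlier listing, and assuming the VM is powered off with the PVSCSI driver already installed in the guest:

```python
# Minimal sketch: replace an LSI Logic Parallel controller with a paravirtual
# one on a powered-off VM. "vm" is the VirtualMachine object from the earlier
# listing; only do this once the guest has the PVSCSI driver installed!
old = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualLsiLogicController))

# New controller keeps the old bus number; -1 is a temporary placeholder key.
new_ctrl = vim.vm.device.ParaVirtualSCSIController(
    key=-1, busNumber=old.busNumber, sharedBus=old.sharedBus)

changes = [
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
        device=old),
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=new_ctrl),
]

# Re-point every disk that hung off the old controller at the new one.
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk) and dev.controllerKey == old.key:
        dev.controllerKey = new_ctrl.key
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=dev))

WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes)))
```

As with the manual steps, snapshot first; if the guest blue screens, revert and swap the controller back to LSI.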

There is a great tutorial for RHEL at http://southbrain.com/south/tutorials/installing-redhat-enterprise-5.html and I will be adding a Windows how-to shortly!