Having some spare time on my hands, I decided to investigate the performance differences between presenting local storage as a Raw Device Mapping (RDM) and as a VMFS-formatted datastore on my testbed server at home. The server and disk specifications are as follows:
- HP Microserver N36L
- 8GB RAM
- 1 x 250GB HDD, 2 x 1TB HDD in use within VMFS5 formatted datastores
- An HP 500GB 7200RPM SATA disk mounted in slot 4 of the Microserver (the disk under test)
- WHS 2011 as the only virtual machine installed
- vSphere Hypervisor 5.1 (free)
Creating the RDM passthrough for unsupported hardware/scenarios required a bit of a hack using the command line. Here’s how I did it, using this article from David Warburton as a guide (thanks, David!):
- SSH into your ESXi box and log in with your root username and password. If the SSH connection is refused, you need to enable SSH in the Security Profile of your ESXi host and start the service (using the vSphere client).
- Now type: `cd /dev/disks`
- Now type `ls -l` to get a list of the drives (see figure 1).
- Look at the raw device IDs and copy the ID you want to map (in my case the device ID was `t10.ATA_____MB0500EBZQA_____________________________Z1M04166____________`).
- Now type `cd /vmfs/volumes` and press Enter.
- Here you will see the local datastores already presented to ESXi. I’m not sure if this is gospel, but if you don’t have a datastore you won’t be able to create the RDM passthrough mapping file, as it needs to be homed on an existing datastore.
- Make a note of the datastore you want the mapping file to be homed on. In my example I have used the local datastore called “250GB Disk”.
- Now type the following command: `vmkfstools -z /vmfs/devices/disks/<raw device ID copied earlier> <location to store VMDK>/<RDM name>.vmdk` (where `<RDM name>.vmdk` is the name of the mapping file we are creating). You can see the complete command in the screenshot and in the example below:
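In case the screenshot is hard to read, here is the whole sequence as plain text. The device ID is the one copied from my `ls -l` output and “250GB Disk” is my local datastore; the mapping file name `WHS2011_RDM.vmdk` is just an example name I’ve picked, so substitute your own values throughout:

```
# From an SSH session on the ESXi host:

# 1. List the raw device IDs and copy the one you want to map
cd /dev/disks
ls -l

# 2. Create the RDM mapping file on an existing datastore
#    (quotes needed here because my datastore name contains a space)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____MB0500EBZQA_____________________________Z1M04166____________ "/vmfs/volumes/250GB Disk/WHS2011_RDM.vmdk"

# 3. Optional sanity check: query the new mapping file
vmkfstools -q "/vmfs/volumes/250GB Disk/WHS2011_RDM.vmdk"
```

If the mapping was created correctly, `vmkfstools -q` should report the file as a passthrough raw device mapping.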
The quotes were needed around the /vmfs path because my datastore name contains a space. They are unnecessary if your datastore name has no spaces in it!
- You can now add your RDM-mapped drive to a virtual machine.
- Open up the vSphere client and right-click on the virtual machine. Click “Edit Settings”.
- Click “Add”, then “Hard Disk”, then “Next”.
- Click the “Use Existing Hard Disk” radio button and click “Next”.
- Click the “Browse” button and open the datastore your mapping file resides in.
- Select the mapping file and click “Next”.
- From the Virtual Device Node drop-down, ensure that your mapping file is attached to a different SCSI controller from the one used by your existing virtual disks. In my example I have used SCSI (1:0).
- Your added RDM should look something like this (Mapped Raw LUN):
- You can now initialise the disk within your OS and use it (a command-line sketch follows below).
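If you prefer the command line inside the guest to the Disk Management GUI, here is a minimal diskpart sketch for bringing the new disk into service on WHS 2011. The disk number (1) and drive letter (E) are assumptions, so verify them against your own `list disk` output first:

```
rem Run "diskpart" from an elevated command prompt inside the guest,
rem then enter the commands below one at a time.
list disk
select disk 1
online disk
attributes disk clear readonly
create partition primary
format fs=ntfs quick label=RDM
assign letter=E
exit
```

Disk Management (diskmgmt.msc) achieves exactly the same result through the GUI if you would rather not type commands.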
Rudimentary Speed Tests:
I performed two sets of tests: one inside the virtual machine using CrystalDiskMark, and one measuring the network copy speed of a 5.23 GB file from a Windows 7 machine to the WHS 2011 VM (over a full 1 Gbit/s network).
Here is a side-by-side comparison of the CrystalDiskMark results, with RDM on the left and the VMFS-presented datastore on the right.
In terms of network file copying, the speeds were as follows:
File size: 5,232,404 KB, copied from the same Windows 7 machine to a share on the WHS 2011 virtual machine.
RDM – 71 seconds = 73.69 MB/s
VMFS – 107 seconds = 48.90 MB/s
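For reference, the throughput figures are simply the file size divided by the copy time: 5,232,404 KB ÷ 71 s ≈ 73,696 KB/s ≈ 73.69 MB/s for the RDM, and 5,232,404 KB ÷ 107 s ≈ 48,901 KB/s ≈ 48.90 MB/s for VMFS.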
This test shows that disk performance on ESXi using RDM passthrough to local storage is significantly faster than presenting the same disk as a VMFS5 datastore: in the network copy test, RDM came out roughly 50% faster. Remember, the hardware was identical in both tests; the only difference was how the storage was presented through vSphere.
It is a bit of a “hack”, and you need to be aware that a `supported` RDM requires an available, unformatted LUN (SCSI, iSCSI, or FC) to map the .vmdk file to, so I would not recommend this approach for production environments. However, given the speed benefits, I would recommend it for test labs or home scenarios if you don’t have shared storage and don’t mind the tinkering!