ESXi 5.1: Using Raw Device Mappings (RDM) on an HP Microserver

Having some spare time on my hands I decided to investigate the performance differences between presenting local storage as a Raw Device Mapping (RDM) and as a VMFS formatted datastore on my testbed server at home.  The server and disk specifications are as follows:

  • HP Microserver N36L
  • 8GB RAM
  • 1 x 250GB HDD and 2 x 1TB HDDs in use within VMFS5-formatted datastores
  • An HP 500GB 7200RPM SATA disk mounted in slot 4 of the Microserver
  • WHS 2011 as the only virtual machine installed
  • vSphere Hypervisor 5.1 (free)

Creating the RDM passthrough for unsupported hardware/scenarios required a bit of a hack using the command line.  Here's how I did it, using this article from David Warburton as a guide (thanks, David!).

Step 1:

  • SSH into your ESXi box and log in with your root username and password.  If SSH is refused, you need to allow it in the Security Profile of your ESXi host and start the SSH service (using the vSphere Client).
  • Now type: cd /dev/disks
  • Now type ls -l to get a list of the drives (see Figure 1)

Figure 1: /dev/disks ls -l output
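
If the listing is long, you can narrow it down from the ESXi shell.  The grep filter below is just a convenience (not a required step) that hides the duplicate vml.* symlink aliases, leaving the t10.*/naa.* device names that we need:

    cd /dev/disks
    ls -l | grep -v vml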

  • Look at the raw device IDs and copy the ID you want to map (in my case the device ID was t10.ATA_____MB0500EBZQA_____________________________Z1M04166____________).
  • Now type: cd /vmfs/volumes and press enter.
  • Here you will see the local datastores already presented to ESXi.  I'm not sure if this is gospel, but if you don't have a datastore you won't be able to create the RDM pass-through mapping file, as it needs to be homed on an existing datastore.
  • Make a note of the datastore you want that mapping file to be homed in.  In my example I have used the local datastore called “250GB Disk”.
  • Now type the following command: vmkfstools -z /vmfs/devices/disks/<name of RAW device from Step 1> <location to store VMDK>/<RDM name>.vmdk (where <RDM name>.vmdk is the name of the mapping file we are creating).  You can see the complete command in the screenshot below:

vmkfstools complete command

The quotes were needed around the destination path because my datastore name has a space in it; they are unnecessary if your datastore name does not contain any spaces.
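
For reference, here is roughly what the complete command looks like on my setup, using the device ID from Figure 1 and the "250GB Disk" datastore; the mapping-file name RDM_500GB.vmdk is just an example name, so substitute your own:

    vmkfstools -z /vmfs/devices/disks/t10.ATA_____MB0500EBZQA_____________________________Z1M04166____________ "/vmfs/volumes/250GB Disk/RDM_500GB.vmdk"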

  • You can now add your RDM-mapped drive to a virtual machine.
  • Open up the vSphere Client and right-click on the virtual machine.  Click "Edit Settings".
  • Click "Add", then "Hard Disk", then "Next".
  • Click the "Use Existing Hard Disk" radio button and click "Next".

Use Existing Disk

  • Click the “Browse” button and open the datastore your mapping file resides in.
  • Select the mapping file and click “Next”
  • Under Virtual Device Node, ensure that your mapping file is attached to a different SCSI controller from the one used by your existing virtual disks.  In my example I have used SCSI 1:0.

Virtual Device Node – Selecting an alternate SCSI controller for the RDM

  • Your added RDM should look something like this (Mapped Raw LUN):
  • You can now initialise the disk within your OS and use it.
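
Optionally, you can sanity-check the mapping file from the ESXi shell before powering the VM on.  vmkfstools -q queries an RDM mapping file; the path below assumes the example mapping-file name used earlier:

    vmkfstools -q "/vmfs/volumes/250GB Disk/RDM_500GB.vmdk"

The output should report the mapping type (a passthrough raw device mapping in this case) and the identifier of the physical disk it points at, confirming the mapping file is aimed at the right device.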

Rudimentary Speed Tests:

I performed two sets of tests: one within the virtual machine using CrystalDiskMark, and one which measured the network copy speed of a 5.23 GB file from a Windows 7 machine to the WHS 2011 VM (over a full 1 Gbit/s network).

Here is a side-by-side comparison of the CrystalDiskMark results, with RDM on the left and the VMFS-presented datastore on the right.

In terms of network file copying the speeds were as follows:

File size:  5,232,404 KB copied from the same Windows 7 machine to a share on the same virtual machine.

RDM – 71 seconds = 73.69 MB/s

VMFS – 107 seconds = 48.90 MB/s
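
For clarity, those MB/s figures are simply the file size divided by the elapsed time, treating 1 MB as 1,000 KB:

    RDM:  5,232,404 KB / 71 s  ≈ 73,696 KB/s ≈ 73.69 MB/s
    VMFS: 5,232,404 KB / 107 s ≈ 48,901 KB/s ≈ 48.90 MB/s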

Conclusions

On this hardware, the test clearly shows that disk performance on ESXi using RDM passthrough to local storage is significantly faster than mapping the same storage as a VMFS5 datastore.  Remember, everything hardware-wise was the same in both tests; the only difference was how the storage was presented through vSphere.

It is a bit of a hack, and you need to be aware that for a "supported" RDM you need an available, unformatted LUN (SCSI, iSCSI or FC) to map the .vmdk file to, so I would not recommend this approach for production environments.  However, given the speed benefits, I'd recommend it for test labs or home scenarios if you don't have shared storage and don't mind the tinkering!


20 Responses to ESXi 5.1: Using Raw Device Mappings (RDM) on an HP Microserver

  1. cranfan says:

    3TB physical RDM doesn’t work in Windows Server 2012. 2TB and under works fine.
    Seems 3TB RDM works in ZFS setup, hope there’s some workaround for Windows setup.

  2. tino says:

    Might it be related to the disk layout? MBR supports up to 2TB; you need GPT for disks over 2TB.

  3. MrMix says:

    Hello,
    I have an ML110 G5 with ESXi 5.1. I wanted to add a 2 TB disk and I followed the instructions above. Everything was OK. I added the disk to a freshly created VM (XP Pro).
    It takes 30 seconds to boot without the RDM and almost 10 minutes with it!
    Any help discovering what the problem is?
    Thank you

  4. seer says:

    Any chance you could do a similar test but with the controller passed through? I've been trying to work out if I am better off passing my whole controller through to my VM or using RDM. I have multiple controllers, so that is not an issue.

  5. alex says:

    Would this allow me to import an existing ZFS pool into FreeNAS/NexentaStor etc. running as a VM in ESXi?

  6. ben says:

    Pretty good information.
    I have the N40L and I will make some tests as soon as possible.

    I have just one question: do you know if it's possible to pass through 4 disks using RDM to one VM such as OpenMediaVault or FreeNAS, build a software RAID, and provision NFS storage for all the other VMs on the ESXi host?

  7. Dylan says:

    With your other disks, you have diskID and diskID:1.

    Do you have to do these steps for each of them, or are they still considered one drive?

    I have tried using only one of them but I get errors:

    ** Failed to reopen virtual disk: Failed to lock the file (16392).**

  8. Mr. X says:

    OK, so how do I square your results with the following VMware blog?

    https://blogs.vmware.com/vsphere/2013/01/vsphere-5-1-vmdk-versus-rdm.html, which seems to say that RDM’s speed advantages are negligible?

    • tino says:

      My results are my results on my entry-level test rig. Nothing more, nothing less. However, I wouldn't expect VMware themselves to admit that RDMs are faster, now would you? ;-)

  9. Jose Cardoso says:

    That’s assuming your test results are valid. In my experience I see negligible difference between RDM and a VMFS 5 datastore on the same hardware (N40L Microserver + WD Red 3TB drives + ESXi 5.1 + CentOS 6.3 VM).

    What virtual drive controller are you using on your Windows VM? You make no mention of this in your article. If you aren’t using VMware’s Paravirtual SCSI drive controller then you aren’t getting the full native speed passed through to your VM.

    http://kb.vmware.com/kb/1010398

  10. David Jones says:

    I've been reading this article with interest. I am thinking about setting up an all-in-one ESX server with FreeNAS installed. Does enabling RDM serve as an alternative to having a VT-d motherboard? The trouble I have is that I have two machines I could easily use for this task, both with VT-d (or IOMMU in the case of my AMD machine), but neither of my motherboards works. In order to get RDM working properly with a FreeNAS VM and ZFS, I guess I need VT-d in addition to the above?

    • Karl says:

      As far as I know, you don't need VT-d to achieve RDM. I just purchased an ASROCK PRO4 motherboard for Intel VT-d and didn't want to pass through the on-board controller to a VM. RDM saved the day. ;-)

  11. Karl says:

    Great article!! Works a treat. Just added 1TB RDM to my storage server 2012 and now about to add 3 more! Nice one!

    One thing I would add though is after this line

    Now type: cd /vmfs/volumes and press enter.

    I reckon you should again state 'Now type ls -l to get a list of the drives'; for noobs like me it wasn't obvious, but I got there in the end.

  12. Eugene says:

    To map the disk, the command used is:
    vmkfstools -z /vmfs/devices/disks/<device> <datastore path>/<RDM name>.vmdk
    How can I unmount the disk if I want to remove it or move it to another VM?
    There is a "detach" option, but it has a slightly different effect.

  13. andrew says:

    Good read, worked well thanks

  14. aleks says:

    Do we need to use the -z option here? What about the -r option?

  15. iggy says:

    I have a 2 TB 2.5″ SATA (7200 rpm) disk as an RDM on my host, using the LSI SAS controller, with Win7 x64 as the guest OS, and my speed is no more than 22 MB/s when copying large files (backup snapshots) from the RDM to another VM on the host. Is this a normal speed or am I doing something incorrectly? (I'd love 70 MB/s haha)

  16. Pingback: ESXi 5.5: Using Raw Device Mappings RDM on a Single ESXi host with unsupported hardware/scenario | First Law of Troubleshooting
