December 14, 2009 - Johan

Large locally attached storage

We all know that ESX supports LUNs / disks of up to 2TB (minus a few kilobytes), but what happens when a larger locally attached disk is connected to a host?

Here is what I noticed when I tried to create a large volume on my own ESX white box with more than 2TB of locally attached storage: it only let me create a volume from the space above the 2TB mark. In my case I got a 48GB volume on a 2048GB LUN. Moreover, this “2TB” volume, which I created with the vmkfstools command-line utility, was useless because I could not create a single VM on it.
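A common explanation for the 2TB ceiling (not stated in this post, so treat it as an assumption) is 32-bit sector addressing with 512-byte sectors. The arithmetic below is a minimal sketch of that limit, and of how a size reported modulo 2TB could produce the odd 48GB volume described above:

```python
# Sketch: the 2TB limit, assuming it stems from a 32-bit LBA
# (sector count) with 512-byte sectors -- an assumption, not
# something confirmed in the post.
SECTOR_SIZE = 512
MAX_SECTORS = 2 ** 32          # largest sector count a 32-bit field can hold

limit_bytes = MAX_SECTORS * SECTOR_SIZE
print(limit_bytes // 2 ** 40)  # -> 2 (TiB)

# If the usable size is taken modulo that limit, a disk of
# 2048GB + 48GB would surface as a ~48GB volume, matching
# the behaviour described above:
disk_gib = 2048 + 48
visible_gib = disk_gib % 2048
print(visible_gib)             # -> 48
```

This also suggests why carving the array into virtual disks under 2TB sidesteps the issue entirely: no single LUN ever exceeds what the 32-bit field can address.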

Ok, here’s my setup: a RAID5 set of 3 x 7.5GB plus a hot spare on an LSI Logic 8408E SAS controller, which gives me a disk of approx. 2048GB.

One would expect ESX to create a 2TB volume and disregard the rest, or to allow you to create an additional volume, but that’s not the case.

How did I work around this strange behaviour? Within the RAID controller application I created a Virtual Drive with a capacity of less than 2TB. I know for sure that the LSI adapters offer this feature, and other adapters may incorporate something similar. Another option was to create a RAID set of less than 2TB, for the same obvious reason. I ended up with 2 Virtual Disks striped across the RAID5 set, each a little over 1TB in size, which eliminated the problem.

These problems hardly exist on shared storage, because there we simply create a new LUN and put a VMFS volume on it, but with today’s large 1 and 2TB disks it becomes much easier to end up with very large locally attached storage.

Virtual Infrastructure ESX / VMware / vSphere /

Comments

  • mervincm says:

I saw something even weirder.

    I have a whitebox esxi 4.0 u1 server
    HP Smart array P400
    4x Hitachi 2TB in raid 10

The P400 was perfectly happy to create a 3.7TB array, but ESXi saw a 512 BYTE partition! If I made arrays under 2TB (RAID 1 of a pair of them, or a single drive) it was perfectly happy. Running out of time before my vendor would exchange them, I gave up and chose a workaround: I replaced the 2TB disks with 1TB disks so that I could have a RAID 10 array of 4 disks and keep it under 2TB.

  • mervincm says:

    tag for comments