How to Reclaim Free Block Space from a LUN with VMware vSphere 5.5 and NetApp Cluster Mode
In my lab I have set up VMware vSphere 5.5 along with the NetApp Cluster Mode 8.2 simulator. I have created an iSCSI network and presented a few LUNs to my ESXi host.
When a thin-provisioned LUN fills with data and some or all of that data is later deleted, the space is not actually freed from the LUN or from the back-end storage. The blocks are merely marked as free and ready for overwrite; the size of the datastore or back-end LUN does not shrink. If you add and delete data frequently, a lot of your storage space can be wasted because free space is never released.
Unfortunately, at this point in time there is no automatic way to do this with VMware, block storage and thin provisioning, except by way of scripting (PowerCLI, for example).
In previous versions of vSphere we had to run the command vmkfstools -y 60, which would release up to 60% of the free space. That percentage is customizable (you could use 100%), but you want to make sure ample free space is available, because the reclaim process temporarily consumes the space being reclaimed.
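As a quick sketch of that legacy approach (the datastore name here is hypothetical, and the commands only run from the ESXi shell):

```shell
# Pre-5.5 reclaim: vmkfstools -y must be run from inside the datastore,
# and it temporarily fills the space being reclaimed with a balloon file.
cd /vmfs/volumes/my_datastore   # hypothetical datastore name
vmkfstools -y 60                # reclaim up to 60% of the free space
```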
A prerequisite for reclaiming LUN space is that your ESXi hosts and back-end storage must support VAAI, and it must be enabled.
Within vSphere 5.5 we no longer use vmkfstools; we now use esxcli, as you will see in the tutorial/demo below.
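Before starting, you can confirm the VAAI primitives are enabled at the host level. Each of these advanced options should report an Int Value of 1 (enabled):

```shell
# Check the host-level VAAI settings (Int Value of 1 means enabled).
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
```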
At the end of the tutorial I have provided links to my reference material, along with additional information for further reading.
Adding Data to Our iSCSI LUN and Verifying VAAI Is Enabled
1. In my NetApp Cluster Mode storage I have provisioned two LUNs named vmware_db2 and vmware_vswap. These two LUNs are presented to my VMware ESXi host. The LUN we will be working on is vmware_vswap. As you can see, the total size of this LUN is 5.25 GB and the available size is 5.22 GB.
Deleting Files and Reclaiming LUN Space
10. I will now establish an SSH session to my VMware ESXi host. Once authenticated, I issue the command:
esxcli storage vmfs extent list
This command lets me see the naa device name backing my datastore.
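If you prefer to capture the device name in a variable for the later commands, something like this works (datastore name and column position assumed from my lab output):

```shell
# The Device Name is the 4th column of the extent list output for a
# single-extent datastore whose volume name contains no spaces.
DEV=$(esxcli storage vmfs extent list | awk '/vmware_vswap/ {print $4}')
echo "$DEV"
```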

esxcli storage core device list -d naa.x.x.x.x.x
The other item we are looking at here is the VAAI Status field in the output. We want to make sure it says supported.

esxcli storage core device vaai status get -d naa.x.x.x.x.x
If we see supported next to Delete Status we are happy. If we see unsupported, it means either that the LUN is thick provisioned or that the back-end storage does not support this primitive.
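For reference, the output of the VAAI status command looks roughly like this on my thin LUN (the device name is illustrative, and the plugin name will vary by array):

```shell
esxcli storage core device vaai status get -d naa.x.x.x.x.x
# naa.x.x.x.x.x
#    VAAI Plugin Name: VMW_VAAIP_NETAPP
#    ATS Status: supported
#    Clone Status: supported
#    Zero Status: supported
#    Delete Status: supported
```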
NetApp update: if you don't see Delete Status as supported, check that the LUN has -space-allocation enabled and that the LUN is thin provisioned (-space-reserve disabled).
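On the NetApp side, those settings can be checked and changed from the cluster shell. The vserver and path below are placeholders for your own environment, and changing -space-allocation may require taking the LUN offline first:

```shell
# Show the current settings (placeholder vserver and LUN path).
lun show -vserver svm1 -path /vol/vswap_vol/vmware_vswap -fields space-allocation,space-reserve
# Enable SCSI UNMAP support and thin provisioning on the LUN.
lun modify -vserver svm1 -path /vol/vswap_vol/vmware_vswap -space-allocation enabled
lun modify -vserver svm1 -path /vol/vswap_vol/vmware_vswap -space-reserve disabled
```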
esxcli storage vmfs unmap -l vmware_vswap
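The unmap command also accepts the datastore's VMFS UUID instead of its label, plus an optional count of blocks to unmap per iteration (the default in 5.5 is 200):

```shell
# Reclaim by label, unmapping 200 VMFS blocks per iteration (the default).
esxcli storage vmfs unmap -l vmware_vswap -n 200
# Alternatively, reclaim by UUID (shown in the extent list output).
# esxcli storage vmfs unmap -u <vmfs-uuid>
```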
To watch the reclaim in real time, I run esxtop on the host and press:
u (for the disk device view)
f (for fields)
a (show Device = Device Name)
o (show VAAISTATS)
In the image below I can see 1100 hits in the DELETE column while the esxcli storage vmfs unmap -l vmware_vswap command was running.

Reference Sites:
I would like to thank the following sites for the information that enabled me to put this tutorial together:
Nimbleconnect – Tips and Tricks for using ESXTOP
Boche.net – vSphere 5.5 UNMAP Deep Dive
Boche.net – Starting Thin and Staying Thin with VAAI UNMAP
blogs.vmware – What’s new in vSphere 5.5 Storage
blogs.vmware – VAAI Thin Provisioning Block Reclaim/UNMAP In Action
Derek Seaman's Blog – VMworld 2013: vSphere 5.5 Space Reclamation
Disclaimer:
All the tutorials on this site are performed in a lab environment to simulate real-world production scenarios. While every effort is made to provide the most accurate steps to date, we take no responsibility if you implement any of these steps in a production environment.