
Verifying that ATS is Configured on a Datastore in a VMware Support Bundle

Confirming That SCSI-2 Reservations Are Happening

If VMware is not configured per best practice expectations (ATS enabled), then we may see SCSI-2 Reservations in our logs. This is how you can check whether that is happening:

  1. Run the command tgrep -c 'vol.pr_cache inserting registration' core* on Penguin Fuse in the date directory in question for the array. This gives you the number of SCSI-2 reservations created each hour:
    quelyn@i-9000a448:/logs/del-valle.k12.tx.us/dvisd-pure01-ct1/2015_10_27$ tgrep -c 'vol.pr_cache inserting registration' core*
    core.log-2015102700.gz:1875
    core.log-2015102701.gz:1798
    core.log-2015102702.gz:1827
    core.log-2015102703.gz:1817
    core.log-2015102704.gz:1860
    core.log-2015102705.gz:1812
    core.log-2015102706.gz:1818
    core.log-2015102707.gz:2577
    core.log-2015102708.gz:8181
    core.log-2015102709.gz:15131
    core.log-2015102710.gz:21826
    core.log-2015102711.gz:19140
    core.log-2015102712.gz:12044
    core.log-2015102713.gz:13451
    core.log-2015102714.gz:22995
    core.log-2015102715.gz:33136
    core.log-2015102716.gz:18587
    core.log-2015102717.gz:5900
    core.log-2015102718.gz:7324
    core.log-2015102719.gz:2541
    core.log-2015102720.gz:2213
    core.log-2015102721.gz:1850
    core.log-2015102722.gz:1807
    core.log-2015102723.gz:1851
  2. Running the same command without '-c' lets you see which LUNs are being locked (the vol number in each line below). This can yield many lines of output, so you may want to run it per log file:
    quelyn@i-9000a448:/logs/del-valle.k12.tx.us/dvisd-pure01-ct1/2015_10_27$ tgrep 'vol.pr_cache inserting registration' core*
    core.log-2015102700.gz:Oct 26 23:18:51.394 7FB1ACCF3700 I     vol.pr_cache inserting registration, seq 3097137672 vol 69674 i_t 20000025B5AA000E-spc2-0 res_type 15
    core.log-2015102700.gz:Oct 26 23:18:51.431 7FB1AD5F5700 I     vol.pr_cache inserting registration, seq 3097137673 vol 69673 i_t 20000025B5BB000E-spc2-0 res_type 15
    core.log-2015102700.gz:Oct 26 23:18:51.457 7FB1AE378700 I     vol.pr_cache inserting registration, seq 3097137674 vol 69662 i_t 20000025B5AA000E-spc2-0 res_type 15
    core.log-2015102700.gz:Oct 26 23:18:51.483 7FB1AE378700 I     vol.pr_cache inserting registration, seq 3097137675 vol 69661 i_t 20000025B5AA000E-spc2-0 res_type 15
    core.log-2015102700.gz:Oct 26 23:18:51.503 7FB1A73FC700 I     vol.pr_cache inserting registration, seq 3097137676 vol 69676 i_t 20000025B5AA000E-spc2-0 res_type 15
    core.log-2015102700.gz:Oct 26 23:18:51.522 7FB1AF7FD700 I     vol.pr_cache inserting registration, seq 3097137677 vol 69667 i_t 20000025B5BB000E-spc2-0 res_type 15
  3. You can then run the pslun command to determine the volume name:
    quelyn@i-9000a448:/logs/del-valle.k12.tx.us/dvisd-pure01-ct1/2015_10_27$ pslun
    
    Volume Name                              pslun Name
    --------------------------------         ----------
    PURE-STR-LUN01                           pslun69648
    PURE-STR-LUN02                           pslun69660
    PURE-STR-LUN03                           pslun69661
    PURE-STR-LUN04                           pslun69662
    PURE-STR-LUN05                           pslun69664
    PURE-STR-LUN06                           pslun69665
    PURE-STR-LUN07                           pslun69666
    PURE-STR-LUN08                           pslun69667
    PURE-STR-LUN09                           pslun69668
    PUR-STR-LUN10                            pslun69669
    PUR-STR-LUN11                            pslun69670
    PURE-STR-LUN12                           pslun69671
    PURE-STR-LUN13                           pslun69672
    PURE-STR-LUN14                           pslun69673
    PURE-STR-LUN15                           pslun69674
    PURE-STR-LUN16                           pslun69675
    PURE-STR-LUN17                           pslun69676
    PURE-STR-LUN18                           pslun69677
    PURE-STR-LUN19                           pslun69678
  4. To determine which hosts are creating the SCSI-2 Reservations, we will need a VMware bundle. The customer can send this to us via FTP.
  5. Once the bundle is uploaded, please prepare it for analysis as per KB: Retrieving Customer Logs from the FTP.
  6. Run the VM script that jhop created against the bundles to check the global configuration of VAAI ATS (see Wiki: VMware vSphere Overview and Troubleshooting):
    /home/jhop/python/Mr_VMware.py
  7. After we have confirmed that VAAI ATS is enabled globally, if we are still seeing SCSI-2 reservations we will want to check each volume individually. Please proceed to the next section:

Identifying datastore ATS Configuration on VMware ESXi

Step 1:

Since we only care about datastores (not Raw Device Mappings (RDMs)) on Pure Storage, we will find our applicable LUNs in the "esxcfg-scsidevs_-m.txt" file under the "commands" folder in a VMware Support Bundle. Below is an example of what a line from that file looks like:

[Screenshot: example output line from esxcfg-scsidevs_-m.txt]

There are several things that we want to identify from this output; the first is the "NAA Identifier". This is important because anything starting with "naa.624a937" is a Pure Storage LUN. Once we have a Pure Storage LUN, we then want to take note of the "VMFS UUID" (e.g. 53c80075-7ddcc5ba-7d03-0025b5000080). We focus on this instead of the "User-Friendly Name" because customers can choose any name they want for that option. If we use the VMFS UUID, we are guaranteed to be referring to a Pure Storage LUN, since that is a uniquely generated ID assigned to each individual LUN.
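If you just want the Pure Storage entries and their VMFS UUIDs from this file, the following is a minimal sketch; it assumes the standard support-bundle file name and that the VMFS UUID is the third column, as in the screenshot above:

    # List Pure Storage LUN lines, then print only the VMFS UUID column
    grep "naa.624a937" esxcfg-scsidevs_-m.txt | awk '{print $3}'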

Step 2:

Once we have this information, the next step is to search for the "vmkfstools" text file that contains the file system information for this device; it is also found in the "commands" folder you are already in. The file name will look like the following:

vmkfstools_-P--v-10-vmfsvolumes53c80075-7ddcc5ba-7d03-0025b5000080.txt

Notice that our "VMFS UUID" is contained in the file name above. We can now search this file for the "Mode" it is running in. If the datastore is configured properly, the line will look as follows:

Mode: public ATS-only

If the datastore is not configured properly it will look as follows:

Mode: public

If the datastore shows only a "public" mode, then we know that this datastore is misconfigured and we will be receiving an excessive number of SCSI-2 Reservations from the ESXi hosts. This means that locking tasks are not being offloaded to the FlashArray.
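To check a single datastore quickly, you can grep its vmkfstools file for the Mode line. This sketch uses the example UUID from Step 1 and a wildcard so it matches the bundle's file naming:

    # Check the locking mode of one datastore by its VMFS UUID
    grep "Mode:" vmkfstools_-P--v-10-vmfsvolumes*53c80075-7ddcc5ba-7d03-0025b5000080*.txt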

Obviously, if the customer has a lot of LUNs, this process can take a while, so it is best to script it. Below is a simple one-liner that will do this for you if you would like to use it instead of going through each LUN one by one:

grep "naa.624a937" esxcfg-scsidevs_-m.txt | awk '{print $3}' > Pure-LUNs.txt;while read f;do cat vmkfs*$f.txt |grep -e "Mode:" -e "naa.624a937";echo;done < Pure-LUNs.txt

NOTE: This command can be copied and pasted and used against any ESXi host running 5.0 or higher, as long as you are in the "commands" folder of the ESXi host you want to verify.
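For readability, here is the same one-liner broken out into a short commented form; it assumes the same working directory and file naming as above:

    # Collect the VMFS UUIDs of all Pure Storage LUNs (NAA prefix naa.624a937)
    grep "naa.624a937" esxcfg-scsidevs_-m.txt | awk '{print $3}' > Pure-LUNs.txt

    # For each UUID, print the device line and the locking mode from its vmkfstools file
    while read f; do
        grep -e "Mode:" -e "naa.624a937" vmkfs*"$f".txt
        echo
    done < Pure-LUNs.txt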

Resolution

Once we have the misconfigured LUNs identified the customer can use the VMware KB listed below to resolve the issue:

Link to VMware KB: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665

Follow the steps outlined in the section "Changing an ATS-only volume to public", but change the "0" in the command they provide to a "1" to turn ATS-only back on. It is important that the customer read all of the steps, notes, and caveats before proceeding.
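For reference, the command in that KB takes the general form sketched below; the device ID here is a placeholder, and the customer should confirm the exact syntax, prerequisites, and caveats in the VMware KB before running it:

    # Turn ATS-only back on for the VMFS volume's backing device (device ID is a placeholder)
    vmkfstools --configATSOnly 1 /vmfs/devices/disks/naa.624a937xxxxxxxxxxxxxxxxxxxxxxxx:1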

Alternatively, and with much less of a headache, the customer can simply create a new LUN on the FlashArray and mount it to the applicable ESXi host(s). Once the new VMFS datastore is created, they can verify that ATS is properly configured. Once they have confirmed the new datastore has ATS enabled, they can migrate the virtual machines from the misconfigured datastore to the newly configured one. After everything has been moved off the old datastore and they have confirmed all is working well, they can destroy the old LUN. This is much easier and is typically what should be recommended first.

If there are any questions please reach out to a fellow colleague or Support Escalations team member for assistance.

Troubleshooting: Could Not Generate DH Keypair

Symptoms

When attempting to install the Pure Storage vSphere plugin in a vSphere 5.1 environment, the following errors are reported in the vSphere virgo client logs:

[2016-02-11 16:06:03.213] ERROR [ERROR] http-bio-9443-exec-16 com.purestorage.FlashArrayHelper javax.net.ssl.SSLException: java.lang.RuntimeException: Could not generate DH keypair javax.net.ssl.SSLException: java.lang.RuntimeException: Could not generate DH keypair

and

Caused by: java.security.InvalidAlgorithmParameterException: Prime size must be multiple of 64, and can only range from 512 to 1024 (inclusive)

We have also seen this behavior in customer environments where the plugin was previously working but then stopped functioning. Environments where the plugin worked previously but is now failing with the errors above are affected by the same underlying issue.

Diagnosis

Due to Java security changes for "mod_ssl", Diffie-Hellman (DH) parameters now include primes with lengths of more than 1024 bits. Since Java 7 and earlier limit their support for DH prime sizes to a maximum of 1024 bits, the SSL negotiation between our FlashArray and vCenter fails. This issue is not caused by the FlashArray (or our vSphere plugin) but is a problem with vCenter 5.1 using Java Development Kit (JDK) 6.

This issue directly correlates with Oracle Bug ID JDK-7044060, which provides additional information.

Solution

  1. Go to the Oracle Archive site to download & install Java Development Kit 7 Update 17 on the vSphere web client server.
  2. Find and make a copy of the file "wrapper.conf" from the following location: C:\Program Files\VMware\Infrastructure\vSphereWebClient\server\bin\service\conf\wrapper.conf (right click on it, select copy to desktop)
  3. Edit the vSphere Web Client "wrapper.conf" with the following changes:
    • Modify "wrapper.java.command" with the path to the new JDK 1.7U17 path.
      • 32-bit version: wrapper.java.command=C:/Program Files (x86)/Java/jdk1.7.0_17/bin/java
      • 64-bit version: wrapper.java.command=C:/Program Files/Java/jdk1.7.0_17/bin/java
    • Add the following lines at the top of the wrapper.conf file (shown as the "ADDED" lines in the example below):
      • 32-bit JDK: set.default.JAVA_HOME=C:\Program Files (x86)\Java\jdk1.7.0_17
      • 64-bit JDK: set.default.JAVA_HOME=C:\Program Files\Java\jdk1.7.0_17
      • Either version: set.default._JAVA_OPTIONS=-Xmx1024M
    • Comment out (with a hash (#)) the following two lines in the wrapper.conf file:
      wrapper.java.initmemory=1024m
      wrapper.java.maxmemory=2048m
  4. Restart the vSphere Web Client service.

The path to the wrapper.conf file can be found in the following location on the vCenter Server: C:\Program Files\VMware\Infrastructure\vSphereWebClient\server\bin\service\conf\wrapper.conf

Please make only the changes that correspond to the version of the JDK (32-bit or 64-bit) that was downloaded.

For clarity, below is the top half of a wrapper.conf file showing where all the applicable changes have been made:

#********************************************************************
# Wrapper License Properties (Ignored by Community Edition)
#********************************************************************
# Include file problems can be debugged by removing the first '#'
#  from the following line:
##include.debug

#encoding=UTF-8
wrapper.license.type=DEV
wrapper.license.id=201106200012
wrapper.license.licensee=VMware Global, Inc.
wrapper.license.dev_application=vSphere Web Client
wrapper.license.features=pro, 64bit
wrapper.license.upgrade_term.begin_date=2009-10-27
wrapper.license.upgrade_term.end_date=2012-01-27
wrapper.license.key.1=feca-7df5-2263-9092
wrapper.license.key.2=a38a-acfa-38de-8031
wrapper.license.key.3=c824-a8fa-b95a-1b89
wrapper.license.key.4=8434-7a46-4450-d081

#######################################################################################################
## You must set the SERVER_HOME property either in your environment of here before running as a service
#######################################################################################################
#set.default.JAVA_HOME=<set JAVA_HOME>
set.default.SERVER_HOME=<set SERVER_HOME>
set.default.CONFIG_DIR=%SERVER_HOME%/config
set.default.JMX_PORT=9875
set.default.JAVA_HOME=C:\Program Files (x86)\Java\jdk1.7.0_17 <------------- ADDED Line
set.default._JAVA_OPTIONS=-Xmx1024M                           <------------- ADDED Line

#########
# General
#########
wrapper.console.title=vSphere Web Client
#wrapper.debug=TRUE


#############
# Application
#############
wrapper.java.command=C:/Program Files (x86)/Java/jdk1.7.0_17/bin/java <------ Modified Line
wrapper.working.dir=%SERVER_HOME%
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp


###########
# Classpath
###########
wrapper.java.classpath.1=%SERVER_HOME%/bin/service/lib/wrapper.jar
wrapper.java.classpath.2=%SERVER_HOME%/lib/*.jar



##############
# Library Path
##############
wrapper.java.library.path.1=%SERVER_HOME%/bin/service/lib

#############
#  JVM Memory
#############
#wrapper.java.initmemory=1024m <------------- Commented out line
#wrapper.java.maxmemory=1024m  <------------- Commented out line

Best Practices: VMware Horizon

Hypervisor Recommended Settings

ESXi Recommended Parameter Changes and VMware KB Articles

  • VMware ESXi 5.x and 6.x Best Practices

VDI Product Recommended Settings

VMware Horizon View General Best Practices

  • View Storage Accelerator (Recommended: Disable; Default: Enabled): This is disabled through the Connection Server. Because Pure arrays have lots of IOPS at very low latency, we don't need the extra layer of caching at the host.
  • Maximum Concurrent vCenter Operations (Recommended: >=50; Default: ~20): The default concurrent vCenter operations are defined in the View Connection Server advanced vCenter settings. The default values are quite conservative and can be increased to allow recompose and other operations to complete much more quickly.
  • Virtual Disk Format (Recommended: SE Sparse): Space-Efficient sparse virtual disks are recommended on Pure Storage due to space efficiency and prevention of VMDK bloat.

Useful Horizon View KB Articles and Whitepapers
  • VMware Horizon View Optimization Guide for Windows 7/8.1
  • App Volumes with near native performance on Pure Storage
  • VMware Horizon View Persona Management Deployment Guide
  • VMware View Planner 3.5 - VDI Performance Characterization Tool (using Pure Storage)
  • Manually deleting linked clones or stale virtual desktop entries from the View Composer database
  • VMware Flings (useful Apps and Tools created by VMware engineers)
  • Location of Horizon View Log Files
  • Collecting diagnostic information for Horizon View
  • Anti-Virus Best Practices for Horizon View White Paper
  • VDI Best Practices for Symantec Endpoint Protection
  • McAfee MOVE Setup Guide (for vShield EndPoint)
Reference Architectures
  • FlashStack Cisco Validated Design with 6000 Horizon View 7.12 Users
  • FlashStack Cisco Validated Design with 5000 Horizon View 6.2 Users
  • FlashStack Mini Reference Architecture with VMware Horizon 7
  • Pure Storage Design Guide for Virtualized Engineering Workstations with nVidia GRID
  • Pure Storage FlashStack Reference Architecture with Horizon View 6.2 on //m20
  • Pure Storage Reference Architecture with VMware Horizon View 5.0.1
  • Pure Storage FlashStack Reference Architecture with Horizon View 6.0

General VDI Sizing Guidelines

[Figure: VDI sizing overview on FlashArray]


Guide: Design Guide for Horizon View


Citrix XenDesktop / XenApp General Best Practices

Reference Architectures

  • Pure Storage FlashStack Reference Architecture with Citrix XenDesktop 7.7 on //m20
  • Pure Storage FlashStack Reference Architecture with XenDesktop 7.6.1 on vSphere 6.0
  • Pure Storage Reference Architecture with Citrix XenDesktop 5.6

General VDI Sizing Guidelines

[Figures: general VDI sizing guideline charts]


How-to Reclaim Space After Deleting Files on Host Systems

SCSI UNMAP

Deleting a file on a host file system does not automatically free up block space on storage systems. To free the space, you need to trigger SCSI_UNMAP to notify the storage array that the blocks can be released. This is required for many storage solutions, including Pure Storage FlashArrays. UNMAP allows an operating system to inform an SSD which blocks of data are no longer considered in use and so can be wiped. This process informs the FlashArray what space can be reclaimed and ensures that you are not consuming unnecessary space. For example, if a FlashArray consistently reports more data on a volume than host file-system utilities report for the file system, UNMAP is probably not being run regularly enough.
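As a quick check from an ESXi host, you can confirm that the Delete (UNMAP) primitive is supported on a given device with esxcli; the device ID below is a placeholder:

    # Show VAAI primitive support (including Delete/UNMAP status) for one device
    esxcli storage core device vaai status get -d naa.624a937xxxxxxxxxxxxxxxxxxxxxxxx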

Host Operating System               File System Support    T10 UNMAP
---------------------------------   --------------------   ---------
ESX 5.0 U1, 5.1, 5.5, 6.0, 6.5      VMFS-5, VMFS-3 (1)     Yes

(1) The VAAI primitive UNMAP only works where the partition offset is a multiple of 1 MB. Operation will fail on misaligned VMFS-3 datastores or misaligned VMFS-3 datastores that have been converted to VMFS-5.
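If alignment is in question, the partition start offset of a datastore's backing device can be checked from the ESXi host. This is a sketch, and the device ID is a placeholder; a start sector of 2048 on a 512-byte-sector device corresponds to a 1 MB offset:

    # Print the partition table, including each partition's start sector
    partedUtil getptbl /vmfs/devices/disks/naa.624a937xxxxxxxxxxxxxxxxxxxxxxxx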

VMware Guest OS Space Reclamation

Many file systems, such as UFS, JFS, JFS2, and NTFS on Windows Server 2008, cannot trigger the SCSI_UNMAP primitive at all. Additionally, some of the more recent ones that can perform SCSI_UNMAP natively (Windows 2008 R2, Windows 2012, EXT4, etc.) still have trouble doing so when operating within a guest VM, unless you are using Raw Device Mapping (RDM).

VMware VMFS - UNMAP Outside of the Guest

To reclaim unused storage blocks on a VMFS datastore, use the following VMware processes:

  • Space Reclamation in VMware ESXi 5.5 (see the example commands after this list)
    NOTE: As noted in the VMware Best Practices page, make sure the UNMAP block count is set to 1% of the VMFS free space.

  • Space Reclamation in VMware ESXi 5.0/5.1 (see the example commands after this list)
    NOTE: When running vmkfstools in 5.0/5.1, the process creates a balloon file that consumes space. We recommend that you reclaim 20% at a time using this command and do not process more than 2 TB of reclaimed space at a time.
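For reference, these are the commands those procedures are built around; treat them as a sketch and follow the linked VMware documentation for the full steps (the datastore name and block count below are placeholders):

    # ESXi 5.5: reclaim free space in units of VMFS blocks (size the count per the note above)
    esxcli storage vmfs unmap -l MyDatastore -n 200

    # ESXi 5.0/5.1: run from inside the datastore; reclaims 20% of free space per pass
    cd /vmfs/volumes/MyDatastore
    vmkfstools -y 20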

Reclaiming Space within the Guest

vSphere 5.0, 5.1, and 5.5

In order to reclaim space inside of a virtual disk, we recommend that you use a zeroing tool inside the guest. See the following KB articles for reclaiming space (illustrative commands follow the list):

  • Windows: Use sdelete to reclaim space.
  • Linux: Use shred to zero out the free space.
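As an illustration of the zeroing approach (the drive letter, mount point, and file name are placeholders, and the Linux example uses a plain zero-fill as a common alternative to shred; see the linked KB articles for the recommended procedures):

    # Windows guest: zero free space on C: with Sysinternals SDelete
    sdelete.exe -z C:

    # Linux guest: fill free space with zeros (dd stops when the file system is full), then delete the file
    dd if=/dev/zero of=/mnt/data/zerofile bs=1M
    sync
    rm /mnt/data/zerofile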

Since ESXi virtualizes the SCSI layer, even if the guest operating system attempted to send UNMAP to the virtual disk, it would not perform any function, nor would it reach the FlashArray. So any native OS attempt to send an UNMAP (such as discards in Linux) will not work.

vSphere 6.0

With the introduction of vSphere 6.0, the EnableBlockDelete option was changed so that it enables VMFS block delete when UNMAP is issued from a guest operating system. That is, if you enable this option, ESXi permits UNMAP commands issued by a guest operating system against a virtual disk to be translated down to the FlashArray so that the space can be reclaimed. Beyond that, if the virtual disk is thin, it will be reduced in size by the amount of space that was reclaimed.

Note: Only thin virtual disks support this guest-UNMAP functionality.
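The setting is a per-host advanced option; a minimal sketch of enabling and verifying it with esxcli is shown below (it can also be changed in the vSphere client UI):

    # Enable translation of in-guest UNMAP on the ESXi host (vSphere 6.0 and later)
    esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete

    # Verify the current value
    esxcli system settings advanced list --option /VMFS3/EnableBlockDelete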

For in-depth information, we strongly recommend reading Direct Guest OS UNMAP in vSphere 6.0.

vSphere 6.5

The release of vSphere 6.5 supports Linux-based In-Guest UNMAP.

Executing In-Guest UNMAP/TRIM with Linux

There are three options for reclaiming space in-guest on vSphere 6.5 using Linux:

  • Mount the file system with the discard option: mount -t ext4 -o discard /dev/sdc /mnt/UNMAP. This directs the system to automatically issue UNMAP when files are deleted from the file system.
  • Run the command sg_unmap. It allows you to run UNMAP on specific Logical Block Addresses (LBAs).
  • Run the command fstrim. It reclaims dead space across a directory or entire file system on demand. It does not require the discard option to be set, but is compatible with file systems that do have it enabled.

Trusted commentary on this subject recommends the discard mount option as the best choice: sg_unmap requires extensive manual work and familiarity with logical block address placement, and fstrim can run into alignment issues.
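For illustration, here are the discard and fstrim approaches as commands, using the device and mount point from the example above (sg_unmap is omitted because it requires explicit logical block addresses; see its man page):

    # Option 1: mount with the discard option so deletions issue UNMAP automatically
    mount -t ext4 -o discard /dev/sdc /mnt/UNMAP

    # Option 3: reclaim free space on demand for an already-mounted file system
    fstrim -v /mnt/UNMAP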

For in-depth information, we strongly recommend reading What's New in ESXi 6.5 Storage Part I: UNMAP.

References

http://www.codyhosterman.com/2015/04...n-vsphere-6-0/

https://www.codyhosterman.com/2016/11/whats-new-in-esxi-6-5-storage-part-i-unmap/
