How-to Reclaim Space After Deleting Files on Host Systems

SCSI UNMAP

Deleting a file on a host file system does not automatically free the corresponding block space on the storage system. To free it, the host must issue SCSI UNMAP to notify the storage array that the space can be reclaimed. This is required for many storage solutions, including Pure Storage FlashArrays. UNMAP allows an operating system to inform an SSD which blocks of data are no longer in use and can therefore be wiped. This process tells the FlashArray what space can be reclaimed and ensures that you are not consuming unnecessary space. For example, if a FlashArray consistently reports more data on a volume than host file-system utilities report for the file system, UNMAP is probably not being run regularly enough.
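As a quick sanity check, a Linux host can confirm that a device advertises UNMAP/TRIM support before attempting reclamation. This is a minimal sketch; /dev/sdc is a placeholder for your FlashArray device:

# Non-zero DISC-GRAN / DISC-MAX values indicate the device accepts UNMAP/TRIM
lsblk --discard /dev/sdc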

Host Operating System              File System Support      T10 UNMAP Support

ESXi 5.0 U1, 5.1, 5.5, 6.0, 6.5    VMFS-5, VMFS-3 (1)       Yes

(1) The VAAI primitive UNMAP only works where the partition offset is a multiple of 1 MB. The operation will fail on misaligned VMFS-3 datastores or on misaligned VMFS-3 datastores that have been converted to VMFS-5.

VMware Guest OS Space Reclamation

Many file systems, such as UFS, JFS, JFS2, and NTFS on Windows Server 2008, cannot trigger the SCSI UNMAP primitive. Additionally, some of the more recent file systems that can issue SCSI UNMAP natively (Windows Server 2008 R2, Windows Server 2012, EXT4, etc.) still have trouble doing so when operating within a guest VM, unless you are using Raw Device Mapping (RDM).

VMware VMFS - UNMAP Outside of the Guest

To reclaim unused storage blocks on a VMFS datastore, use the following VMware processes (example commands are sketched after this list):

  • Space Reclamation in VMware ESXi 5.5
    NOTE: As noted in the VMware Best Practices page, make sure the UNMAP block count is set to 1% of the VMFS free space.

  • Space Reclamation in VMware ESXi 5.0/5.1
    NOTE: When running vmkfstools in 5.0/5.1, the process creates a balloon file that consumes space. We recommend that you reclaim 20% at a time using this command and do not process more than 2 TB of reclaimed space at a time.
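A minimal sketch of the commands those processes use; the datastore name and reclaim values are placeholders and should be sized for your environment:

# ESXi 5.5 and later: reclaim unit is in VMFS blocks (roughly 1% of the datastore's free space)
esxcli storage vmfs unmap --volume-label=MyDatastore --reclaim-unit=200

# ESXi 5.0/5.1: run from within the datastore directory; reclaims 20% of free space per pass
cd /vmfs/volumes/MyDatastore
vmkfstools -y 20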

Reclaiming Space within the Guest

vSphere 5.0, 5.1, and 5.5

To reclaim space inside a virtual disk, we recommend using a zeroing tool inside the guest (a brief sketch follows this list). See the following KB articles for reclaiming space:

  • Windows: Use sdelete to reclaim space.
  • Linux: Use shred to zero out the free space.
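A minimal sketch of the zeroing step inside the guest; drive letters and file paths are placeholders:

Windows guest (Sysinternals sdelete, run from an elevated prompt):
sdelete.exe -z C:

Linux guest (a simple zero-fill file that is then removed is a common alternative to shred):
dd if=/dev/zero of=/mnt/data/zerofill bs=1M
rm /mnt/data/zerofill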

Since ESXi virtualizes the SCSI layer, even if a guest operating system attempted to send UNMAP down to the virtual disk, it would have no effect and would never reach the FlashArray. Any native OS attempt to send UNMAP (such as discards in Linux) will therefore not work.

vSphere 6.0

With the introduction of vSphere 6.0, the EnableBlockDelete option was changed so that it enables VMFS block delete when UNMAP is issued from a guest operating system. That is, if you enable this option, ESXi permits guest operating systems to issue UNMAP commands to a virtual disk and translates them down to the FlashArray so that the space can be reclaimed. Beyond that, if the virtual disk is thin, it will be reduced in size by the amount of space that was reclaimed.
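As a sketch, the option can be enabled per host from the ESXi command line, assuming the standard advanced-settings path for this option:

esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
esxcli system settings advanced list --option /VMFS3/EnableBlockDelete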

Note: Only thin virtual disks support this in-guest UNMAP functionality.

For in-depth information, we strongly recommend reading Direct Guest OS UNMAP in vSphere 6.0.

vSphere 6.5

The release of vSphere 6.5 supports Linux-based In-Guest UNMAP.

Executing In-Guest UNMAP/TRIM with Linux

There are three options for reclaiming space in-guest with Linux in vSphere 6.5:

  • Mount the file system with the discard option: mount -t ext4 -o discard /dev/sdc /mnt/UNMAP. This directs the system to automatically issue UNMAP when files are deleted from the file system.
  • Run the command sg_unmap. It allows you to run UNMAP on specific Logical Block Addresses (LBAs).
  • Run the command fstrim. It reclaims dead space across a directory or an entire file system on demand. It does not require the discard option to be set, but is compatible with file systems that have it enabled.

Trusted commentary on this subject recommends the discard mount option as the best of the three. The sg_unmap command requires extensive manual work and familiarity with logical block address placement, and fstrim can run into alignment issues.
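For reference, a minimal sketch of the discard and fstrim approaches; the device and mount point names are placeholders:

# With the discard option mounted, file deletions automatically generate UNMAP
mount -t ext4 -o discard /dev/sdc /mnt/UNMAP

# Or reclaim on demand across an already-mounted file system
fstrim -v /mnt/UNMAP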

For in-depth information, we strongly recommend reading What's New in ESXi 6.5 Storage Part I: UNMAP.

References

http://www.codyhosterman.com/2015/04...n-vsphere-6-0/

https://www.codyhosterman.com/2016/11/whats-new-in-esxi-6-5-storage-part-i-unmap/

Troubleshooting: Installation "No virtual IP configured" Error

Symptoms

While attempting to install the Pure Storage vSphere Plugin, specifically while using "vir1" as the primary interface, the following error message is received:

CLI error:

pureplugin: error: No virtual IP configured.

GUI error:

vir1 error

This issue is not seen when the 'vir0' interface is configured, enabled, and on the same network as the vCenter Server.

Diagnosis

There is currently a limitation with our vSphere Plugin that prevents installing / registering it on the vCenter Server using the 'vir1' interface. You must have the 'vir0' interface enabled and configured on the vCenter Server network for the registration process to complete successfully.

Solution

Enable and configure 'vir0' on the FlashArray with a valid IP address, netmask, and gateway to ensure successful installation of the Pure Storage vSphere plugin. You can refer to TI-3866 for additional information on this specific issue.
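A minimal sketch of configuring vir0 from the Purity CLI; the addresses are placeholders and the exact command syntax may vary by Purity release:

purenetwork setattr vir0 --address 10.0.1.50 --netmask 255.255.255.0 --gateway 10.0.1.1
purenetwork enable vir0
purenetwork list vir0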

This KB will be updated once we have a working vSphere Plugin that addresses this issue. Until then, please perform the steps above to work around this issue.

Please remember that 'vir0' uses the ctX.eth0 interfaces for connectivity. This means that you must have physical connectivity to those interfaces for 'vir0' to properly function.


New LUN Not Detected After Rescan

Symptoms

You present new LUN(s) to the ESXi Host / Cluster from the Pure Storage FlashArray. Upon initiating a rescan, you find that the expected LUN(s) are not visible on the ESXi Host as available devices under the "Storage Adapters" section. While investigating the issue, you are able to confirm that other LUNs are successfully mapped to the ESXi Host without issue.

Note: If checking from the command line of the ESXi Host, the LUN(s) will be missing from the output of the following command:

'esxcli storage core device list'

Diagnosis

Upon reviewing the ESXi Host 'vmkernel' logs, you can locate the following error message reported at the time of the rescan:

2016-04-23 22:21:43.483Z cpu13:34281087)WARNING: ScsiPath: 903: The number of paths allocated has reached the maximum: 1024. Path: vmhba2:C0:T14:L0 will not be allocated. This warning won't be repeated until some paths are removed.

Additionally, running the following command from the ESXi Host command line interface reports a value of '1024':

esxcfg-mpath -b | grep vmhba | wc -l

Solution

The error message above indicates that the maximum number of total paths allowed to an ESXi Host has been reached. You will not be able to add additional LUN(s) to the ESXi Host until this is addressed.

This can be resolved in the following ways (a quick way to check current path usage is sketched after this list):

  • Reduce the number of paths to each LUN so that the additional LUN(s) can be added as needed.
  • Reduce the total number of LUNs presented to the ESXi Host (if possible) so that the desired number of paths per LUN can be kept.
  • Map the LUNs to a new ESXi Host / Cluster.
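A minimal sketch for checking how many paths the host currently has and which devices contribute the most; it assumes an ESXi 5.x/6.x shell:

# Total number of paths on the host
esxcfg-mpath -b | grep -c vmhba

# Paths per device; devices with many paths are candidates for reduction
esxcli storage core path list | grep "Device:" | sort | uniq -c | sort -rn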

The customer needs to decide which approach is best for their environment; Pure Storage does not have control over this variable within the ESXi Host. If they need additional direction, kindly refer them to VMware Support as needed.

Review the following VMware links for additional information:

  • Maximum number of paths reached (1020654)
  • VMware vSphere 5.1 Maximum Configurations - Page 3
  • VMware vSphere 5.5 Maximum Configurations - Page 3
  • VMware vSphere 6.0 Maximum Configurations - Page 12

How To: Capture System Performance Data with esxtop

How to capture an esxtop output

There are a couple of very important things to keep in mind when capturing an esxtop:

  • The esxtop should be captured from the ESXi host where you suspect possible performance issues are originating. It is host specific and does not capture data for a cluster.
  • If there are particular VMs you suspect as the cause of performance anomalies, the esxtop needs to be captured on the ESXi host where the VM resides at the time the esxtop is run.
  • When specifying the location of the esxtop.csv file, please be sure you send it to a datastore that has at least 5GB of space available. If you try to send the file to the tmp folder, you risk filling it up and causing the ESXi Host management services to stall. This would require a host reboot and possible outage, so please be mindful of this.
  • SSH must be enabled on the ESXi host since esxtop can only be run / captured from the ESXi host CLI.
  • The output file should always be a .csv format for easy review of the data.

What syntax should I use when capturing an esxtop?

esxtop -b -a -d 2 -n 300 > /vmfs/volumes/pure_datastore/esxtop-[hostname]-[date].csv

Example:
[root@ac-esxi-6:~] mkdir /vmfs/volumes/sn1-x70-b05-vmfs-scale-ds/esxtop-captures/
[root@ac-esxi-6:~]
[root@ac-esxi-6:~] esxtop -b -a -d 2 -n 300 > /vmfs/volumes/sn1-x70-b05-vmfs-scale-ds/esxtop-captures/esxtop-ac-esxi-6-June-2019.csv
[root@ac-esxi-6:~]
[root@ac-esxi-6:~] ls -lh /vmfs/volumes/sn1-x70-b05-vmfs-scale-ds/esxtop-captures/
total 87040
-rw-r--r--    1 root     root       84.1M Jun 12 15:15 esxtop-ac-esxi-6-June-2019.csv

The above runs the esxtop capture in batch mode (-b), captures all available counters (-a), at 2-second intervals (-d), for 300 iterations (-n). This captures 10 minutes of performance data at 2-second intervals (the shortest interval VMware supports) and writes it to the .csv file for review.

There are few instances where a capture longer than 10 minutes is needed to understand and get a clear picture of the problem. Should a longer capture be needed, modify only the number of iterations (-n) and leave everything else the same.
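For example, a 30-minute capture at the same 2-second interval simply increases the iteration count (the datastore path is a placeholder):

esxtop -b -a -d 2 -n 900 > /vmfs/volumes/pure_datastore/esxtop-[hostname]-[date].csv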

You will know the esxtop capture has completed once the CLI prompt returns and you are able to type again. After the esxtop has completed running, navigate to the path where the file was saved, SCP it off the ESXi host, and upload it to FTPS for Pure Storage review.


Related Links and References

Here are some useful links for unlocking the power of the esxtop output that was captured.

  • How to upload files to Pure Storage FTPS Service
  • Exporting esxtop performance data as a CSV file
  • Using esxtop to identify storage performance issues
  • Using visualesxtop to troubleshoot performance issues
  • Using visualesxtop on a Mac


How-to: Determine Java Version in vSphere Web Client Log

Problem

The Pure Storage icon is not showing up in the vCenter Web Client. This can be caused by the JDK not being properly installed on the VMware vCenter server. We can verify whether the JDK is installed correctly, and which Java version the Pure Plugin is using, in the VMware vSphere Web Client main log file (vsphere_client_virgo.log).

Impact

An incorrect installation and configuration of the JDK will cause issues with the Pure Plugin.

Solution

The Java version can be found in the vsphere_client_virgo.log, and will only show up in the log when the vSphere Web Client is restarted.
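A quick way to pull the line from the log after restarting the Web Client service; the log paths below are typical defaults and may differ in your installation:

Windows-based vCenter Server:
findstr /i /c:"Java Version" "C:\ProgramData\VMware\vSphere Web Client\serviceability\logs\vsphere_client_virgo.log"

vCenter Server Appliance:
grep -i "Java Version" /var/log/vmware/vsphere-client/logs/vsphere_client_virgo.log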

In this example, JDK 1.7u17 is installed on Windows Server 2008 R2 for vCenter Server 5.1.

[2016-07-15 10:22:41.225] INFO  [INFO ] start-signalling-1            com.vmware.vise.util.debug.SystemUsageMonitor                     System info :
 OS - Windows Server 2008 R2
 Arch - amd64
 Java Version - 1.7.0_17 
[2016-07-15 10:22:41.256] INFO  [INFO ] Timer-2                       com.vmware.vise.util.debug.SystemUsageMonitor                     
 Heap     : init = 201292928(196575K) used = 309326640(302076K) committed = 672727040(656960K) max = 954466304(932096K)
 non-Heap : init = 136773632(133568K) used = 82368472(80437K) committed = 142344192(139008K) max = 318767104(311296K)
 No of loaded classes : 13796 

Troubleshooting: Certificate Errors in vSphere Web Client

Problem

When viewing the Performance under the Pure Storage tab in the vSphere Web Client, you get an error "Content was blocked because it was not signed by a valid security certificate. For more information, see Certificate Errors in Internet Explorer Help".

CertificateErrors.png

This issue can occur in any web browser, not just Internet Explorer.

Impact

On the vSphere Web Client, the user is unable to view the Performance graph in the Pure Storage tab.

Solution

Open a new browser window or tab and log in to the Pure FlashArray GUI. Once you can see the FlashArray login screen, the browser has accepted the SSL certificate from the FlashArray.

GUIlogin.png

Then go back to the vSphere Web Client; the Performance graph should now be displayed under the Pure Storage tab.

PerformanceGraph.png
