When attempting to install the Pure Storage vSphere plugin in a vSphere 5.1 environment, the following errors are reported in the vSphere Web Client (Virgo) logs:

    [2016-02-11 16:06:03.213] ERROR [ERROR] http-bio-9443-exec-16 com.purestorage.FlashArrayHelper javax.net.ssl.SSLException: java.lang.RuntimeException: Could not generate DH keypair javax.net.ssl.SSLException: java.lang.RuntimeException: Could not generate DH keypair

and

    Caused by: java.security.InvalidAlgorithmParameterException: Prime size must be multiple of 64, and can only range from 512 to 1024 (inclusive)
We have also seen this behavior in customer environments where the plugin was previously working but then stopped functioning. Environments where the plugin worked previously but now fails with the same errors above are affected by the same underlying issue.
Due to Java security changes for mod_ssl, Diffie-Hellman (DH) parameters now include primes longer than 1024 bits. Since Java 7 and earlier limit their support for DH prime sizes to a maximum of 1024 bits, the SSL negotiation between the FlashArray and vCenter fails. This issue is not caused by the FlashArray (or our vSphere Plugin); it is a problem with vCenter 5.1 using Java Development Kit (JDK) 6.
This issue directly correlates with Oracle Bug ID JDK-7044060, which provides additional background.
To work around the problem, install JDK 7 on the vCenter Server and point the vSphere Web Client at it by editing its wrapper.conf. Depending on whether you installed the 32-bit or 64-bit JDK, set:

32-bit JDK:

    wrapper.java.command=C:/Program Files (x86)/Java/jdk1.7.0_17/bin/java
    set.default.JAVA_HOME=C:\Program Files (x86)\Java\jdk1.7.0_17

64-bit JDK:

    wrapper.java.command=C:/Program Files/Java/jdk1.7.0_17/bin/java
    set.default.JAVA_HOME=C:\Program Files\Java\jdk1.7.0_17

Then add the JVM memory override and comment out the existing wrapper memory settings:

    set.default._JAVA_OPTIONS=-Xmx1024M
    #wrapper.java.initmemory=1024m
    #wrapper.java.maxmemory=2048m
Restart the vSphere Web Client service.
The path to the wrapper.conf file can be found in the following location on the vCenter Server: C:\Program Files\VMware\Infrastructure\vSphereWebClient\server\bin\service\conf\wrapper.conf
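The edit-and-restart sequence on the vCenter Server looks like the following (a sketch, assuming the default install path; the Windows service name for the vSphere Web Client can vary by build, so confirm it first with `sc query`):

```
:: Stop the vSphere Web Client, edit wrapper.conf, then start the service again
net stop vspherewebclientsvc
notepad "C:\Program Files\VMware\Infrastructure\vSphereWebClient\server\bin\service\conf\wrapper.conf"
net start vspherewebclientsvc
```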
Make only the changes that correspond to the JDK version (32-bit or 64-bit) you downloaded.
For clarity, below is the top half of a wrapper.conf file showing where all the applicable changes have been made:
    #********************************************************************
    # Wrapper License Properties (Ignored by Community Edition)
    #********************************************************************
    # Include file problems can be debugged by removing the first '#'
    # from the following line:
    ##include.debug
    #encoding=UTF-8
    wrapper.license.type=DEV
    wrapper.license.id=201106200012
    wrapper.license.licensee=VMware Global, Inc.
    wrapper.license.dev_application=vSphere Web Client
    wrapper.license.features=pro, 64bit
    wrapper.license.upgrade_term.begin_date=2009-10-27
    wrapper.license.upgrade_term.end_date=2012-01-27
    wrapper.license.key.1=feca-7df5-2263-9092
    wrapper.license.key.2=a38a-acfa-38de-8031
    wrapper.license.key.3=c824-a8fa-b95a-1b89
    wrapper.license.key.4=8434-7a46-4450-d081
    #######################################################################################################
    ## You must set the SERVER_HOME property either in your environment of here before running as a service
    #######################################################################################################
    #set.default.JAVA_HOME=<set JAVA_HOME>
    set.default.SERVER_HOME=<set SERVER_HOME>
    set.default.CONFIG_DIR=%SERVER_HOME%/config
    set.default.JMX_PORT=9875
    set.default.JAVA_HOME=C:\Program Files (x86)\Java\jdk1.7.0_17           <------------- ADDED line
    set.default._JAVA_OPTIONS=-Xmx1024M                                     <------------- ADDED line
    #########
    # General
    #########
    wrapper.console.title=vSphere Web Client
    #wrapper.debug=TRUE
    #############
    # Application
    #############
    wrapper.java.command=C:/Program Files (x86)/Java/jdk1.7.0_17/bin/java   <------ MODIFIED line
    wrapper.working.dir=%SERVER_HOME%
    wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp
    ###########
    # Classpath
    ###########
    wrapper.java.classpath.1=%SERVER_HOME%/bin/service/lib/wrapper.jar
    wrapper.java.classpath.2=%SERVER_HOME%/lib/*.jar
    ##############
    # Library Path
    ##############
    wrapper.java.library.path.1=%SERVER_HOME%/bin/service/lib
    #############
    # JVM Memory
    #############
    #wrapper.java.initmemory=1024m   <------------- COMMENTED-OUT line
    #wrapper.java.maxmemory=1024m    <------------- COMMENTED-OUT line
ESXi Recommended Parameter Changes and VMware KB Articles
VMware Horizon View General Best Practices
Horizon View Parameter | Recommended | Default | Description |
---|---|---|---|
View Storage Accelerator | Disable | Enabled | This is disabled through the Connection Server. Because Pure Storage arrays deliver very high IOPS at very low latency, the extra layer of caching at the host is unnecessary. |
Maximum Concurrent vCenter Operations | >=50 | ~20 | The default concurrent vCenter operations are defined in the View Connection Server advanced vCenter settings. The default values are quite conservative and can be increased to allow recompose and other operations to complete much more quickly. |
Virtual Disk Format | SE Sparse | | Space-efficient (SE) sparse virtual disks are recommended on Pure Storage for their space efficiency and prevention of VMDK bloat. |
Useful Horizon View KB Articles and Whitepapers
Reference Architectures
General VDI Sizing Guidelines
Deleting a file on a host file system does not automatically free up block space on the underlying storage. To do that, the host must trigger SCSI UNMAP to notify the storage array that the space can be freed. This is required for many storage solutions, including Pure Storage FlashArrays. UNMAP allows an operating system to inform the array which blocks of data are no longer in use and can therefore be reclaimed, ensuring that you are not consuming unnecessary space. For example, if a FlashArray consistently reports more data on a volume than host file-system utilities report for the file system, UNMAP is probably not being run regularly enough.
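Before relying on UNMAP, you can confirm that ESXi sees a FlashArray volume as UNMAP-capable by querying its VAAI status (a quick check; the naa identifier below is a placeholder for your volume's device ID):

```
# "Delete Status: supported" in the output means the device advertises T10 UNMAP
esxcli storage core device vaai status get -d naa.624a9370xxxxxxxxxxxxxxxx
```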
Host Operating System | File System Support | T10 UNMAP |
---|---|---|
ESXi 5.0 U1, 5.1, 5.5, 6.0, 6.5 | VMFS-5, VMFS-3 (1) | Yes |
(1) The VAAI UNMAP primitive only works when the partition offset is a multiple of 1 MB. The operation fails on misaligned VMFS-3 datastores and on misaligned VMFS-3 datastores that have been converted to VMFS-5.
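To check a datastore's partition alignment, you can inspect the partition table of the backing device (a sketch; the device name is a placeholder). With 512-byte sectors, a start sector of 2048 corresponds to the required 1 MB offset:

```
# The second field of each partition line is the start sector
partedUtil getptbl /vmfs/devices/disks/naa.624a9370xxxxxxxxxxxxxxxx
```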
Many file systems, such as UFS, JFS, JFS2, and NTFS on Windows Server 2008, cannot trigger the SCSI UNMAP primitive. Additionally, some newer ones that can perform SCSI UNMAP natively (NTFS on Windows Server 2008 R2 and Windows Server 2012, EXT4, etc.) still have trouble doing so when operating within a guest VM, unless you are using Raw Device Mapping (RDM).
To reclaim unused storage blocks on a VMFS datastore, use the following VMware processes:
Space Reclamation in VMware ESXi 5.5

NOTE: As noted in the VMware Best Practices page, make sure the UNMAP block count is set to 1% of the VMFS free space.
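On ESXi 5.5 the reclamation is run with esxcli (the datastore name is a placeholder). For example, on a datastore with 200 GB free, 1% of free space expressed in 1 MB VMFS blocks is 2048 blocks:

```
# -l is the datastore label, -n is the number of 1 MB blocks reclaimed per pass
esxcli storage vmfs unmap -l MyDatastore -n 2048
```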
Space Reclamation in VMware ESXi 5.0/5.1

NOTE: When running vmkfstools in 5.0/5.1, the process creates a balloon file that consumes space. We recommend that you reclaim 20% at a time with this command and do not process more than 2 TB of reclaimed space at a time.
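A sketch of the 5.0/5.1 procedure (the datastore name is a placeholder); the numeric argument is the percentage of free space the temporary balloon file is allowed to consume:

```
# Run from the root of the datastore to be reclaimed
cd /vmfs/volumes/MyDatastore
vmkfstools -y 20
```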
To reclaim space inside a virtual disk, we recommend using a zeroing tool inside the guest. See the following KB articles for reclaiming space:

- Windows: use sdelete to reclaim space (its -z switch zeroes free space).
- Linux: use shred to zero out the free space (see the sketch below).
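A minimal sketch of the Linux approach, assuming a file system mounted at /mnt/data (the path and size are placeholders; leave free-space headroom so the guest does not fill up):

```
# Allocate a file spanning most of the free space, overwrite it with a
# single pass of zeros (-n 0 random passes, -z final zero pass), then delete it
fallocate -l 10G /mnt/data/zerofill
shred -n 0 -z /mnt/data/zerofill
rm /mnt/data/zerofill
```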
Since ESXi virtualizes the SCSI layer, even if an operating system attempted to send UNMAP down to the virtual disk, it performed no function and never reached the FlashArray. Any native in-guest attempt to send UNMAP (such as discards in Linux) therefore did not work.
With the introduction of vSphere 6.0, the EnableBlockDelete option was changed so that it enables VMFS block delete when UNMAP is issued from a guest operating system. That is, with this option enabled, ESXi permits UNMAP commands issued by a guest operating system to a virtual disk to be translated down to the FlashArray so that the space can be reclaimed. Beyond that, if the virtual disk is thin, it is reduced in size by the amount of space that was reclaimed.

Note: Only thin virtual disks support this guest-UNMAP functionality.
For in-depth information, we strongly recommend reading Direct Guest OS UNMAP in vSphere 6.0.
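The option can be enabled from the ESXi shell (the advanced option path below is the standard one on ESXi 6.0):

```
# Enable in-guest UNMAP pass-through on VMFS (0 = disabled, 1 = enabled)
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
# Verify the current value
esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
```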
The release of vSphere 6.5 adds support for Linux-based in-guest UNMAP. There are three options for reclaiming space as a guest in vSphere 6.5 using Linux (an fstrim example follows the list):
- Mount the file system with the discard option, for example: mount -t ext4 -o discard /dev/sdc /mnt/UNMAP. This directs the system to automatically issue UNMAP when files are deleted from the file system.
- sg_unmap. Allows you to run UNMAP against specific Logical Block Addresses (LBAs).
- fstrim. Reclaims dead space across a directory or an entire file system on demand. It does not require the discard mount option, but is compatible with file systems that have it enabled.

Trusted commentary on this subject recommends the discard mount option as the best choice: sg_unmap requires extensive manual work and familiarity with logical block address placement, and fstrim can run into alignment issues.
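For example, a one-off reclaim with fstrim against the mount point used above:

```
# Trim all unused blocks under the mount point; -v reports how much was trimmed
sudo fstrim -v /mnt/UNMAP
```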
For in-depth information, we strongly recommend reading What's New in ESXi 6.5 Storage Part I: UNMAP.
http://www.codyhosterman.com/2015/04...n-vsphere-6-0/
https://www.codyhosterman.com/2016/11/whats-new-in-esxi-6-5-storage-part-i-unmap/
While attempting to install the Pure Storage vSphere Plugin, specifically while using 'vir1' as the primary interface, the following error messages are received:

CLI error:

    pureplugin: error: No virtual IP configured.

GUI error: a corresponding message is displayed in the GUI.
This issue is not seen when the 'vir0' interface is configured, enabled, and on the same network as the vCenter Server.
There is currently a limitation with our vSphere Plugin that prevents installing / registering it on the vCenter Server using the 'vir1' interface. The 'vir0' interface must be enabled and configured on the vCenter Server network for the registration process to complete successfully.
Enable and configure 'vir0' on the FlashArray with a valid IP address, netmask, and gateway to ensure successful installation of the Pure Storage vSphere plugin. Refer to TI-3866 for additional information on this specific issue.
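A hedged Purity CLI sketch (the addresses are placeholders and the exact flags can vary between Purity releases, so check purenetwork --help on your array):

```
# Assign an address to vir0, bring it up, and verify (values are examples only)
purenetwork setattr vir0 --address 192.168.1.100 --netmask 255.255.255.0 --gateway 192.168.1.1
purenetwork enable vir0
purenetwork list vir0
```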
This KB will be updated once we have a working vSphere Plugin that addresses this issue. Until then, please perform the steps above to work around this issue.
Please remember that 'vir0' uses the ctX.eth0 interfaces for connectivity. This means that you must have physical connectivity to those interfaces for 'vir0' to function properly.