After installing the plug-in from the Pure GUI, you are unable to see it installed in the vSphere Web Client.
You will want to check the logs on the vSphere server that is hosting the web services.
The logs are in the following directory: C:\ProgramData\VMware\vSphere Web Client\serviceability\logs.
The ones of interest are: vsphere_client_virgo and com.purestorageui.Purestorageui-1.x.x.
In the virgo logs you may see the following error:
Error unzipping https://10.193.17.235/download/purestorage-vsphere-plugin.zip?version=1.1.10 javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: Server certificate chain is not trusted and thumbprint doesn't match
There is a known issue where, if the GUI is started before /cache/ssl/gui.keystore is available (for example, on a new array), the certificate fingerprint does not match.
For Purity 3.3.x, run the following command on both controllers:
restart gui
For Purity 3.4.x, run the following command on both controllers:
/etc/init.d/nginx restart
The JIRA for this fix is:
https://jira.purestorage.com/browse/PURE-22188
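After restarting, you can confirm the array is serving its certificate again before retrying the plug-in installation. A minimal sketch (the array IP below is the example address from the log message above):

```
# From any machine with the openssl CLI: confirm the array completes a TLS
# handshake and presents its certificate chain (example IP from the log above)
openssl s_client -connect 10.193.17.235:443 < /dev/null
```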
Occasionally the vSphere plug-in will show the array in a non-compatible state. The root cause is that the generated security token can become invalid after a change in the array's configuration, for example an SSD reset or a Purity upgrade.
Here is an example of Pure-b3 showing as non-compatible.
To correct this, select the array and click the edit button, then provide the credentials vSphere needs to log into the array and request a new security token:
Click save; the vSphere web client will again be able to administer the array, and it should now show as 'true'.
Steps
* Open vSphere client and connect to the ESX server you want to configure, either directly or through vCenter.
* Click on the ESX server you want to configure then click on the "Configuration" tab.
* Click on the Advanced Settings link and you should see the following screen.
(Note - this ESX server already has the Emulex HBA configured in passthrough mode, so it's showing here.)
* Click on the Edit link just above the window on the right.
* Locate the physical hardware you want to remove from virtualization and put a checkmark in every box you want to devirtualize. In this example, the Emulex HBA has all boxes checked, including the parent, which means the whole adapter is passed through.
After you click OK, the system will request a reboot. Make sure you have prepared the ESX box to be rebooted by pausing all VMs and alerting anyone else who uses it. A quick way to confirm the adapter's device addresses is shown below.
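As a cross-check from the ESXi shell, you can list the host's PCI devices to confirm the adapter addresses; a sketch assuming SSH access and the Emulex adapter from this example (esxcli hardware pci list is available on ESXi 5.0 and later):

```
# List PCI devices and locate the HBA entries; the addresses (e.g. 03:00.0
# and 03:00.1) are what get assigned to the VM later. Widen -B if the
# address line falls outside the context window.
esxcli hardware pci list | grep -i -B 12 emulex
```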
How to configure the HBA and assign it to a VM.
Since the HBA is now configured for passthrough, it is dedicated to a single VM. This is the difference between a passthrough and a virtualized device: in virtualized mode, the ESX server shares the device among VMs.
* Select a VM and right click on it to Edit settings.
* Click on Add to add a resource to the VM.
* Select PCI Device. You'll need to repeat the Add step for each port, because ESX only allows adding one PCI device at a time. In this demo, we want to add both HBA ports (03:00.0 and 03:00.1 in the screenshot above) so that MPIO works correctly with two paths.
* Click on Next until you reach Finish to complete the configuration of the VM. Be sure to repeat these steps to add the second HBA port.
The following screen should be the result after adding both HBA ports to the VM.
Click on OK to save the VM's configuration. In the previous screenshots, I moved the PCI devices from one VM to another. This is how the HBA ports can be reassigned to another VM.
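Once the VM is powered on, you can confirm both passthrough functions are visible inside the guest. A minimal sketch for a Linux guest, assuming the Emulex adapter from this example:

```
# Inside the RHEL guest: both HBA functions should show up as PCI devices
lspci | grep -i emulex
```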
On my RHEL64 VM, multipath now shows multiple paths for each LUN.
```
[root@rhel64 ~]# multipath -l
3624a9370bf28da2ee4cf586d00010004 dm-2 PURE,FlashArray
size=1.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 3:0:1:3 sdg  8:96   active undef running
  |- 3:0:0:3 sdd  8:48   active undef running
  |- 3:0:2:3 sdj  8:144  active undef running
  |- 3:0:3:3 sdm  8:192  active undef running
  |- 3:0:4:3 sdp  8:240  active undef running
  |- 3:0:5:3 sds  65:32  active undef running
  |- 3:0:6:3 sdv  65:80  active undef running
  |- 3:0:7:3 sdy  65:128 active undef running
  |- 4:0:0:3 sdab 65:176 active undef running
  |- 4:0:1:3 sdae 65:224 active undef running
  |- 4:0:2:3 sdah 66:16  active undef running
  |- 4:0:3:3 sdak 66:64  active undef running
  |- 4:0:4:3 sdan 66:112 active undef running
  |- 4:0:5:3 sdaq 66:160 active undef running
  |- 4:0:6:3 sdat 66:208 active undef running
  `- 4:0:7:3 sdaw 67:0   active undef running
3624a9370bf28da2ee4cf586d00010003 dm-1 PURE,FlashArray
size=1.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 3:0:0:2 sdc  8:32   active undef running
  |- 3:0:2:2 sdi  8:128  active undef running
  |- 3:0:3:2 sdl  8:176  active undef running
  |- 3:0:1:2 sdf  8:80   active undef running
  |- 3:0:4:2 sdo  8:224  active undef running
  |- 3:0:5:2 sdr  65:16  active undef running
  |- 3:0:6:2 sdu  65:64  active undef running
  |- 3:0:7:2 sdx  65:112 active undef running
  |- 4:0:0:2 sdaa 65:160 active undef running
  |- 4:0:1:2 sdad 65:208 active undef running
  |- 4:0:2:2 sdag 66:0   active undef running
  |- 4:0:3:2 sdaj 66:48  active undef running
  |- 4:0:4:2 sdam 66:96  active undef running
  |- 4:0:5:2 sdap 66:144 active undef running
  |- 4:0:6:2 sdas 66:192 active undef running
  `- 4:0:7:2 sdav 66:240 active undef running
3624a9370bf28da2ee4cf586d00010002 dm-0 PURE,FlashArray
size=1.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 3:0:1:1 sde  8:64   active undef running
  |- 3:0:0:1 sdb  8:16   active undef running
  |- 3:0:3:1 sdk  8:160  active undef running
  |- 3:0:2:1 sdh  8:112  active undef running
  |- 3:0:4:1 sdn  8:208  active undef running
  |- 3:0:5:1 sdq  65:0   active undef running
  |- 3:0:6:1 sdt  65:48  active undef running
  |- 3:0:7:1 sdw  65:96  active undef running
  |- 4:0:0:1 sdz  65:144 active undef running
  |- 4:0:1:1 sdac 65:192 active undef running
  |- 4:0:2:1 sdaf 65:240 active undef running
  |- 4:0:3:1 sdai 66:32  active undef running
  |- 4:0:4:1 sdal 66:80  active undef running
  |- 4:0:5:1 sdao 66:128 active undef running
  |- 4:0:6:1 sdar 66:176 active undef running
  `- 4:0:7:1 sdau 66:224 active undef running
```
The purevol list output shows serial numbers matching the WWIDs in the multipath -l output above (the volume serial is the portion of the WWID following the 624a9370 vendor prefix).
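To confirm the mapping from the array side, list the volumes and their serial numbers with the Purity CLI; the serial column should match the WWID suffixes shown above:

```
# On the FlashArray: list volumes; the Serial column is the suffix of the
# guest-visible WWID (3 + 624a9370 + serial)
purevol list
```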
In this lab, the Brocade 300 switch has an open default zone configured. This isn't best practice; however, it makes administration simpler on a small FC switch because the zoning config doesn't have to change every time.
This sheet lists all of the common Site Recovery Manager (SRM) management and recovery operations and the SRA operations they initiate. The log location for each SRA operation is one of the following:
On the SRM appliance (Linux), the SRM logs are located at:
/var/log/vmware/srm/
The SRA logs are located at:
/var/log/vmware/srm/SRAs/sha256{RandomCharacters}
On Windows-based SRM servers, the SRM logs are located at:
C:\ProgramData\VMware\VMware vCenter Site Recovery Manager\Logs\vmware-dr*
The SRA logs are located at:
C:\ProgramData\VMware\VMware vCenter Site Recovery Manager\Logs\SRAs\purestorage\
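On the SRM appliance, the SRA log directory name is a hash, so the quickest way to find it is to list the SRAs directory; a minimal sketch:

```
# Locate the Pure Storage SRA log directory on the SRM appliance
ls -d /var/log/vmware/srm/SRAs/sha256*
```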
SRA discovery:

| SRM Operation | SRA Operation | Log Location |
|---|---|---|
| SRA Discover | QueryInfo | Initiating SRM server |
| ↓ | QueryCapabilities | ↓ |
| ↓ | QueryConnectionParameters | ↓ |
| ↓ | QueryErrorDefinitions | ↓ |
| ↓ | DiscoverArrays [i] | ↓ |
| ↓ | DiscoverDevice [ii] | ↓ |
| ↓ | QueryReplicationSettings [iii] | ↓ |
Discover Arrays:

| SRM Operation | SRA Operation | Log Location |
|---|---|---|
| Discover Arrays | DiscoverArrays | Initiating SRM server |
Discover Devices:

| SRM Operation | SRA Operation | Log Location |
|---|---|---|
| Discover Devices | DiscoverDevices | Both |
Test recovery:

| SRM Recovery Plan Step | SRA Operation | Log Location |
|---|---|---|
| Synchronize Storage | QueryReplicationSettings [iv] | Protected |
| ↓ | SyncOnce [v] | Protected |
| ↓ | QuerySyncStatus [vi] | Protected |
| Create Writeable Storage Snapshot | TestFailoverStart | Recovery |
| ↓ | DiscoverDevices | Recovery |
Cleanup:

| SRM Recovery Plan Step | SRA Operation | Log Location |
|---|---|---|
| Discard test data and reset storage | TestFailoverStop | Recovery |
| ↓ | DiscoverDevices | Recovery |
Recovery:

| SRM Recovery Plan Step | SRA Operation | Log Location |
|---|---|---|
| Pre-synchronize Storage | QueryReplicationSettings | Protected |
| ↓ | SyncOnce | Protected |
| ↓ | QuerySyncStatus | Protected |
| Prepare Protected VMs for Migration | PrepareFailover | Protected |
| ↓ | DiscoverDevices | Protected |
| Synchronize Storage | SyncOnce | Protected |
| ↓ | QuerySyncStatus | Protected |
| Change Recovery Site Storage to Writeable | Failover | Recovery |
| ↓ | DiscoverDevices | Recovery |
Reprotect:

| SRM Recovery Plan Step | SRA Operation | Log Location |
|---|---|---|
| Configure Storage to Reverse Direction | ReverseReplication | Former Recovery Site |
| ↓ | DiscoverDevices | Both |
| Synchronize Storage | QueryReplicationSettings | Former Recovery Site |
| ↓ | SyncOnce | Former Recovery Site |
| ↓ | QuerySyncStatus | Former Recovery Site |
This section lists all of the relevant SRM-to-SRA operations. Each operation has a definition of what SRM expects to happen, and also a definition of what the Pure Storage SRA actually does to fulfill SRM's expectations.
[i] Only created if array managers already exist in SRM for the Pure Storage SRA.
[ii] Only created if one or more array pairs are enabled.
[iii] Only created if one or more array pairs are enabled.
[iv] Only created if “Replicate Recent Changes” is selected at the start of the test recovery.
[v] Only created if “Replicate Recent Changes” is selected at the start of the test recovery.
[vi] Only created if “Replicate Recent Changes” is selected at the start of the test recovery.
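When tracing a recovery plan, it can help to grep the SRM logs for the SRA operation names from the tables above. A sketch for the appliance log directory listed earlier (the vmware-dr* file name pattern is the one given above for Windows; the exact name on the appliance may vary):

```
# Search the SRM logs for a specific SRA operation, e.g. SyncOnce
grep -i "SyncOnce" /var/log/vmware/srm/vmware-dr*
```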
vSphere reports the following error while attempting to format a VMFS datastore using a Pure Storage iSCSI LUN:
"HostDatastoreSystem.CreateVmfsDatastore" for object "<...>" on vCenter Server "<...>" failed
The LUN will report as online and available under the "Storage Adapters" section in the vSphere Client.
This error can be caused by improper configuration in the network path, which prevents jumbo frames from passing intact from the ESXi host to the FlashArray.
How to confirm Jumbo Frames can pass through the network
Run the following command from the ESXi Host in question via SSH:
vmkping -d -s 8972 <target portal ipaddress>
If no response is received, or the following message is returned, then jumbo frames are not successfully traversing the network (the -d flag disallows fragmentation, and 8972 bytes of ICMP payload plus 28 bytes of IP and ICMP headers add up to a full 9000-byte jumbo frame):
sendto() failed (Message too long)
sendto() failed (Message too long)
sendto() failed (Message too long)
There is an L2 device between the ESXi host and FlashArray that is not allowing jumbo frames to properly pass. Please have the customer check virtual and physical switches on the subnet to ensure jumbo frames are configured from end-to-end.
Make sure all network devices allow jumbo frames to pass from the ESXi host to the Pure Storage FlashArray.
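On the ESXi side, the configured MTU values are a quick starting point; a sketch covering VMkernel interfaces and standard vSwitches only (distributed switches and physical switch ports must still be checked separately):

```
# Every hop in the iSCSI path must allow 9000-byte frames
esxcli network ip interface list | grep -i mtu       # VMkernel interfaces
esxcli network vswitch standard list | grep -i mtu   # standard vSwitches
```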
Enabling CHAP authentication causes ESXi hosts to disconnect, and they are unable to reconnect.
The array has CHAP authentication enabled, and the ESXi host is unable to reconnect after CHAP is configured on it.
Purity does not support Dynamic Discovery with CHAP.
Follow this blog post for a more detailed guide.
Configure the ESXi host to use static CHAP, confirm that dynamic CHAP is not set up, and make sure 'Inherit from parent' is not checked.
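To verify the host-side configuration, you can inspect the adapter's CHAP settings from the ESXi shell; a minimal sketch (the adapter name vmhba64 is an assumption; substitute your software iSCSI adapter):

```
# Show the adapter-level CHAP settings; dynamic (discovery) CHAP should not
# be configured, per the resolution above
esxcli iscsi adapter auth chap get --adapter=vmhba64
```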
There are two methods of configuring CHAP on the Pure array: