Storage and Backups: Pure Storage

Thumbprint Mismatch - Plugin Installed on FlashArray But Not in vSphere Web Client

Problem

After installing the plug-in in the Pure GUI, you are unable to see that it is installed in the vSphere web client.

Diagnosis

You will want to check the logs on the vSphere server that is hosting the web services.

The logs are in the following directory: C:\ProgramData\VMware\vSphere Web Client\serviceability\logs.
The ones of interest are: vsphere_client_virgo and com.purestorageui.Purestorageui-1.x.x.

In the virgo logs you may see the following error:

Error unzipping https://10.193.17.235/download/purestorage-vsphere-plugin.zip?version=1.1.10 javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: Server certificate chain is not trusted and thumbprint doesn't match
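
To confirm this is the failure you are hitting, you can search the virgo log from a command prompt on the vSphere Web Client server (directory as above; the exact log file name may vary slightly by version):

findstr /i "SSLHandshakeException thumbprint" "C:\ProgramData\VMware\vSphere Web Client\serviceability\logs\vsphere_client_virgo.log"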

Solution

There is an issue where the GUI starts before /cache/ssl/gui.keystore is available (as on a new array), which results in a mismatched thumbprint.
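
Before restarting, you can check whether the keystore now exists on each controller (path from above; requires shell access to the controller):

ls -l /cache/ssl/gui.keystore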

3.3.x

For 3.3.x you will need to run the following command on both controllers:

restart gui

3.4.x

For 3.4.x you will need to run the following command on both controllers:

/etc/init.d/nginx restart
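
After restarting on both controllers, you can verify the certificate the array now presents. A minimal check with openssl from any host that can reach the array (replace <array> with the management IP, and compare the fingerprint with the thumbprint vSphere reports):

echo | openssl s_client -connect <array>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1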

The JIRA for this fix is:

https://jira.purestorage.com/browse/PURE-22188

Adding an Array Fails in vCenter When Underscore is Used in Username

vSphere Plug-in Shows Array as Non-compatible

Problem

Occasionally the vSphere plug-in will show the array as being in a non-compatible state.  The root cause is that the generated security token can become invalid after a change in the configuration of the array, such as an SSD reset or a Purity upgrade.

Here is an example of Pure-b3 showing as non-compatible.

[Screenshot: v_plug01.png]

Solution

To correct this, select the array and click the edit button.  Then provide the necessary credentials for vSphere to log into the array and request a new security token:

[Screenshot: v_plug02.png]

Click save and the vSphere web client will now be able to administer the array.  It should now show as 'true'.

[Screenshot: v_plug03.png]

How-to: Configure an ESX Resource to be Passthrough

HOW TO CONFIGURE THE HBA IN PASSTHROUGH MODE

Steps

* Open vSphere client and connect to the ESX server you want to configure, either directly or through vCenter.

* Click on the ESX server you want to configure then click on the "Configuration" tab.

* Click on the Advanced Settings link and you should see the following screen:

[Screenshot: esx_pass_01.png]

(Note: this ESX server already has the Emulex HBA configured in passthrough mode, so it's showing here.)

* Click on the Edit link just above the window on the right.

* Locate the physical hardware you want to remove virtualization on. Put a checkmark in all the boxes that you want to devirtualize. In this example, the Emulex HBA has checks in all boxes, including the parent, which means the whole adapter is passed through.

[Screenshot: esx_pass_02.png]

After this is done, the system will request a reboot after you click OK. Make sure you have prepared the ESX box to be rebooted by pausing all VMs and alerting others who use it.
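
If you have SSH access to the host, you can cross-check the PCI addresses of the HBA ports from the ESXi shell before and after the reboot (lspci is available on ESXi; the Emulex filter matches this article's example, where the ports are 03:00.0 and 03:00.1):

lspci | grep -i emulex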

HOW TO CONFIGURE THE HBA AND ASSIGN IT TO A VM

Since the HBA is now configured as passthrough, it is dedicated to a single VM. This is the difference between a passthrough and a virtualized device: in virtualization mode, the ESX server shares the device between VMs.

* Select a VM and right click on it to Edit settings.

* Click on Add to add a resource to the VM.

* Select PCI Device. In this demo we want to add both HBA ports (03:00.0 and 03:00.1 from the screenshot above) so that MPIO works correctly with two paths. Repeat the Add step for the second port; ESX only allows us to add one port at a time.

[Screenshot: esx_pass_03.png]

* Click on Next until you come to Finish to complete the configuration of the VM. Be sure to repeat the steps to add the second HBA port.

The following screen should be the result after adding both HBA ports to the VM:

[Screenshot: esx_pass_04.png]

Click on OK to save the VM's configuration. In the previous screenshots, I moved the PCI devices from one VM to another. This is how the HBA ports can be reassigned to another VM.

TESTING MPIO IN LINUX TO SEE THE PATHS TO PURE VOLS

On my RHEL64 VM, I can now see multipathd showing multiple paths for each LUN.

[root@rhel64 ~]# multipath -l
3624a9370bf28da2ee4cf586d00010004 dm-2 PURE,FlashArray
size=1.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
 |- 3:0:1:3 sdg 8:96 active undef running
 |- 3:0:0:3 sdd 8:48 active undef running
 |- 3:0:2:3 sdj 8:144 active undef running
 |- 3:0:3:3 sdm 8:192 active undef running
 |- 3:0:4:3 sdp 8:240 active undef running
 |- 3:0:5:3 sds 65:32 active undef running
 |- 3:0:6:3 sdv 65:80 active undef running
 |- 3:0:7:3 sdy 65:128 active undef running
 |- 4:0:0:3 sdab 65:176 active undef running
 |- 4:0:1:3 sdae 65:224 active undef running
 |- 4:0:2:3 sdah 66:16 active undef running
 |- 4:0:3:3 sdak 66:64 active undef running
 |- 4:0:4:3 sdan 66:112 active undef running
 |- 4:0:5:3 sdaq 66:160 active undef running
 |- 4:0:6:3 sdat 66:208 active undef running
 `- 4:0:7:3 sdaw 67:0 active undef running
3624a9370bf28da2ee4cf586d00010003 dm-1 PURE,FlashArray
size=1.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
 |- 3:0:0:2 sdc 8:32 active undef running
 |- 3:0:2:2 sdi 8:128 active undef running
 |- 3:0:3:2 sdl 8:176 active undef running
 |- 3:0:1:2 sdf 8:80 active undef running
 |- 3:0:4:2 sdo 8:224 active undef running
 |- 3:0:5:2 sdr 65:16 active undef running
 |- 3:0:6:2 sdu 65:64 active undef running
 |- 3:0:7:2 sdx 65:112 active undef running
 |- 4:0:0:2 sdaa 65:160 active undef running
 |- 4:0:1:2 sdad 65:208 active undef running
 |- 4:0:2:2 sdag 66:0 active undef running
 |- 4:0:3:2 sdaj 66:48 active undef running
 |- 4:0:4:2 sdam 66:96 active undef running
 |- 4:0:5:2 sdap 66:144 active undef running
 |- 4:0:6:2 sdas 66:192 active undef running
 `- 4:0:7:2 sdav 66:240 active undef running
3624a9370bf28da2ee4cf586d00010002 dm-0 PURE,FlashArray
size=1.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
 |- 3:0:1:1 sde 8:64 active undef running
 |- 3:0:0:1 sdb 8:16 active undef running
 |- 3:0:3:1 sdk 8:160 active undef running
 |- 3:0:2:1 sdh 8:112 active undef running
 |- 3:0:4:1 sdn 8:208 active undef running
 |- 3:0:5:1 sdq 65:0 active undef running
 |- 3:0:6:1 sdt 65:48 active undef running
 |- 3:0:7:1 sdw 65:96 active undef running
 |- 4:0:0:1 sdz 65:144 active undef running
 |- 4:0:1:1 sdac 65:192 active undef running
 |- 4:0:2:1 sdaf 65:240 active undef running
 |- 4:0:3:1 sdai 66:32 active undef running
 |- 4:0:4:1 sdal 66:80 active undef running
 |- 4:0:5:1 sdao 66:128 active undef running
 |- 4:0:6:1 sdar 66:176 active undef running
 `- 4:0:7:1 sdau 66:224 active undef running

HOW DO I KNOW THESE ARE THE CORRECT LUNS ASSIGNED TO MY HBA ADAPTER?

The purevol list output on the array has serial numbers matching the WWIDs in the multipath -l output above.
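
This works because each multipath WWID is the character 3 followed by Pure's NAA prefix 624a9370 and then the 24-character volume serial number. A small sketch to pull the serials out of the host-side output for comparison against purevol list (assumes the multipath -l format shown above):

multipath -l | awk '/^3624a9370/ {print toupper(substr($1, 10))}'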

WHY DO I SEE SO MANY PATHS? I THOUGHT THERE'D BE ONLY A FEW.

In this lab, the Brocade 300 switch has an open default zone configured. This isn't best practice; however, it makes administration simpler on a small FC switch by avoiding zoning config changes every time.
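
On a Brocade switch you can confirm whether the default zone is open from the switch CLI; defzone --show reports the default zone access mode (expect "All Access" for an open default zone; exact output wording varies by FOS release):

defzone --show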

Troubleshooting Installation of ESXi with iSCSI CHAP on UCS

Determining Whether an ESXi LUN Supports SCSI UNMAP

SRM User Guide: Operation Log Locations

Pure Storage Storage Replication Adapter Log File Cheat Sheet

This sheet lists all of the common Site Recovery Manager management and recovery operations and the SRA operations they initiate. The location of the log for each SRA operation is one of the following:

  • Protected SRM server, with respect to the Recovery Plan
  • Recovery SRM server, with respect to the Recovery Plan
  • Initiating SRM server, in other words the server where the operation was started (if you create an array manager on the site A SRM server, the log will be on site A)
  • Both: the operation creates a log on both SRM servers
  • Former Recovery Site (in the case of reprotect, where the recovery/protected servers have switched for that recovery plan, this is the original, now protected, SRM server)

Photon SRM Appliance Logs

The SRM logs are located at:

/var/log/vmware/srm/

The SRA logs are located at:

/var/log/vmware/srm/SRAs/sha256{RandomCharacters}

Windows SRM Appliance Logs

SRM logs are located at:

C:\ProgramData\VMware\VMware vCenter Site Recovery Manager\Logs\vmware-dr*

SRA logs are located at:

C:\ProgramData\VMware\VMware vCenter Site Recovery Manager\Logs\SRAs\purestorage\
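
If you are not sure which hashed directory belongs to the Pure SRA on the Photon appliance, one quick approach (paths as above) is to list the SRA log files written in the last hour, right after reproducing the operation:

find /var/log/vmware/srm/SRAs -type f -mmin -60 -ls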

SRM Operations

SRA Discovery

  SRM Operation   SRA Operation                   Log Location
  SRA Discover    QueryInfo                       Initiating SRM server
                  QueryCapabilities
                  QueryConnectionParameters
                  QueryErrorDefinitions
                  DiscoverArrays [i]
                  DiscoverDevice [ii]
                  QueryReplicationSettings [iii]

Create Array Manager

  SRM Operation    SRA Operation   Log Location
  Discover Arrays  DiscoverArrays  Initiating SRM server

Discover New Arrays

  SRM Operation    SRA Operation   Log Location
  Discover Arrays  DiscoverArrays  Initiating SRM server

Enable Array Pair

  SRM Operation     SRA Operation    Log Location
  Discover Devices  DiscoverDevices  Both

Discover New Devices

  SRM Operation     SRA Operation    Log Location
  Discover Devices  DiscoverDevices  Both

Test Recovery Start

  SRM Recovery Plan Step             SRA Operation                  Log Location
  Synchronize Storage                QueryReplicationSettings [iv]  Protected
                                     SyncOnce [v]                   Protected
                                     QuerySyncStatus [vi]           Protected
  Create Writeable Storage Snapshot  TestFailoverStart              Recovery
                                     DiscoverDevices                Recovery

Test Recovery Cleanup

  SRM Recovery Plan Step               SRA Operation     Log Location
  Discard test data and reset storage  TestFailoverStop  Recovery
                                       DiscoverDevices   Recovery

Recovery (Planned Migration; in DR some operations may not occur)

  SRM Recovery Plan Step                      SRA Operation             Log Location
  Pre-synchronize Storage                     QueryReplicationSettings  Protected
                                              SyncOnce                  Protected
                                              QuerySyncStatus           Protected
  Prepare Protected VMs for Migration         PrepareFailover           Protected
                                              DiscoverDevices           Protected
  Synchronize Storage                         SyncOnce                  Protected
                                              QuerySyncStatus           Protected
  Change Recovery Site Storage to Writeable   Failover                  Recovery
                                              DiscoverDevices           Recovery

Reprotect

  SRM Recovery Plan Step                  SRA Operation             Log Location
  Configure Storage to Reverse Direction  ReverseReplication        Former Recovery Site
                                          DiscoverDevices           Both
  Synchronize Storage                     QueryReplicationSettings  Former Recovery Site
                                          SyncOnce                  Former Recovery Site
                                          QuerySyncStatus           Former Recovery Site

Glossary of SRA Operations

This section lists all of the relevant SRM to SRA operations. Each operation has a definition according to what SRM expects to happen, and then a definition of what the Pure Storage SRA actually does to fulfill SRM's expectations. (A way to observe these operations on the array is sketched after this list.)

  • queryInfo
    • SRM: Queries the SRA for basic properties such as name and version
    • SRA: Returns SRA name, version number, company and website
  • queryCapabilities
    • SRM: Queries the SRA for supported models of storage arrays and supported SRM commands
    • SRA: Returns FA 400 series, Purity 4.0, supported protocols (FC and iSCSI) and supported SRM commands: failover, discoverArrays, discoverDevices, prepareFailover, prepareRestoreReplication, queryCapabilities, queryConnectionParameters, queryErrorDefinitions, queryReplicationSettings, querySyncStatus, restoreReplication, reverseReplication, syncOnce, testFailoverStart, testFailoverStop, queryInfo.
  • queryErrorDefinitions
    • SRM: Queries the SRA for pre-defined array specific errors
    • SRA: Returns error messages relating specifically to the FlashArray
  • queryConnectionParameters
    • SRM: Queries the SRA for parameters needed to connect to the array management system to perform array management operations
    • SRA: Returns questions for array manager configuration to request connection/credential information for the local and remote FlashArray.
  • discoverArrays
    • SRM: Discovers storage array pairs configured for replication
    • SRA: Returns FlashArray information, Purity level, controller information and serial number/name.
  • discoverDevices
    • SRM: Discovers replicated devices on a given storage array
    • SRA: Returns local and remotely replicated devices (name and state), hosts on FlashArray information, initiator and storage port identifiers. Also looks for demoted devices, devices used for test failover and recovered volumes that are not yet replicated.
  • queryReplicationSettings
    • SRM: Queries replication settings for a list of devices
    • SRA: Returns host grouping information for local, replicated devices
  • syncOnce
    • SRM: Requests immediate replication
    • SRA: Starts a FlashRecover “replicatenow” operation and creates new remote snapshots for the given devices on the remote array through their protection groups. If multiple protection groups are involved, they will all be replicated. The source device and the new snapshot name are returned to SRM along with the replication progress.
  • querySyncStatus
    • SRM: Queries the status of a replication initiated by syncOnce
    • SRA: Returns the source device and the new snapshot name to SRM along with the replication progress.
  • testFailoverStart
    • SRM: Creates writable temporary copies of the target devices
    • SRA: The SRA identifies the latest snapshot for each volume, identifies the ESXi connectivity information, and correlates it with the configured hosts on the FlashArray, favoring a host group over a host. It then creates a new volume for each source volume with the suffix -puresra-testfailover from the associated snapshot, and connects the volumes to the host or host group.
  • testFailoverStop
    • SRM: Deletes the temporary copies created by testFailoverStart
    • SRA: Disconnects and eradicates volumes created for a test recovery. Only volumes with the original prefix (the source name) and the puresra-testfailover suffix will be identified and eradicated. Replica names are returned to SRM.
  • prepareFailover
    • SRM: Makes source devices read-only and optionally takes a snapshot of the source devices in anticipation of a failover
    • SRA: The SRA renames the source volumes with a suffix of -puresra-demoted and disconnects them from the hosts. The original volume names are returned to SRM.
  • failover
    • SRM: Promotes target devices by stopping replication for those devices and making them writable
    • SRA: The SRA identifies the latest snapshot for each volume, identifies the ESXi connectivity information, and correlates it with the configured hosts on the FlashArray, favoring a host group over a host. It then either creates a new volume for each source volume or, if a volume from a previous recovery exists with the -puresra-demoted suffix, renames it with the suffix -puresra-failover; it then associates the snapshots and connects the volumes to the host or host group.
  • reverseReplication
    • SRM: Reverses array replication so that the original target array becomes the source array and vice versa
    • SRA: Renames recovery volumes to remove the -puresra-failover suffix, recreates the protection groups for the original source volumes on the target side, and adds the volumes to them.
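
A practical way to observe these operations is to watch for the suffixes above on the recovery array while a test recovery or failover runs. A minimal sketch from an admin workstation, assuming SSH access to the array CLI (pureuser and <array> are placeholders; purevol is the Purity CLI command referenced earlier in this article):

# List volumes created or renamed by the SRA (test failover, demoted and failover volumes)
ssh pureuser@<array> purevol list | grep puresra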

Footnotes

[i] Only created if there are already existing array managers created in SRM for the Pure Storage SRA.

[ii] Only created if one or more array pairs are enabled.

[iii] Only created if one or more array pairs are enabled.

[iv] Only created if “Replicate Recent Changes” is selected at the start of the test recovery.

[v] Only created if “Replicate Recent Changes” is selected at the start of the test recovery.

[vi] Only created if “Replicate Recent Changes” is selected at the start of the test recovery.

Troubleshooting when Formatting iSCSI VMFS Datastore Generates Error

Symptoms

vSphere reports the following error while attempting to format a VMFS datastore using a Pure Storage iSCSI LUN:

"HostDatastoreSystem.CreateVmfsDatastore" for object "<...>" on vCenter Server "<...>" failed

The LUN will report as online and available under the "Storage Adapters" section in the vSphere Client.

Diagnosis

This error can be caused by improper configuration in the network path that causes jumbo frames to be dropped or fragmented between the ESXi host and the FlashArray.

How to confirm Jumbo Frames can pass through the network

Run the following command from the ESXi Host in question via SSH:

vmkping -d -s 8972 <target portal ipaddress>

If no response is received, or the following message is returned, then jumbo frames are not successfully traversing the network:

sendto() failed (Message too long)

sendto() failed (Message too long)

sendto() failed (Message too long)

If so, there is an L2 device between the ESXi host and the FlashArray that is not allowing jumbo frames to pass. Have the customer check virtual and physical switches on the subnet to ensure jumbo frames are configured end-to-end.
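
For reference, 8972 bytes is the largest ICMP payload that fits in a 9000-byte MTU once the 28 bytes of IP and ICMP headers are added, and -d sets the don't-fragment bit so any link with a smaller MTU will fail the test. To check what MTU is configured on the ESXi side, the standard esxcli views should help (both list an MTU column):

esxcli network ip interface list
esxcli network vswitch standard list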

Solution

Make sure all network devices allow jumbo frames to pass from the ESXi host to the Pure Storage FlashArray.

Troubleshooting ESXi iSCSI Configuration Issue: Pure Shows Different Hosts in GUI and CLI

Troubleshooting when ESXi Hosts Disconnect with CHAP Enabled

Problem

Enabling CHAP authentication causes ESXi hosts to disconnect, and they are unable to reconnect.

Scenario

The array has CHAP authentication enabled, and the ESXi host is unable to reconnect after CHAP is configured on the host.

Cause

Purity does not support Dynamic Discovery with CHAP.

Solution

Follow this blog post for a more detailed guide.

Configure the ESXi host to use static CHAP; confirm that Dynamic Discovery CHAP is not set up and that Inherit from parent is not checked.

[Screenshot: ESXi CHAP configuration]


There are two methods of configuring CHAP with the Pure array (a scripted esxcli equivalent is sketched after the procedures):

Procedure 1 - Manually enter Static Discovery targets

  1. Configure the array to use CHAP by entering Host User and Host Password in GUI > Storage Tab > Host > Gear > Configure CHAP.
  2. Confirm iSCSI Adapter > Properties > Dynamic Discovery does NOT contain the array target.
  3. Configure the iSCSI initiator to use CHAP by selecting Use CHAP and entering a Name and Secret that match the array CHAP settings in vSphere Client > iSCSI Adapter > Properties > General tab > CHAP.
  4. Add Static Discovery array targets in vSphere Client > iSCSI Adapter > Properties > Static Discovery. Confirm the CHAP settings for each target are set to Inherit from parent.
  5. Rescan the adapter.

Procedure 2 - Enter CHAP settings for each discovered Static Discovery target

  1. Configure the array to use CHAP by entering Host User and Host Password in GUI > Storage Tab > Host > Gear > Configure CHAP.
  2. Enter a single array iSCSI port IP address in iSCSI Adapter > Properties > Dynamic Discovery.
  3. Confirm the Static Discovery list is populated with array iSCSI targets in iSCSI Adapter > Properties > Static Discovery.
  4. For each array Static Discovery target, configure the CHAP settings to NOT inherit from parent and Use CHAP in iSCSI Adapter > Properties > Static Discovery > <target> > Settings > CHAP.
  5. Rescan the adapter.
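
If you prefer the command line, Procedure 1 can be approximated with esxcli. This is a sketch only, not verified against every release; flag spellings can vary between ESXi versions, and vmhba33, <host-user>, <chap-secret>, <portal-ip> and <target-iqn> are placeholders:

# Set uni-directional CHAP at the adapter level (static targets then inherit it)
esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=required --authname=<host-user> --secret=<chap-secret>
# Add each array target under Static Discovery, then rescan
esxcli iscsi adapter discovery statictarget add --adapter=vmhba33 --address=<portal-ip>:3260 --name=<target-iqn>
esxcli storage core adapter rescan --adapter=vmhba33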
