The Pure Storage icon is not showing up in the vCenter Web Client. This can be caused by the JDK not being properly installed on the VMware vCenter Server. We can verify whether the JDK is installed correctly, and which Java version is being used by the Pure Plugin, in the vSphere Web Client main log file (vsphere_client_virgo.log).
An incorrect installation or configuration of the JDK will cause issues with the Pure Plugin.
The Java version can be found in the vsphere_client_virgo.log, and is only written to the log when the vSphere Web Client service is restarted.
In this example, JDK 7u17 is installed on Windows Server 2008 R2 running vCenter Server 5.1.
[2016-07-15 10:22:41.225] INFO [INFO ] start-signalling-1 com.vmware.vise.util.debug.SystemUsageMonitor System info : OS - Windows Server 2008 R2 Arch - amd64 Java Version - 1.7.0_17
[2016-07-15 10:22:41.256] INFO [INFO ] Timer-2 com.vmware.vise.util.debug.SystemUsageMonitor Heap : init = 201292928(196575K) used = 309326640(302076K) committed = 672727040(656960K) max = 954466304(932096K) non-Heap : init = 136773632(133568K) used = 82368472(80437K) committed = 142344192(139008K) max = 318767104(311296K) No of loaded classes : 13796
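To locate this entry without reading through the whole log, you can search for it directly (a sketch; the log path assumes a default vSphere Web Client 5.x installation on Windows):

findstr /C:"Java Version" "C:\ProgramData\VMware\vSphere Web Client\serviceability\logs\vsphere_client_virgo.log"

Remember that the line only appears after the vSphere Web Client service has been restarted, so restart the service first if the search returns nothing.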
When viewing the Performance graph under the Pure Storage tab in the vSphere Web Client, you get the error "Content was blocked because it was not signed by a valid security certificate. For more information, see Certificate Errors in Internet Explorer Help".
This issue can occur in any web browser, not just Internet Explorer.
On the vSphere Web Client, the user is unable to view the Performance graph in the Pure Storage tab.
Open a new browser window or tab to log in to the Pure FlashArray GUI. Once you are able to see the login screen of the Pure FlashArray, then the browser has accepted the SSL certificate from the Pure FlashArray.
Then go back to the vSphere Web Client; the Performance graph should now be displayed under the Pure Storage tab.
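If the graph still does not load, you can inspect the certificate the array is presenting before involving the browser (a sketch; flasharray.example.com is a placeholder for your array's management address):

openssl s_client -connect flasharray.example.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates

The subject, issuer, and validity dates shown should match the certificate the browser is being asked to accept.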
When attempting to install the Pure Storage vSphere plugin in a vSphere 5.1 environment, the following errors are reported in the vSphere virgo client logs:
[2016-06-30 09:44:02.629] ERROR [ERROR] P Connection(2)-170.92.17.57 org.eclipse.virgo.kernel.deployer.management.StandardDeployer Exception filtered from JMX invocation
org.eclipse.virgo.kernel.deployer.core.DeploymentException: Error creating bean with name 'FlashArrayDataAdapter': Invocation of init method failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'FlashArrayDataAdapterImpl' defined in URL [bundleentry://243.fwk971840267/META-INF/spring/bundle-context.xml]: Instantiation of bean failed; nested exception is java.lang.UnsupportedClassVersionError: com/purestorage/rest/exceptions/PureException : Unsupported major.minor version 51.0
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'FlashArrayDataAdapterImpl' defined in URL [bundleentry://243.fwk971840267/META-INF/spring/bundle-context.xml]: Instantiation of bean failed; nested exception is java.lang.UnsupportedClassVersionError: com/purestorage/rest/exceptions/PureException : Unsupported major.minor version 51.0
Caused by: java.lang.UnsupportedClassVersionError: com/purestorage/rest/exceptions/PureException : Unsupported major.minor version 51.0
    at com.purestorage.FlashArrayDataAdapter.<clinit>(Unknown Source)
The Pure Storage plugin icon does not show up in the vSphere Web Client.
The error message "Unsupported major.minor version 51.0" comes from Java and does not mean that vCenter 5.1 is unsupported. It indicates that vCenter 5.1 is not running on JDK 7.
See https://en.wikipedia.org/wiki/Java_class_file for the full mapping from JDK version to the major version number of the class file format. For Java SE 7, the class file major version is 51 (0x33 hex).
vCenter 5.1 comes with JDK 6 by default, which causes this exception: the plugin's classes are compiled for Java SE 7 (major version 51) and cannot be loaded by a JDK 6 runtime. The solution is to update to JDK 7u17, as per the vSphere Plugin FAQ, for the vSphere plugin to work.
Link to the JDK 7u17:
http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html#jdk-7u17-oth-JPR
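After installing, you can confirm which Java version is active from a command prompt on the vCenter server (a sketch; the exact build strings below assume JDK 7u17):

java -version
java version "1.7.0_17"
Java(TM) SE Runtime Environment (build 1.7.0_17-b02)
Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)

Restart the vSphere Web Client service afterwards so the new Java version is picked up and logged, as shown earlier.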
The customer is unable to provision new VMs on a Pure volume, even though the volume is not full, and the problem persists after increasing the volume size.
In this example, the Pure datastore shows only 52% full on the ESXi host. See attached "Puredatastore" screenshot.
~ # df -h
Filesystem  Size   Used   Available  Use%  Mounted on
VMFS-5      41.0T  21.2T  19.8T      52%   /vmfs/volumes/puredatastore
The Pure FlashArray shows the volume is provisioned 41TB but only a total of 1.2TB is used with a high data reduction ratio.
purevol list --space
Name        Size  Thin Provisioning  Data Reduction  Total Reduction  Volume   Snapshots  Shared Space  System  Total
VM_Storage  41T   73%                7.9 to 1        30.2 to 1        939.07G  287.75G    -             -       1.20T
The error in the VMkernel log shows:
FS3DM: 2004: status No space left on device copying 1 extents between two files, bytesTransferred = 0 extentsTransferred: 0
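To confirm a host is hitting this condition, you can search the VMkernel log directly (a sketch; /var/log/vmkernel.log is the default location on ESXi):

grep "No space left on device" /var/log/vmkernel.log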
The customer is unable to provision larger VMs on a Pure datastore mounted to the ESXi host. Creating a 10 GB VM works, but creating a 30 GB VM fails.
Follow the solution provided in VMware KB https://kb.vmware.com/selfservice/mi...rnalId=1007638 to gather the output and troubleshoot this issue:
vmkfstools -P -v 10 /vmfs/volumes/datastore_name
The following output shows that the datastore is running low on pointer (Ptr) blocks, which is why it behaves as if it were full even though free capacity remains.
~ # vmkfstools -P -v 10 /vmfs/volumes/puredatastore/
VMFS-5.60 file system spanning 1 partitions.
File system label (if any): puredatastore
Mode: public ATS-only
Capacity 45079708303360 (42991360 file blocks * 1048576), 21463906123776 (20469576 blocks) avail, max file size 69201586814976
Volume Creation Time: Wed Aug 13 06:40:01 2014
Files (max/free): 130000/116451
Ptr Blocks (max/free): 64512/245
Sub Blocks (max/free): 32000/29350
Secondary Ptr Blocks (max/free): 256/256
File Blocks (overcommit/used/overcommit %): 0/22521784/0
Ptr Blocks (overcommit/used/overcommit %): 0/64267/0
Sub Blocks (overcommit/used/overcommit %): 0/2650/0
Volume Metadata size: 1023770624
UUID: 53eb0841-1faf6578-b865-ecf4bbc519f8
Logical device: 53eb083d-9bd41bc0-17ca-ecf4bbc519f8
Partitions spanned (on "lvm"): naa.624a9370a2aedf261ad6c61800011010:1
Is Native Snapshot Capable: YES
OBJLIB-LIB: ObjLib cleanup done.
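To spot this condition quickly across every mounted VMFS volume, a small loop over the same vmkfstools output can help (a sketch for the ESXi shell):

for ds in /vmfs/volumes/*; do
  if [ -d "$ds" ]; then
    # note: each datastore appears twice (once by UUID, once via its label symlink)
    echo "$ds"
    vmkfstools -P -v 10 "$ds" 2>/dev/null | grep "Ptr Blocks (max/free)"
  fi
done

Any volume whose free Ptr Blocks count is approaching zero will show the same symptoms described above.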
There are not enough Ptr blocks left to satisfy a larger VM, which requires more Ptr blocks. The solution would be to do one of the following:
1. Delete some of the VMs, files, or templates from the datastore to release some of the Ptr blocks so more VMs can be created.
2. Create a new datastore and create the new VMs on that datastore.
This can be a fairly common issue with larger datastores (30+ TB in size); the typical workaround is to use multiple datastores of around 30 TB each.
In Pure Storage's vSphere plugin release 5.2.0, this process is now greatly simplified by using a VMFS workflow built into the plugin. Read more about the workflow here.
This article is for restoring a virtual machine from a Pure Storage FlashArray snapshot of a VMFS datastore only. It does not apply to VMs on vVols, other third-party snapshot recovery processes, or a Pure Storage FlashBlade. For restoring or undeleting a VM on a vVol datastore, the process is simplified with a built-in workflow as of vSphere plugin release 5.1.0.
Please follow the steps outlined below to successfully restore a virtual machine from a Pure Storage FlashArray snapshot:
vCenter GUI
FlashArray GUI
This method is a little less specific than using the FlashArray CLI; you need to know the name of the volume, or you may have to look at a few volumes.
The easiest way to do this is from the ESXi CLI and FlashArray CLI.
ESXi CLI:
Locate the Datastore via esxcfg-scsidevs:
[root@ESXi-4:~] esxcfg-scsidevs -m
naa.624a937073e940225a2a52bb0003ae71:3 /vmfs/devices/disks/naa.624a937073e940225a2a52bb0003ae71:3 5b6b537a-4d4d8368-9e02-0025b521004f 0 ESXi-4-Boot-Lun
naa.624a9370bd452205599f42910001edc7:1 /vmfs/devices/disks/naa.624a9370bd452205599f42910001edc7:1 5b7d8183-f78ed720-ddf0-0025b521004d 0 sn1-405-25-Content-Library-Datastore
naa.624a9370bd452205599f42910001edc8:1 /vmfs/devices/disks/naa.624a9370bd452205599f42910001edc8:1 5b7d8325-b1db9568-4d28-0025b521004d 0 sn1-405-25-Datastore-1-LUN-150
naa.624a937098d1ff126d20469c000199ea:1 /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199ea:1 5b7d78d7-2b993f30-6902-0025b521004d 0 sn1-405-21-ISO-Repository
naa.624a937098d1ff126d20469c000199eb:1 /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199eb:1 5b7d8309-56bd3d78-a081-0025b521004d 0 sn1-405-21-Datastore-1-LUN-100
naa.624a937098d1ff126d20469c0001aad1:1 /vmfs/devices/disks/naa.624a937098d1ff126d20469c0001aad1:1 5b8f115f-2b499358-c2e1-0025b521004d 0 prod-sn1-405-c12-21-SRM-Placeholder
naa.624a937098d1ff126d20469c0001ae66:1 /vmfs/devices/disks/naa.624a937098d1ff126d20469c0001ae66:1 5b901f7e-a6bc0094-c0a3-0025b521003c 0 prod-sn1-405-c12-21-SRM-Datastore-1
naa.624a937098d1ff126d20469c00024c2e:1 /vmfs/devices/disks/naa.624a937098d1ff126d20469c00024c2e:1 5b96f277-04c3317f-85db-0025b521003c 0 Syncrep-sn1-405-prod-srm-datastore-1
naa.624a937098d1ff126d20469c00024c33:1 /vmfs/devices/disks/naa.624a937098d1ff126d20469c00024c33:1 5b96f28b-57eedbc0-ce59-0025b521004d 0 Syncrep-sn1-405-dev-srm-datastore-1
naa.624a9370bd452205599f42910003f8d8:1 /vmfs/devices/disks/naa.624a9370bd452205599f42910003f8d8:1 5ba8fbd3-f5f7b06e-5286-0025b521004f 0 Syncrep-sn1-405-prod-srm-datastore-2

[root@ESXi-4:~] esxcfg-scsidevs -m | grep "sn1-405-21-Datastore-1-LUN-100"
naa.624a937098d1ff126d20469c000199eb:1 /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199eb:1 5b7d8309-56bd3d78-a081-0025b521004d 0 sn1-405-21-Datastore-1-LUN-100
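One way to tie this output to a FlashArray volume: the device's naa identifier is Pure Storage's vendor prefix (624a9370) followed by the volume's serial number. A sketch that extracts just the serial, uppercased for comparison against the purevol listing below:

esxcfg-scsidevs -m | grep "sn1-405-21-Datastore-1-LUN-100" | awk '{print $1}' | sed -e 's/^naa\.624a9370//' -e 's/:[0-9]*$//' | tr 'a-f' 'A-F'
98D1FF126D20469C000199EB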
FlashArray CLI:
pureuser@sn1-405-c12-21> purevol list
Name                                             Size  Source                                                          Created                  Serial
dev-sn1-405-21-Datastore-1-LUN-41                5T    -                                                               2018-09-04 10:42:27 PDT  98D1FF126D20469C0001A6A1
prod-sn1-405-21-Datastore-1-LUN-100              15T   -                                                               2018-08-22 09:07:13 PDT  98D1FF126D20469C000199EB
prod-sn1-405-21-Prod-Cluster-RDM-FileShare-1     10T   -                                                               2018-08-24 20:24:02 PDT  98D1FF126D20469C00019ABD
prod-sn1-405-21-Prod-Cluster-RDM-Quorum-Witness  1G    -                                                               2018-08-24 20:23:37 PDT  98D1FF126D20469C00019ABC
prod-sn1-405-21-srm-datastore-1                  5T    sn1-405-c12-25:prod-sn1-405-21-srm-datastore-1-puresra-demoted  2018-09-05 11:24:15 PDT  98D1FF126D20469C0001AE66
prod-sn1-405-21-srm-placeholder                  100G  -                                                               2018-09-04 16:08:15 PDT  98D1FF126D20469C0001AAD1

pureuser@sn1-405-c12-21> purevol list prod-sn1-405-21-Datastore-1-LUN-100
Name                                 Size  Source  Created                  Serial
prod-sn1-405-21-Datastore-1-LUN-100  15T   -       2018-08-22 09:07:13 PDT  98D1FF126D20469C000199EB

Similar to the GUI, we can match the serial embedded in the device's naa identifier to the FlashArray volume Serial number to confirm this is the datastore and volume we need to work with.
pureuser@slc-405> purevol list slc-production --snap
Name                 Size  Source          Created                  Serial
slc-production.4674  500G  slc-production  2017-01-17 11:26:18 MST  309582CAEE2411F900011242
pureuser@slc-405> purevol copy slc-production.4674 slc-production-recovery
Name                     Size  Source          Created                  Serial
slc-production-recovery  500G  slc-production  2017-01-17 11:26:56 MST  309582CAEE2411F900011243
pureuser@slc-405> purevol list slc-production-recovery
Name                     Size  Source          Created                  Serial
slc-production-recovery  500G  slc-production  2017-01-17 11:26:18 MST  309582CAEE2411F900011243
On the newly created volume, the 'Created' date and time should match the timestamp of when the snapshot was created.
pureuser@slc-405> purevol connect --hgroup ESXi-HG slc-production-recovery
Name                     Host Group  Host       LUN
slc-production-recovery  ESXi-HG     slc-esx-1  253
Note in the output above that the recovery LUN is identified as a snapshot of the 'slc-production' datastore we will be recovering.
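If you are working from the ESXi shell instead of the vCenter GUI, the host side of this step looks roughly like the following (a sketch; both commands are standard esxcli):

esxcli storage core adapter rescan --all
esxcli storage vmfs snapshot list

The snapshot list should report the recovery volume with the original 'slc-production' VMFS label and indicate that it can be resignatured.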
While creating the datastore, ensure you choose the 'Assign a new signature' option.
It is not uncommon for the resignature process to take several minutes to complete. If this task does not complete after 10 minutes, engage additional resources for assistance.
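The resignature can also be triggered from the ESXi shell (a sketch, using the VMFS label from this example):

esxcli storage vmfs snapshot resignature --volume-label=slc-production

Once resignatured, the volume mounts under a generated name of the form 'snap-xxxxxxxx-slc-production', which you can rename afterwards in vCenter.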
When registering the recovery virtual machine to the ESXi host while the original VM is still live on that host, ensure you rename the recovery VM. Otherwise you will have two VMs with the same name, and you will need to look at the underlying datastore properties to determine which VM is the recovery and which is the original.
When powering on the recovery VM you may be asked if the VM was 'Copied' or 'Moved'. If the original VM has already been destroyed and is no longer in inventory, you can safely choose 'I moved it'. If the original VM is not deleted and will remain in use, select 'I copied it' so that there is no UUID conflict between the VMs.
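Registration itself can also be scripted from the ESXi shell (a sketch; the snap- datastore name and VM paths are hypothetical placeholders):

vim-cmd solo/registervm /vmfs/volumes/snap-1a2b3c4d-slc-production/slc-vm-01/slc-vm-01.vmx
vim-cmd vmsvc/getallvms

The first command returns the new VM ID; the second lists all registered VMs so you can confirm the recovery VM is present and rename it before powering on, as noted above.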