The customer is unable to provision new VMs on a new Pure volume, even though the volume is not full, and the problem persists even after increasing the volume size.
In this example, the Pure datastore shows only 52% full on the ESXi host. See attached "Puredatastore" screenshot.
~ # df -h
Filesystem   Size   Used   Available  Use%  Mounted on
VMFS-5      41.0T  21.2T      19.8T    52%  /vmfs/volumes/puredatastore
The Pure FlashArray shows the volume is provisioned at 41TB, but only 1.2TB in total is used, with a high data reduction ratio.
purevol list --space
Name        Size  Thin Provisioning  Data Reduction  Total Reduction  Volume   Snapshots  Shared Space  System  Total
VM_Storage  41T   73%                7.9 to 1        30.2 to 1        939.07G  287.75G    -             -       1.20T
The error in the VMkernel log shows:
FS3DM: 2004: status No space left on device copying 1 extents between two files, bytesTransferred = 0 extentsTransferred: 0
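To confirm the host is hitting this condition, you can search the VMkernel log for the message (assuming the default log location on the ESXi host):

grep -i "no space left on device" /var/log/vmkernel.log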
The customer is unable to provision larger VMs on a Pure datastore mounted to the ESXi host. Creating a 10GB VM works, but creating a 30GB VM fails.
Follow the solution provided in VMware KB https://kb.vmware.com/selfservice/mi...rnalId=1007638 to gather the output and troubleshoot this issue:
vmkfstools -P -v 10 /vmfs/volumes/datastore_name
The following output shows that the datastore is running low on pointer (Ptr) blocks (it can similarly run out of inodes), which is why it behaves as if it is full even though free capacity remains.
~ # vmkfstools -P -v 10 /vmfs/volumes/puredatastore/
VMFS-5.60 file system spanning 1 partitions.
File system label (if any): puredatastore
Mode: public ATS-only
Capacity 45079708303360 (42991360 file blocks * 1048576), 21463906123776 (20469576 blocks) avail, max file size 69201586814976
Volume Creation Time: Wed Aug 13 06:40:01 2014
Files (max/free): 130000/116451
Ptr Blocks (max/free): 64512/245
Sub Blocks (max/free): 32000/29350
Secondary Ptr Blocks (max/free): 256/256
File Blocks (overcommit/used/overcommit %): 0/22521784/0
Ptr Blocks  (overcommit/used/overcommit %): 0/64267/0
Sub Blocks  (overcommit/used/overcommit %): 0/2650/0
Volume Metadata size: 1023770624
UUID: 53eb0841-1faf6578-b865-ecf4bbc519f8
Logical device: 53eb083d-9bd41bc0-17ca-ecf4bbc519f8
Partitions spanned (on "lvm"):
        naa.624a9370a2aedf261ad6c61800011010:1
Is Native Snapshot Capable: YES
OBJLIB-LIB: ObjLib cleanup done.
There are not enough free Ptr blocks to satisfy a larger VM, which requires more Ptr blocks. The solution is to follow the numbered steps listed at the end of this article.
In Pure Storage's vSphere plugin release 5.2.0, the recovery process described below is greatly simplified by a VMFS workflow built into the plugin; refer to the vSphere plugin documentation to read more about that workflow.
This article is for restoring a virtual machine from a Pure Storage FlashArray snapshot of a VMFS datastore only. It does not apply to VMs on vVols, to other third-party snapshot recovery processes, or to a Pure Storage FlashBlade. For restoring or undeleting a VM on a vVol datastore, vSphere plugin release 5.1.0 simplifies the process with a built-in workflow.
Please follow the steps outlined below to successfully restore a virtual machine from a Pure Storage FlashArray snapshot:
First, identify the VMFS datastore that contains the VM and the FlashArray volume that backs it.

vCenter GUI:

FlashArray GUI:
A little less specific than using the FlashArray CLI; you will need to know the name of the volume, or you may have to look at a few volumes.

The easiest way to do this is from the ESXi CLI and FlashArray CLI.
ESXi CLI:
Locate the datastore via esxcfg-scsidevs:
[root@ESXi-4:~] esxcfg-scsidevs -m
naa.624a937073e940225a2a52bb0003ae71:3  /vmfs/devices/disks/naa.624a937073e940225a2a52bb0003ae71:3  5b6b537a-4d4d8368-9e02-0025b521004f  0  ESXi-4-Boot-Lun
naa.624a9370bd452205599f42910001edc7:1  /vmfs/devices/disks/naa.624a9370bd452205599f42910001edc7:1  5b7d8183-f78ed720-ddf0-0025b521004d  0  sn1-405-25-Content-Library-Datastore
naa.624a9370bd452205599f42910001edc8:1  /vmfs/devices/disks/naa.624a9370bd452205599f42910001edc8:1  5b7d8325-b1db9568-4d28-0025b521004d  0  sn1-405-25-Datastore-1-LUN-150
naa.624a937098d1ff126d20469c000199ea:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199ea:1  5b7d78d7-2b993f30-6902-0025b521004d  0  sn1-405-21-ISO-Repository
naa.624a937098d1ff126d20469c000199eb:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199eb:1  5b7d8309-56bd3d78-a081-0025b521004d  0  sn1-405-21-Datastore-1-LUN-100
naa.624a937098d1ff126d20469c0001aad1:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c0001aad1:1  5b8f115f-2b499358-c2e1-0025b521004d  0  prod-sn1-405-c12-21-SRM-Placeholder
naa.624a937098d1ff126d20469c0001ae66:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c0001ae66:1  5b901f7e-a6bc0094-c0a3-0025b521003c  0  prod-sn1-405-c12-21-SRM-Datastore-1
naa.624a937098d1ff126d20469c00024c2e:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c00024c2e:1  5b96f277-04c3317f-85db-0025b521003c  0  Syncrep-sn1-405-prod-srm-datastore-1
naa.624a937098d1ff126d20469c00024c33:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c00024c33:1  5b96f28b-57eedbc0-ce59-0025b521004d  0  Syncrep-sn1-405-dev-srm-datastore-1
naa.624a9370bd452205599f42910003f8d8:1  /vmfs/devices/disks/naa.624a9370bd452205599f42910003f8d8:1  5ba8fbd3-f5f7b06e-5286-0025b521004f  0  Syncrep-sn1-405-prod-srm-datastore-2
[root@ESXi-4:~]
[root@ESXi-4:~] esxcfg-scsidevs -m | grep "sn1-405-21-Datastore-1-LUN-100"
naa.624a937098d1ff126d20469c000199eb:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199eb:1  5b7d8309-56bd3d78-a081-0025b521004d  0  sn1-405-21-Datastore-1-LUN-100
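Optionally, you can cross-check from the ESXi CLI that this NAA device is presented by the FlashArray (the device name below is the one identified in the example above):

[root@ESXi-4:~] esxcli storage core device list -d naa.624a937098d1ff126d20469c000199eb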
FlashArray CLI:
pureuser@sn1-405-c12-21> purevol list
Name                                             Size  Source                                                          Created                  Serial
dev-sn1-405-21-Datastore-1-LUN-41                5T    -                                                               2018-09-04 10:42:27 PDT  98D1FF126D20469C0001A6A1
prod-sn1-405-21-Datastore-1-LUN-100              15T   -                                                               2018-08-22 09:07:13 PDT  98D1FF126D20469C000199EB
prod-sn1-405-21-Prod-Cluster-RDM-FileShare-1     10T   -                                                               2018-08-24 20:24:02 PDT  98D1FF126D20469C00019ABD
prod-sn1-405-21-Prod-Cluster-RDM-Quorum-Witness  1G    -                                                               2018-08-24 20:23:37 PDT  98D1FF126D20469C00019ABC
prod-sn1-405-21-srm-datastore-1                  5T    sn1-405-c12-25:prod-sn1-405-21-srm-datastore-1-puresra-demoted  2018-09-05 11:24:15 PDT  98D1FF126D20469C0001AE66
prod-sn1-405-21-srm-placeholder                  100G  -                                                               2018-09-04 16:08:15 PDT  98D1FF126D20469C0001AAD1
pureuser@sn1-405-c12-21>
pureuser@sn1-405-c12-21> purevol list prod-sn1-405-21-Datastore-1-LUN-100
Name                                 Size  Source  Created                  Serial
prod-sn1-405-21-Datastore-1-LUN-100  15T   -       2018-08-22 09:07:13 PDT  98D1FF126D20469C000199EB

Similar to the GUI, we can match the serial number embedded in the device NAA to the FlashArray volume's serial number to confirm that this is the datastore and volume we need to work with.
Next, list the available snapshots of the volume you want to restore from (the following example uses a different array and volume, slc-production):

pureuser@slc-405> purevol list slc-production --snap
Name                 Size  Source          Created                  Serial
slc-production.4674  500G  slc-production  2017-01-17 11:26:18 MST  309582CAEE2411F900011242
Copy the chosen snapshot to a new recovery volume:

pureuser@slc-405> purevol copy slc-production.4674 slc-production-recovery
Name                     Size  Source          Created                  Serial
slc-production-recovery  500G  slc-production  2017-01-17 11:26:56 MST  309582CAEE2411F900011243
pureuser@slc-405> purevol list slc-production-recovery
Name                     Size  Source          Created                  Serial
slc-production-recovery  500G  slc-production  2017-01-17 11:26:18 MST  309582CAEE2411F900011243
On the newly created volume, the 'Created' date and time should match the timestamp of the snapshot from which it was copied.
Connect the recovery volume to the ESXi host group:

pureuser@slc-405> purevol connect --hgroup ESXi-HG slc-production-recovery
Name                     Host Group  Host       LUN
slc-production-recovery  ESXi-HG     slc-esx-1  253
Note that when this recovery LUN is presented to the ESXi host, it is identified as a snapshot copy of the 'slc-production' datastore we will be recovering.
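If the recovery volume does not appear on the ESXi host yet, rescan storage from the ESXi CLI (or rescan the host's storage adapters from the vSphere client):

[root@slc-esx-1:~] esxcli storage core adapter rescan --all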
When creating the datastore, ensure you choose the 'Assign a new signature' option.
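If you prefer the command line, the equivalent resignature can be done with the esxcli VMFS snapshot commands (a sketch; 'slc-production' is the original volume label from the example above, which is how the copied volume will be reported):

[root@slc-esx-1:~] esxcli storage vmfs snapshot list
[root@slc-esx-1:~] esxcli storage vmfs snapshot resignature -l slc-production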
It is not uncommon for the resignature process to take several minutes to complete. If this task does not complete after 10 minutes, engage additional resources for assistance.
When registering the recovery virtual machine on the ESXi host while the original VM is still live on that host, ensure you rename the recovery VM. Otherwise you will have two VMs with the same name and will need to inspect the underlying datastore properties of each to determine which is the recovery VM and which is the original.
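For example, the recovery VM can be registered under a new name from the ESXi CLI (a sketch; the datastore and VM paths below are hypothetical placeholders for your resignatured datastore and VM):

[root@slc-esx-1:~] vim-cmd solo/registervm /vmfs/volumes/snap-xxxxxxxx-slc-production/MyVM/MyVM.vmx MyVM-recovery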
When powering on the recovery VM you may be asked whether the VM was 'Copied' or 'Moved'. If the original VM has already been destroyed and is no longer in inventory, you can safely choose 'I moved it'. If the original VM has not been deleted and will remain in use, select 'I copied it' so that there is no UUID conflict between the VMs.
You may need to identify the LUN (Logical Unit Number) from an NAA (Network Addressing Authority) identifier for a number of processes. (One or more digits may sit between the leading Pure Storage identifier and the trailing 24-digit serial number.)
The LUN serial number is included as part of the NAA. From the serial number, you can then identify the LUN ID.
The leading digits in the NAA identify Pure Storage, and the last 24 digits are the LUN's serial number.

Example NAA: naa.624a9370a78e6e1d4bacd0960001001a

In the above example:

The leading digits, 624a937, are the identifier for Pure Storage, verifying that the device originates from a Pure Storage array.

The last 24 digits, a78e6e1d4bacd0960001001a, are the LUN's serial number.

(In this example, one digit, 0, sits between the two sets of digits that are relevant here.)
From a list of Pure Storage volumes, you can identify the volume (LUN) that corresponds to a serial number. For example, to find the volume for the serial number above, you could enter the following command:
pureuser-ct0:~# purevol list
The output would include a line similar to the following:

Name         Size  Source  Created                  Serial
CLFDEV01_26  2T    -       2014-05-27 15:05:41 EDT  A78E6E1D4BACD0960001001A
In this output, the first field, CLFDEV01_26, is the volume (LUN) name, which corresponds to both the serial number and the NAA.
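If you want to derive the serial number from an NAA without counting characters, a minimal sketch in bash (run from any Linux or macOS workstation; the NAA is the example used above):

naa="naa.624a9370a78e6e1d4bacd0960001001a"
serial=$(echo "${naa: -24}" | tr 'a-f' 'A-F')   # last 24 characters, upper-cased
echo "$serial"                                  # A78E6E1D4BACD0960001001A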
For information on identifying LUN IDs from a FlashArray, see Identifying LUNs (Logical Unit Numbers) from Purity to Correlate to a Host.
To resolve the Ptr block exhaustion described at the beginning of this article, do one of the following:

1. Delete some of the VMs, files, or templates from the datastore to release some of the Ptr blocks so that more VMs can be created.
2. Create a new datastore and create the new VMs on that datastore.
This can be a fairly common issue with larger datastores (30+ TB in size); the typical workaround is to use multiple datastores of around 30TB in size instead.
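After freeing space or moving VMs, you can confirm that pointer blocks have been released by re-running the command used earlier and filtering for the Ptr block counters (the datastore path is the example from this article); the 'Ptr Blocks (max/free)' value should show more free blocks than the 245 seen earlier:

~ # vmkfstools -P -v 10 /vmfs/volumes/puredatastore | grep "Ptr Blocks"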