In Pure Storage's vSphere plugin release 5.2.0, this process is now greatly simplified by using a VMFS workflow built into the plugin. Read more about the workflow here.
This article covers restoring a virtual machine from a Pure Storage FlashArray snapshot of a VMFS datastore only. It does not apply to VMs on vVols, to other third-party snapshot recovery processes, or to Pure Storage FlashBlade. For restoring or undeleting a VM on a vVol datastore, the process is simplified with a built-in workflow as of vSphere plugin release 5.1.0.
Please follow the steps outlined below to successfully restore a virtual machine from a Pure Storage FlashArray snapshot:
vCenter GUI

FlashArray GUI

Identifying the volume in the FlashArray GUI is a little less specific than using the FlashArray CLI: you will need to know the name of the volume, or you may have to look through a few volumes.
The easiest way to do this is from the ESXi CLI and FlashArray CLI.
ESXi CLI:
Locate the datastore via esxcfg-scsidevs:

[root@ESXi-4:~] esxcfg-scsidevs -m
naa.624a937073e940225a2a52bb0003ae71:3  /vmfs/devices/disks/naa.624a937073e940225a2a52bb0003ae71:3  5b6b537a-4d4d8368-9e02-0025b521004f  0  ESXi-4-Boot-Lun
naa.624a9370bd452205599f42910001edc7:1  /vmfs/devices/disks/naa.624a9370bd452205599f42910001edc7:1  5b7d8183-f78ed720-ddf0-0025b521004d  0  sn1-405-25-Content-Library-Datastore
naa.624a9370bd452205599f42910001edc8:1  /vmfs/devices/disks/naa.624a9370bd452205599f42910001edc8:1  5b7d8325-b1db9568-4d28-0025b521004d  0  sn1-405-25-Datastore-1-LUN-150
naa.624a937098d1ff126d20469c000199ea:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199ea:1  5b7d78d7-2b993f30-6902-0025b521004d  0  sn1-405-21-ISO-Repository
naa.624a937098d1ff126d20469c000199eb:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199eb:1  5b7d8309-56bd3d78-a081-0025b521004d  0  sn1-405-21-Datastore-1-LUN-100
naa.624a937098d1ff126d20469c0001aad1:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c0001aad1:1  5b8f115f-2b499358-c2e1-0025b521004d  0  prod-sn1-405-c12-21-SRM-Placeholder
naa.624a937098d1ff126d20469c0001ae66:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c0001ae66:1  5b901f7e-a6bc0094-c0a3-0025b521003c  0  prod-sn1-405-c12-21-SRM-Datastore-1
naa.624a937098d1ff126d20469c00024c2e:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c00024c2e:1  5b96f277-04c3317f-85db-0025b521003c  0  Syncrep-sn1-405-prod-srm-datastore-1
naa.624a937098d1ff126d20469c00024c33:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c00024c33:1  5b96f28b-57eedbc0-ce59-0025b521004d  0  Syncrep-sn1-405-dev-srm-datastore-1
naa.624a9370bd452205599f42910003f8d8:1  /vmfs/devices/disks/naa.624a9370bd452205599f42910003f8d8:1  5ba8fbd3-f5f7b06e-5286-0025b521004f  0  Syncrep-sn1-405-prod-srm-datastore-2
[root@ESXi-4:~]
[root@ESXi-4:~] esxcfg-scsidevs -m | grep "sn1-405-21-Datastore-1-LUN-100"
naa.624a937098d1ff126d20469c000199eb:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199eb:1  5b7d8309-56bd3d78-a081-0025b521004d  0  sn1-405-21-Datastore-1-LUN-100
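As an alternative sketch (assuming the esxcli namespace is available on this host), the VMFS extent list also maps a datastore name to its backing NAA device; the datastore name below is the example from the output above:

[root@ESXi-4:~] esxcli storage vmfs extent list | grep "sn1-405-21-Datastore-1-LUN-100"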
FlashArray CLI:

pureuser@sn1-405-c12-21> purevol list
Name                                             Size  Source                                                          Created                  Serial
dev-sn1-405-21-Datastore-1-LUN-41                5T    -                                                               2018-09-04 10:42:27 PDT  98D1FF126D20469C0001A6A1
prod-sn1-405-21-Datastore-1-LUN-100              15T   -                                                               2018-08-22 09:07:13 PDT  98D1FF126D20469C000199EB
prod-sn1-405-21-Prod-Cluster-RDM-FileShare-1     10T   -                                                               2018-08-24 20:24:02 PDT  98D1FF126D20469C00019ABD
prod-sn1-405-21-Prod-Cluster-RDM-Quorum-Witness  1G    -                                                               2018-08-24 20:23:37 PDT  98D1FF126D20469C00019ABC
prod-sn1-405-21-srm-datastore-1                  5T    sn1-405-c12-25:prod-sn1-405-21-srm-datastore-1-puresra-demoted  2018-09-05 11:24:15 PDT  98D1FF126D20469C0001AE66
prod-sn1-405-21-srm-placeholder                  100G  -                                                               2018-09-04 16:08:15 PDT  98D1FF126D20469C0001AAD1
pureuser@sn1-405-c12-21>
pureuser@sn1-405-c12-21> purevol list prod-sn1-405-21-Datastore-1-LUN-100
Name                                 Size  Source  Created                  Serial
prod-sn1-405-21-Datastore-1-LUN-100  15T   -       2018-08-22 09:07:13 PDT  98D1FF126D20469C000199EB

Similar to the GUI, we can match the NAA of the datastore's backing device to the FlashArray volume serial number (the last 24 digits of the NAA) to confirm this is the datastore and volume we need to work with.
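As a quick cross-check on the ESXi side (a sketch, reusing the device NAA from the output above), esxcli can display the single device backing the datastore; the last 24 hex digits of the NAA are the FlashArray volume serial number:

[root@ESXi-4:~] esxcli storage core device list -d naa.624a937098d1ff126d20469c000199eb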
List the snapshots of the production volume to find the point in time to recover from:

pureuser@slc-405> purevol list slc-production --snap
Name                 Size  Source          Created                  Serial
slc-production.4674  500G  slc-production  2017-01-17 11:26:18 MST  309582CAEE2411F900011242

Copy the chosen snapshot to a new recovery volume:

pureuser@slc-405> purevol copy slc-production.4674 slc-production-recovery
Name                     Size  Source          Created                  Serial
slc-production-recovery  500G  slc-production  2017-01-17 11:26:56 MST  309582CAEE2411F900011243

pureuser@slc-405> purevol list slc-production-recovery
Name                     Size  Source          Created                  Serial
slc-production-recovery  500G  slc-production  2017-01-17 11:26:18 MST  309582CAEE2411F900011243
The 'Created' date and time on the newly created volume should match the timestamp of when the snapshot was created.
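If the goal were instead to roll the existing volume back to the snapshot in place rather than mount a copy, Purity also allows overwriting a volume from one of its snapshots. This is destructive to the volume's current contents, so treat the following only as a hedged sketch (assuming your Purity version supports the --overwrite option) and prefer the recovery-volume approach used in this article:

pureuser@slc-405> purevol copy --overwrite slc-production.4674 slc-production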
Connect the recovery volume to the host group containing the ESXi host:

pureuser@slc-405> purevol connect --hgroup ESXi-HG slc-production-recovery
Name                     Host Group  Host       LUN
slc-production-recovery  ESXi-HG     slc-esx-1  253
Note in the output above that the recovery LUN is identified as a snapshot of the 'slc-production' datastore that we will be recovering.
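Before the newly connected volume appears in the vSphere 'New Datastore' wizard, the ESXi host may need a storage rescan. A minimal sketch from the ESXi CLI (the same rescan can be triggered from the vSphere client):

# Rescan all HBAs for new devices, then refresh the VMFS volume view
esxcli storage core adapter rescan --all
vmkfstools -V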
When creating the datastore, ensure you choose the 'Assign a new signature' option.
It is not uncommon for the resignature process to take several minutes to complete. If this task does not complete after 10 minutes, engage additional resources for assistance.
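If you prefer to resignature from the ESXi CLI instead of the wizard, a minimal sketch follows; the volume label is the example production datastore name used earlier, so substitute your own:

# List unresolved VMFS copies (snapshot volumes) detected by this host
esxcli storage vmfs snapshot list
# Resignature the copy by its original volume label
esxcli storage vmfs snapshot resignature -l "slc-production"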
When registering the recovery virtual machine on an ESXi host while the original VM is still live, be sure to rename the recovery VM. If you do not, you will have two VMs with the same name and will need to look at the underlying datastore properties to determine which VM is the recovery and which is the original.
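Registration can also be done from the ESXi CLI. The datastore path and VM names below are hypothetical placeholders for illustration:

# Register the recovered VM under a new display name to avoid a duplicate
vim-cmd solo/registervm /vmfs/volumes/snap-xxxxxxxx-slc-production/MyVM/MyVM.vmx MyVM-recovered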
When powering on the recovery VM, you may be asked whether the VM was 'Copied' or 'Moved'. If the original VM has already been destroyed and is no longer in inventory, you can safely choose 'I moved it'. If the original VM has not been deleted and will remain in inventory, select the 'I copied it' option so that there is no UUID conflict between the VMs.
You may need to identify the LUN (Logical Unit Number) from an NAA (Network Addressing Authority) identifier for a number of processes. (One or more digits may sit between the first set of six digits and the last set of 24 digits.)
The LUN serial number is included as part of the NAA. From the serial number, you can then identify the LUN ID.
The first six digits of the NAA (after the 'naa.' prefix) identify Pure Storage. The last 24 digits are the LUN's serial number.
Example NAA: naa.624a9370a78e6e1d4bacd0960001001a

In the above example:

The first six digits, 624a93, are the identifier for Pure Storage, verifying that the device originates from a Pure Storage array.

The last 24 digits, a78e6e1d4bacd0960001001a, are the LUN's serial number.

(In this example, the remaining digits, 70, sit between the two sets of digits that are relevant here.)
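As a small sketch (assuming a bash shell on an administrative host), the serial can be pulled straight off the NAA string:

# The last 24 hex characters of a Pure Storage NAA are the volume serial number
naa="naa.624a9370a78e6e1d4bacd0960001001a"
echo "${naa: -24}"   # prints a78e6e1d4bacd0960001001a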
From a list of Pure Storage volumes, you can identify the LUN ID from the serial number. For example, to identify the LUN ID matching the serial number above, you could enter the following command:
pureuser-ct0:~# purevol list
A result like the following would be returned:
CLFDEV01_26 2T - 2014-05-27 15:05:41 EDT A78E6E1D4BACD0960001001A
In this result, the first field, CLFDEV01_26, is the LUN ID (the volume name), which corresponds to both the serial number and the NAA.
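A hedged sketch for doing this lookup from an administrative host, assuming SSH access to the array (the array name is the example used earlier in this article):

# List all volumes on the array and filter on the serial taken from the NAA
ssh pureuser@slc-405 "purevol list" | grep -i a78e6e1d4bacd0960001001a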
For information on identifying LUN IDs from a FlashArray, see Identifying LUNs (Logical Unit Numbers) from Purity to Correlate to a Host.
This guide shows how to re-scan and examine FC LUNs on Solaris 10/11. This is required for detecting new LUNs and assists with basic storage connectivity troubleshooting.
Re-scanning FC LUNs on Solaris is generally non-disruptive in a well-configured environment under normal load.
The following commands are useful for general SAN stack fact-finding on Solaris:
root@Unixarena-SOL11:~# luxadm -e port | grep CONNECTED
/devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0:devctl    CONNECTED
/devices/pci@1d,700000/SUNW,qlc@3/fp@0,0:devctl      CONNECTED
Unix@sol# cfgadm -al -o show_FCP_dev | grep fc-fabric
c2    fc-fabric    connected    configured    unknown
c4    fc-fabric    connected    configured    unknown
Unix@sol#
Get the HBA port WWNs using the fcinfo command:

Unix@sol# fcinfo hba-port | grep Port
HBA Port WWN: 10000000c884bb48
HBA Port WWN: 10000000c884bb49
HBA Port WWN: 10000000c884b85c
HBA Port WWN: 10000000c884b85d
Unix@sol#
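As a related sketch, fcinfo can also list the remote (target) ports visible through a given HBA port, using one of the WWNs from the output above:

Unix@sol# fcinfo remote-port -p 10000000c884b85c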
You can also use the luxadm command to view the ports on the fabric, if the HBA is already connected to the FC switch:

Unix@sol# luxadm -e dump_map /dev/cfg/c4
Pos  Port_ID  Hard_Addr  Port WWN          Node WWN          Type
0    29900    0          50080e8008cfb814  50080e8008cfb814  0x0  (Disk device)
1    27400    0          10000000c884b85c  20000000c884b85c  0x1f (Unknown Type,Host Bus Adapter)
Unix@sol#
In the output above, the last line shows the HBA information. You can find the information for the other controller in the same way.
List the FCP devices on both controllers and, if the new LUNs are not yet visible, configure the storage target port on each controller:

Unix@sol# cfgadm -al -o show_FCP_dev c2 c4

Unix@sol# cfgadm -c configure c2::50080e8008cfb814
Unix@sol# cfgadm -c configure c4::50080e8008cfb814
Scanning for FC/SAN devices:

cfgadm -al : scan FC LUNs
devfsadm -c disk : make sure all the device files are created
tail /var/adm/messages : see system messages about the new LUNs
echo | format : get the new LUNs' information
ls -lrt /dev/rdsk | grep s2 | tail : list the most recently created device files
Note: The command 'luxadm probe' can also be used to scan FC LUNs.
If you are still unable to see the new LUN/disk and you have multipathing enabled, you can try resetting the HBA. (Do not try this on critical servers unless you are confident that multipathing is correctly configured.)
First, list the HBA device paths:

root@Unixarena-SOL11:~# luxadm -e port | grep CONNECTED
/devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0:devctl    CONNECTED
/devices/pci@1d,700000/SUNW,qlc@3/fp@0,0:devctl      CONNECTED

Then reset the HBA using the forcelip option:

root@Unixarena-SOL11:~# luxadm -e forcelip /devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0:devctl
The forcelip command can be issued against the controller names as well:

Unix@sol# cfgadm -al -o show_FCP_dev | grep fc-fabric
c2    fc-fabric    connected    configured    unknown
c4    fc-fabric    connected    configured    unknown
Unix@sol#
Unix@sol# luxadm -e forcelip /dev/cfg/c2
After the HBA reset, re-check the fabric with cfgadm -al. Make sure the disks didn't lose any SAN paths after the HBA reset. If everything seems to be working, then issue the forcelip command to the other controller as well.
If you are still not able to see the new FC/SAN LUNs, then, as a last resort, reboot the server and try again. Once you see the new LUN, make sure that you also see all FC paths; a minimum of two FC paths is required for SAN disks.
You can verify the FC paths to a disk using luxadm display:

root@Unixarena-SOL11:~# luxadm display /dev/rdsk/c1txxxxxxd0s2
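If Solaris I/O multipathing (MPxIO) is enabled, a hedged alternative for confirming that each LUN has the expected number of paths is mpathadm, which reports total and operational path counts per logical unit:

root@Unixarena-SOL11:~# mpathadm list lu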