How-To: Manually Restoring a Virtual Machine in VMware

Overview

Sometimes a VMware virtual machine is accidentally deleted, or content in the VM is deleted or edited, and the end user wants to restore it. A VMware virtual machine can be restored using FlashArray volume snapshots. This guide walks you through restoring a single virtual machine in an ESXi 6.x environment using this method.

In Pure Storage's vSphere plugin release 5.2.0, this process is greatly simplified by a VMFS workflow built into the plugin; see the plugin documentation for details on that workflow.

This article covers restoring a virtual machine from a Pure Storage FlashArray snapshot of a VMFS datastore only. It does not apply to VMs on vVols, to third-party snapshot recovery processes, or to Pure Storage FlashBlade. For restoring or undeleting a VM on a vVol datastore, in vSphere plugin release 5.1.0 the process is simplified with a built-in workflow.


How to Restore the Virtual Machine

Please follow the steps outlined below to successfully restore a virtual machine from a Pure Storage FlashArray snapshot:

  1. Identify the VMware datastore that contained the problematic VM before the issue occurred. This can be done in several places: the ESXi host CLI, the vCenter GUI, the FlashArray CLI, or the FlashArray GUI.
  2. Here are some examples:
    Please note that these differ from the snapshot and VM example used later in this article; they simply illustrate correlating a datastore to a FlashArray volume.
    vCenter GUI
    1. Navigate to the Datastore Tab and select the Datastore you want, then click on the Configure Tab and Device Backing Option
      [Image: vCenter GUI - Datastore Device Backing]
    2. Here you'll see that the Device Backing is naa.624a937098d1ff126d20469c000199eb
    3. On the array, you will be looking for the Volume with the Serial 98d1ff126d20469c000199eb.
    FlashArray GUI

    This is a little less specific than using the FlashArray CLI. You'll need to know the name of the volume, or you may need to look at a few volumes.

    1. Log in to the FlashArray GUI, go to the Storage tab, and then the Volumes tab.
    2. From here, click on the volume you want to confirm correlates to that datastore.
    3. In this example, we're looking at the FlashArray volume that correlates to the device backing naa.624a937098d1ff126d20469c000199eb.
      [Image: FlashArray GUI - Device Backing]
    4. Note the Serial matches: 98d1ff126d20469c000199eb

    The easiest way to do this is from the ESXi CLI and FlashArray CLI.

    ESXi CLI:
    Locate the Datastore via esxcfg-scsidevs
    [root@ESXi-4:~] esxcfg-scsidevs -m
    naa.624a937073e940225a2a52bb0003ae71:3                           /vmfs/devices/disks/naa.624a937073e940225a2a52bb0003ae71:3 5b6b537a-4d4d8368-9e02-0025b521004f  0  ESXi-4-Boot-Lun
    naa.624a9370bd452205599f42910001edc7:1                           /vmfs/devices/disks/naa.624a9370bd452205599f42910001edc7:1 5b7d8183-f78ed720-ddf0-0025b521004d  0  sn1-405-25-Content-Library-Datastore
    naa.624a9370bd452205599f42910001edc8:1                           /vmfs/devices/disks/naa.624a9370bd452205599f42910001edc8:1 5b7d8325-b1db9568-4d28-0025b521004d  0  sn1-405-25-Datastore-1-LUN-150
    naa.624a937098d1ff126d20469c000199ea:1                           /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199ea:1 5b7d78d7-2b993f30-6902-0025b521004d  0  sn1-405-21-ISO-Repository
    naa.624a937098d1ff126d20469c000199eb:1                           /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199eb:1 5b7d8309-56bd3d78-a081-0025b521004d  0  sn1-405-21-Datastore-1-LUN-100
    naa.624a937098d1ff126d20469c0001aad1:1                           /vmfs/devices/disks/naa.624a937098d1ff126d20469c0001aad1:1 5b8f115f-2b499358-c2e1-0025b521004d  0  prod-sn1-405-c12-21-SRM-Placeholder
    naa.624a937098d1ff126d20469c0001ae66:1                           /vmfs/devices/disks/naa.624a937098d1ff126d20469c0001ae66:1 5b901f7e-a6bc0094-c0a3-0025b521003c  0  prod-sn1-405-c12-21-SRM-Datastore-1
    naa.624a937098d1ff126d20469c00024c2e:1                           /vmfs/devices/disks/naa.624a937098d1ff126d20469c00024c2e:1 5b96f277-04c3317f-85db-0025b521003c  0  Syncrep-sn1-405-prod-srm-datastore-1
    naa.624a937098d1ff126d20469c00024c33:1                           /vmfs/devices/disks/naa.624a937098d1ff126d20469c00024c33:1 5b96f28b-57eedbc0-ce59-0025b521004d  0  Syncrep-sn1-405-dev-srm-datastore-1
    naa.624a9370bd452205599f42910003f8d8:1                           /vmfs/devices/disks/naa.624a9370bd452205599f42910003f8d8:1 5ba8fbd3-f5f7b06e-5286-0025b521004f  0  Syncrep-sn1-405-prod-srm-datastore-2
    [root@ESXi-4:~]
    [root@ESXi-4:~] esxcfg-scsidevs -m |grep "sn1-405-21-Datastore-1-LUN-100"
    naa.624a937098d1ff126d20469c000199eb:1                           /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199eb:1 5b7d8309-56bd3d78-a081-0025b521004d  0  sn1-405-21-Datastore-1-LUN-100
    FlashArray CLI
    pureuser@sn1-405-c12-21> purevol list
    Name                                                        Size  Source                                                          Created                  Serial
    dev-sn1-405-21-Datastore-1-LUN-41                           5T    -                                                               2018-09-04 10:42:27 PDT  98D1FF126D20469C0001A6A1
    prod-sn1-405-21-Datastore-1-LUN-100                         15T   -                                                               2018-08-22 09:07:13 PDT  98D1FF126D20469C000199EB
    prod-sn1-405-21-Prod-Cluster-RDM-FileShare-1                10T   -                                                               2018-08-24 20:24:02 PDT  98D1FF126D20469C00019ABD
    prod-sn1-405-21-Prod-Cluster-RDM-Quorum-Witness             1G    -                                                               2018-08-24 20:23:37 PDT  98D1FF126D20469C00019ABC
    prod-sn1-405-21-srm-datastore-1                             5T    sn1-405-c12-25:prod-sn1-405-21-srm-datastore-1-puresra-demoted  2018-09-05 11:24:15 PDT  98D1FF126D20469C0001AE66
    prod-sn1-405-21-srm-placeholder                             100G  -                                                               2018-09-04 16:08:15 PDT  98D1FF126D20469C0001AAD1
    pureuser@sn1-405-c12-21>
    pureuser@sn1-405-c12-21> purevol list prod-sn1-405-21-Datastore-1-LUN-100
    Name                                 Size  Source  Created                  Serial
    prod-sn1-405-21-Datastore-1-LUN-100  15T   -       2018-08-22 09:07:13 PDT  98D1FF126D20469C000199EB
    

    Similar to the GUI, we can match the datastore's backing device (the naa identifier) to the FlashArray volume serial number to confirm this is the datastore and volume we need to work with.
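
    As an optional cross-check from the ESXi CLI (an illustrative sketch; the datastore name matches the earlier example), esxcli can also report the backing device for a datastore:
    [root@ESXi-4:~] esxcli storage vmfs extent list | grep "sn1-405-21-Datastore-1-LUN-100"
    The extent list output includes the naa device name backing the datastore, which can then be matched to the FlashArray volume serial as above.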

  3. Now that you have the datastore-to-volume mapping, determine the snapshot on the FlashArray that you would like to restore from:
    Please note that from this point on, a different datastore and array are used as the example.
    pureuser@slc-405> purevol list slc-production --snap
    Name                 Size  Source          Created                  Serial
    slc-production.4674  500G  slc-production  2017-01-17 11:26:18 MST  309582CAEE2411F900011242
  4. After the snapshot has been identified, create a new volume from the snapshot:
    pureuser@slc-405> purevol copy slc-production.4674 slc-production-recovery
    Name                     Size  Source          Created                  Serial
    slc-production-recovery  500G  slc-production  2017-01-17 11:26:56 MST  309582CAEE2411F900011243
  5. Confirm the new volume has been created based on the snapshot listed:
    pureuser@slc-405> purevol list slc-production-recovery
    Name                     Size  Source          Created                  Serial
    slc-production-recovery  500G  slc-production  2017-01-17 11:26:18 MST  309582CAEE2411F900011243

    The 'Created' date and time on the newly created volume should match the timestamp of when the snapshot was created.

  6. Map the newly created volume to the ESXi host where you would like to deploy the virtual machine you are restoring:
    pureuser@slc-405> purevol connect --hgroup ESXi-HG slc-production-recovery
    Name                     Host Group  Host       LUN
    slc-production-recovery  ESXi-HG     slc-esx-1  253
  7. Perform a rescan on the ESXi host that the newly created volume was presented to in order to complete presentation of the LUN.
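
    As an illustrative sketch, the rescan can also be performed from the ESXi CLI (the host name here follows the earlier example and is a placeholder; substitute your own host):
    [root@slc-esx-1:~] esxcli storage core adapter rescan --all
    [root@slc-esx-1:~] esxcli storage core device list | grep 624a937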
  8. Add the recovery LUN as a datastore to the ESXi host(s) you plan on performing the recovery on:

    [Image: recovery-lun-snapshot]

    Note in the screenshot above that the recovery LUN is identified as a snapshot of the 'slc-production' datastore we will be recovering.
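
    As an optional check from the ESXi CLI (illustrative; output varies by environment), unresolved VMFS snapshot copies can also be listed before mounting:
    [root@slc-esx-1:~] esxcli storage vmfs snapshot list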

  9. While creating the datastore, ensure you choose the 'Assign a new signature' option.

    [Image: Assigning-Signature]

    It is not uncommon for the resignature process to take several minutes to complete. If this task does not complete after 10 minutes, engage additional resources for assistance.
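
    If you prefer the CLI, the resignature can also be performed with esxcli; the sketch below assumes the 'slc-production' label from this example, so adjust the label to your own datastore:
    [root@slc-esx-1:~] esxcli storage vmfs snapshot resignature -l slc-production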

  10. Once the datastore creation has completed, you will note that the datastore name is in the following format: 'snap-hexNumbers-originalDatastoreName'. The image below is an example of what this specific restore datastore looks like:
    [Image: datastore-snapshot]
  11. With the recovery datastore highlighted, locate the 'Actions' wheel and click on 'Register VM...' to locate the VM that needs to be restored.
    [Image: Actions]
  12. After you have located the VM in need of restoration, step through the VMware prompts to add the VM to the ESXi host inventory.

    While registering the recovery virtual machine to the ESXi host, if the original VM is still live on that host, be sure to rename the recovery VM. If you do not, you will have two VMs with the same name and will need to look at the underlying datastore properties to determine which VM is the recovery copy and which is the original.

  13. Once the recovery VM is listed in the ESXi host inventory, power it on and confirm that it contains the required data and is accessible as expected. If the original VM is still in the ESXi host inventory, ensure it is powered off so that no conflicts are encountered.

    When powering on the recovery VM you may be asked whether the VM was 'Copied' or 'Moved'. If the original VM has already been destroyed and is no longer in inventory, you can safely choose 'I moved it'. If the original VM has not been deleted and will remain in place for some time, select the 'I copied it' option so that there is no conflict in UUIDs between the VMs.

  14. Once the recovery VM has been powered on and data integrity is confirmed, you can storage vMotion the VM from the 'snap-hexNumber-OriginalDatastoreName' datastore back to the original datastore (if the original datastore will still be used). Otherwise, if the customer is going to destroy the old datastore, you can simply rename the recovery datastore and use it as needed.
  15. If the customer decides to keep the original datastore, and the storage vMotion of the recovery VM has completed, you can safely unmap and clean up the recovery volume as needed; a hedged CLI sketch follows below. If they are going to keep the newly created recovery datastore, no clean-up is required and you can simply rename the LUN as needed.
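
    As an illustrative sketch of the FlashArray-side clean-up described in step 15 (volume and host group names match the earlier example), the recovery volume can be disconnected and then destroyed once the recovery is confirmed complete, the recovery datastore has been unmounted, and the hosts rescanned. Note that 'purevol destroy' moves the volume to the destroyed items bucket and 'purevol eradicate' permanently removes it:
    pureuser@slc-405> purevol disconnect --hgroup ESXi-HG slc-production-recovery
    pureuser@slc-405> purevol destroy slc-production-recovery
    pureuser@slc-405> purevol eradicate slc-production-recovery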

References

  • VMFS Snapshots and the FlashArray Part VII: Restoring a VM

How-to: Identifying the LUN from the NAA Identifier

Problem

You may need to identify the LUN (Logical Unit Number) from an NAA (Network Addressing Authority) identifier for a number of processes. (One or more digits may reside between the first set of six digits and the last set of 24 digits.)

Solution

The LUN serial number is included as part of the NAA. From the serial number, you can then identify the LUN ID.

The first six digits in an NAA refer to Pure Storage. The last 24 digits refer to the LUN’s serial number.

Example NAA: naa.624a9370a78e6e1d4bacd0960001001a

In the above example:

The first six digits, 624a93, are the identifier for Pure Storage, verifying that the device originates from a Pure Storage array.

The last 24 digits, a78e6e1d4bacd0960001001a, are the LUN's serial number.

(In this example, two digits, 70, sit between the two sets of digits that are relevant here.)

From a list of Pure Storage volumes, identify the LUN from its serial number. For example, to identify the LUN with the serial number above, you could enter the following command:

pureuser-ct0:~# purevol list 

This result would include the following row (other volumes omitted):

Name         Size  Source  Created                  Serial
CLFDEV01_26  2T    -       2014-05-27 15:05:41 EDT  A78E6E1D4BACD0960001001A

In this result, the first field, CLFDEV01_26, is the volume (LUN) name, which corresponds to both the serial number and the NAA.
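
Assuming shell access on the controller as shown in the prompt above, you can also filter the volume list by the serial number taken from the NAA (an illustrative shortcut):

pureuser-ct0:~# purevol list | grep -i a78e6e1d4bacd0960001001a
CLFDEV01_26  2T  -  2014-05-27 15:05:41 EDT  A78E6E1D4BACD0960001001A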

For information on identifying LUN IDs from a FlashArray, see Identifying LUNs (Logical Unit Numbers) from Purity to Correlate to a Host.

How to Scan for New FC Devices and Gather LUN Information on Solaris 10/11

Problem

This guide shows how to rescan and examine FC LUNs on Solaris 10/11. This is required for detecting new LUNs and assists with basic storage connectivity troubleshooting.

Impact

Rescanning FC LUNs on Solaris is generally non-disruptive in a well-configured environment under normal load.

Solution

Understanding How FC is Configured on a Solaris Host

The following commands are useful for general SAN stack fact-finding on Solaris:

  1. List the connected HBAs:
root@Unixarena-SOL11:~# luxadm -e port |grep CONNECTED
/devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0:devctl CONNECTED
/devices/pci@1d,700000/SUNW,qlc@3/fp@0,0:devctl CONNECTED
  2. Verify FC ports are connected and configured:
Unix@sol# cfgadm -al -o show_FCP_dev |grep fc-fabric
c2                             fc-fabric    connected    configured   unknown
c4                             fc-fabric    connected    configured   unknown
Unix@sol#
  3. To find the HBA's World Wide Name (WWN), use the fcinfo command:
Unix@sol# fcinfo hba-port |grep Port
HBA Port WWN: 10000000c884bb48
HBA Port WWN: 10000000c884bb49
HBA Port WWN: 10000000c884b85c
HBA Port WWN: 10000000c884b85d
Unix@sol#
  4. You can also find the WWN using the luxadm command if the HBA is already connected to the FC switch:
Unix@sol# luxadm -e dump_map /dev/cfg/c4
Pos  Port_ID Hard_Addr Port WWN         Node WWN         Type
0    29900   0        50080e8008cfb814 50080e8008cfb814 0x0  (Disk device)
1    27400   0        10000000c884b85c 20000000c884b85c 0x1f (Unknown Type,Host Bus Adapter)
Unix@sol#

From the above output, the last line shows the HBA information. In the same way, you can find the other controller's information as well.

  5. Zoning can be verified using the command below:
Unix@sol# cfgadm -al -o show_FCP_dev c2 c4
  6. If you see any controller port WWN shown as "unconfigured", you can initiate the FC session using the commands below:
Unix@sol# cfgadm -c configure c2::50080e8008cfb814
Unix@sol# cfgadm -c configure c4::50080e8008cfb814

Scanning for New FC Devices and Getting LUN Information:

Scanning for FC/SAN Devices

  • cfgadm -al: scan for FC LUNs
  • devfsadm -c disk: make sure all the device files are created
  • tail /var/adm/messages: see information about the new LUNs
  • echo | format: get information about the new LUNs
  • ls -lrt /dev/rdsk | grep s2 | tail: get information about the new LUNs

Note: The command luxadm probe can also be used to scan FC LUNs.
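
As an illustrative end-to-end sequence combining the commands above (the final grep assumes the disks report a PURE vendor string, which is typical for FlashArray LUNs):

root@Unixarena-SOL11:~# cfgadm -al                    # rescan for new FC LUNs
root@Unixarena-SOL11:~# devfsadm -c disk              # create device files for the new LUNs
root@Unixarena-SOL11:~# tail /var/adm/messages        # check for messages about the new LUNs
root@Unixarena-SOL11:~# echo | format | grep -i pure  # list disks, filtering on the assumed PURE vendor string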

Resetting the HBA

If you are still unable to see the new LUN/disk and you have multipathing enabled, you can try resetting the HBA. (Do not try this on critical servers unless you are confident that multipathing is correctly configured.)

  1. List the connected HBAs:
root@Unixarena-SOL11:~# luxadm -e port |grep CONNECTED
/devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0:devctl CONNECTED
/devices/pci@1d,700000/SUNW,qlc@3/fp@0,0:devctl CONNECTED
  2. Reset the HBA using the forcelip option:
root@Unixarena-SOL11:~# luxadm -e forcelip /devices/pci@1d,700000/SUNW,qlc@1,1/fp@0,0:devctl

The forcelip command can be issued to the controller names as well:

Unix@sol# cfgadm -al -o show_FCP_dev |grep fc-fabric
c2                             fc-fabric    connected    configured   unknown
c4                             fc-fabric    connected    configured   unknown
Unix@sol#
Unix@sol# luxadm -e forcelip /dev/cfg/c2
  3. Verify the controller status using cfgadm -al. Make sure the disks didn't lose any SAN paths after the HBA reset. If everything seems to be working, issue the forcelip command to the other controller as well.

If you are still not able to see the new FC/SAN LUNs, then, as a last resort, reboot the server and try again. Once you see the new LUN, make sure that you also see all FC paths. A minimum of two FC paths is required for SAN disks.

  4. To verify the FC LUN details and multipathing, enter the following command:
root@Unixarena-SOL11:~# luxadm display /dev/rdsk/c1txxxxxxd0s2
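
If MPxIO multipathing is in use, mpathadm offers an alternative way to confirm the path count (an illustrative sketch; the device path below is the same placeholder used above, so substitute the actual disk from 'echo | format'):

root@Unixarena-SOL11:~# mpathadm list lu
root@Unixarena-SOL11:~# mpathadm show lu /dev/rdsk/c1txxxxxxd0s2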
