Web Guide: VMware vSphere Best Practices for Pure Storage® FlashBlade™

Link to VMware Compatibility Guide

Purity Release   Certified vSphere Version(s)
1.0.0            -
1.1.0            6.0-7.0 (all updates)
1.2.0            -
2.0.3            6.0-7.0 (all updates)
2.4.0            6.0-7.0 (all updates)
3.0.0            -
3.1.0            -
3.2.0            6.7-7.0 (all updates)

FlashBlade Connectivity

FlashBlade™ client data is served via four 40Gb/s QSFP+ or 32 10Gb/s Ethernet ports. While it is beyond the scope of this document to describe and analyze available network technologies, at a minimum two network uplinks (one from each Fabric Module) are recommended. Each uplink should be connected to a different LAN switch. This network topology protects against switch, Fabric Module (FM), and individual network port failures.

BEST PRACTICE: Provide at least two network uplinks (one per Fabric Module).

An example of a high-performance, high-redundancy network configuration with four FlashBlade uplinks and Cisco UCS is shown in Figure 5. Note that the Cisco Nexus switches are configured with a virtual Port Channel (vPC).

BEST PRACTICE: Separate the storage network from other networks.

fb5.png

Figure 5

ESXi Host Connectivity

ESXi host connectivity to NAS devices is provided by Virtual switches with VMkernel adapters and port groups. The Virtual switch must have at least one physical adapter (vmnic) assigned. While it is possible to connect ESXi hosts to an NFS datastore with a single vmnic, this configuration does not protect against NIC failures. Whenever possible, create a Virtual switch and assign two vmnics to each dedicated VMkernel adapter.

BEST PRACTICE: Assign two vmnics to a dedicated VMkernel adapter and Virtual switch.

Additionally, to reduce the size of the Ethernet broadcast domain, connections should be configured using separate VLANs and IP subnets. By default, the ESXi host directs NFS data traffic through a single NIC. Therefore, a single NIC’s bandwidth, even in multi-vmnic Virtual switch configurations, is the limiting factor for NFS datastore I/O operations.

PLEASE READ: An NFS datastore connection is limited by a single NIC’s bandwidth.

Network traffic load balancing for Virtual switches with multiple vmnics may be configured by changing the load-balancing policy – see the VMware Load Balancing section.
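
A quick way to confirm the vmnic assignment, teaming policy, and VMkernel addressing from the ESXi command line is shown below; vSwitch1 is a hypothetical switch name.

# List physical adapters (vmnics) and their link state
esxcli network nic list

# Show the uplinks, teaming policy, and MTU of a Virtual switch
esxcli network vswitch standard list --vswitch-name=vSwitch1

# Show VMkernel adapters and their IPv4 configuration
esxcli network ip interface ipv4 get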

ESXi Virtual Switch Configuration

A basic recommended ESXi Virtual switch configuration is shown in Figure 6.

fb6.png

Figure 6

For ESXi hosts with two high-bandwidth Network Interface Cards, adding a second VMkernel port group will increase I/O parallelism – see Figure 7 and the Datastores section for additional details. Note that the two VMkernel port groups are on different IP subnets.

fb7.png

Figure 7

For ESXi hosts with four or more high bandwidth Network Interface Cards, it is recommended to create a dedicated Virtual switch for each pair of NICs – see Figure 8.

Note that each Virtual switch and each VMkernel port group exists on a different IP subnet, as do the corresponding datastores. This configuration provides optimal connectivity to the NFS datastores by increasing I/O parallelism on the ESXi host as well as on FlashBlade – see the Datastores section for additional details.

fb8.png

Figure 8

VMware Load Balancing

VMware supports several load balancing algorithms for virtual switches:

  1. Route based on originating virtual port – network uplinks are selected based on the virtual machine port id – this is the default routing policy.
  2. Route based on source MAC hash – network uplinks are selected based on the virtual machine MAC address.
  3. Route based on IP hash – network uplinks are selected based on the source and destination IP address of each datagram.
  4. Route based on physical NIC load – uplinks are selected based on the load evaluation performed by the virtual switch; this algorithm is available only on vSphere Distributed Switch.
  5. Explicit failover – uplinks are selected based on the order defined in the list of Active adapters; no load balancing.

The Route based on originating virtual port and Route based on source MAC hash teaming and failover policies require virtual machine to Virtual switch connections. Therefore, they are not appropriate for VMkernel-only Virtual switches and NFS datastores. The Route based on IP hash policy is the only applicable teaming option.

Route based on IP hash load balancing ensures the egress network traffic is directed through one vmnic and ingress through another.

The Route based on IP hash teaming policy also requires configuration changes on the network switches. The procedure to properly set up link aggregation is beyond the scope of this document. The following VMware Knowledge Base article provides additional details and examples for configuring EtherChannel / Link Aggregation Control Protocol (LACP) with ESXi/ESX and Cisco/HP switches:

https://kb.vmware.com/s/article/1004048

For the steps required to change VMware’s load balancing algorithm, see Appendix A.

Datastores

The performance of FlashBlade™ and its DirectFlash™ modules (blades) does not depend on the number of file systems created and exported to the hosts. However, each host connection is subject to an internal 10Gb/s bandwidth limit between the Fabric Module and the DirectFlash module (blade) – see Figure 2. While data is distributed among multiple blades, a single DirectFlash module provides the host-to-storage network connection. The blade that services a specific host connection is selected by a hashing function. This methodology minimizes the possibility of the same blade being used by multiple hosts. For instance, one connection from a host may be internally routed to blade 1, whereas another connection from the same host may be internally routed to blade 2 for storage access.

The number of datastores connected to an ESXi host depends on the number of available network interfaces (NICs), bandwidth, and performance requirements. To take full advantage of FlashBlade parallelism, create or mount at least one datastore per host per network connection.

BEST PRACTICE: Create or mount at least one datastore per host per network connection

The basic ESXi host single datastore connection is shown in Figure 9.

fb9.png

Figure 9

For servers with high-bandwidth NICs (40Gb/s or higher), create two or more VMkernel port groups per Virtual switch and assign an IP address to each port group. These IP addresses need to be on different subnets. In this configuration, the connection to each exported file system is established using a dedicated VMkernel port group and the corresponding NICs. This configuration is shown in Figure 10.

fb10.png

Figure 10

For servers with four or more high-bandwidth network adapters, create a dedicated Virtual switch for each pair of vmnics. The VMkernel port groups need to have IP addresses on different subnets. This configuration parallelizes ESXi host connectivity as well as internal FlashBlade connectivity. See Figure 11.

fb11.png

Figure 11

BEST PRACTICE: Mount datastores on all hosts.
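
The following command-line sketch illustrates this layout on one ESXi host. The VIP addresses (10.25.0.10 and 10.26.0.10), export paths (/DS10 and /DS11), and datastore names are hypothetical; each datastore is reached through a FlashBlade data VIP on a different subnet, and the same commands should be repeated on every host so that all datastores are mounted everywhere.

# Datastore DS10 is reached through the VIP on the first subnet
esxcli storage nfs add --host=10.25.0.10 --share=/DS10 --volume-name=DS10

# Datastore DS11 is reached through the VIP on the second subnet
esxcli storage nfs add --host=10.26.0.10 --share=/DS11 --volume-name=DS11

# Confirm that both NFS datastores are mounted
esxcli storage nfs list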

FlashBlade Configuration

The configuration of the FlashBlade includes the creation of the subnet and network interfaces for host connectivity.

All the tasks may be accomplished using FlashBlade’s web-based HTML 5 user interface (no client installation required), the command line, or the REST API.

Configuring Client Connectivity

Create subnet for client (NFS, SMB, HTTP/S3) connectivity

  1. Command Line Interface (CLI)
puresubnet create --prefix <subnet/mask> --vlan <vlan_id> <vlan_name>

Example:

puresubnet create --prefix 10.25.0.0/16 --vlan vlan2025
  2. Graphical User Interface (GUI) - see Figure 12.
    1. Select Settings in the left pane.
    2. Select Network.
    3. Select the ‘+’ sign at the top-right of the Subnets header.
    4. Provide values in the Create Subnet dialog window.
      1. Name – subnet name.
      2. Prefix – network address for the subnet with the subnet mask length.
      3. Gateway – optional IP address of the gateway.
      4. MTU – optional Maximum Transmission Unit size; the default is 1500, change to 9000 for jumbo frames - see also Appendix B.
    5. Click Create.

fb12.png

Figure 12

Create a Virtual Network Interface, Assign it to the Existing VLAN

  1. Command Line Interface (CLI):
purenetwork create vip --address <IP_address> --servicelist data <name>

Example:

purenetwork create vip --address 10.25.0.10 --servicelist data subnet25_NIC
  2. Using Graphical User Interface - see Figure 13.
    1. Select Settings in the left pane.
    2. Select the Add interface ‘+’ sign.
    3. Provide values in the Create Network Interface dialog box.
      1. Name – interface name.
      2. Address – IP address where file systems can be mounted.
      3. Services – not modifiable.
      4. Subnet – not modifiable.
    4. Click Create.

fb13.png

Figure 13

Creating and Exporting File System

Create and Export File System

  1. Command Line Interface (CLI):
purefs create --rules <rules> --size <size> File_System

Example:

purefs create --rules '*(rw,no_root_squash)' --size 78GB DS10

For existing file systems, modify the export rules (if necessary):

purefs setattr --rules <rules> File_System

Example:

purefs setattr --rules '*(rw,no_root_squash)' DS10

where --rules are standard NFS (FlashBlade supported) export rules, in the format ‘ip_address(options)’:

* (asterisk) – export the file system to all hosts.

rw – file system exported will be readable and writable.

ro – file system exported will be read-only.

root_squash – file system exported will be mapped to anonymous user ID when accessed by user root.

no_root_squash – file system exported will not be mapped to anonymous ID when accessed by user root.

fileid_32bit – file system exported will support clients that require 32-bit inode support.

Add the desired protocol to the file system:

purefs add --protocol <protocol> File_System

Example:

purefs add --protocol nfs DS10

Optionally enable fast-remove and/or snapshot options:

purefs enable --fast-remove-dir --snapshot-dir File_System
  2. Using Graphical User Interface – see Figure 14.
    1. Select Storage in the left pane.
    2. Select File Systems and the ‘+’ sign.
    3. Provide values in the Create File System dialog.
      1. File system Name
      2. Provisioned Size
      3. Select unit (K, M, G, T, P)
      4. Optionally enable Fast Remove.
      5. Optionally enable Snapshot.
      6. Enable NFS.
      7. Provide Export Rules [*(rw,no_root_squash)].
    4. Click Create.

For ESXi hosts, the rw,no_root_squash export rules are recommended. It is also recommended to export the file system to all hosts (include * in front of the parentheses). This allows the NFS datastores to be mounted on all ESXi hosts.

BEST PRACTICE: Use *(rw,no_root_squash) rule for exporting file systems to ESXi hosts.
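
If a file system should not be visible to every client, the same rule format described above accepts a client address in place of the asterisk. The address and file system name below are placeholders only; for ESXi datastores the *(rw,no_root_squash) rule above remains the recommendation so that every host can mount the datastore.

purefs setattr --rules '10.25.0.21(rw,no_root_squash)' DS10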

fb14.png

Figure 14

ESXi Host Configuration

The basic ESXi host configuration consists of creating a dedicated Virtual switch and datastore.

Creating Virtual Switch

To create a Virtual switch and NFS-based datastores using the vSphere Web Client, follow the steps below:

  1. Create a vSwitch – see Figure 15.
    a. Select the Hosts tab, Host ➤ Configure (tab) ➤ Virtual switches ➤ Add host networking icon.

fb15.png

Figure 15

b. Select connection type: Select VMkernel Network Adapter – see Figure 16.

fb16.png

Figure 16

c. Select target device: New standard switch - see Figure 17.

fb17.png

Figure 17

d. Create a Standard Switch: Assign free physical network adapters to the new switch (click green ‘+’ sign and select an available active adapter (vmnic)) – see Figure 18.

fb18.png

Figure 18

e. Select Next when finished assigning adapters.

f. Port properties – see Figure 19.

i. Network label (for example: VMkernelNFS)

ii. VLAN ID: leave at default (0) if you are not planning to tag the outgoing network frames.

iii. TCP/IP stack: Default

iv. Available service: all disabled (unchecked).

fb19.png

Figure 19

g. IPv4 settings – see Figure 20.

i. IPv4 settings: Use static IPv4 settings.

ii. Provide the IP address and the corresponding subnet mask.

iii. Review settings and finish creating the Virtual switch.

fb20.png

Figure 20

2. Optionally, verify connectivity from the ESXi host to the FlashBlade file system.

a. Log in as root to the ESXi host.

b. Issue the vmkping command.

vmkping <destination_ip>
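
The same Virtual switch can also be built from the ESXi command line. The sketch below is a hedged equivalent of steps a through g; the switch name (vSwitch2), port group name (VMkernelNFS), adapters (vmnic2, vmnic3), VMkernel interface (vmk2), and IP address are hypothetical and should be replaced with site-specific values.

# Create the standard switch and assign two physical adapters
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch2

# Create the port group (VLAN ID left at the default of 0)
esxcli network vswitch standard portgroup add --portgroup-name=VMkernelNFS --vswitch-name=vSwitch2

# Create the VMkernel adapter and assign a static IPv4 address
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VMkernelNFS
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.25.0.21 --netmask=255.255.0.0 --type=static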

Creating Datastore

1. Select the Hosts tab, Host ➤ Datastores ➤ New Datastore - see Figure 21.

fb21.png

Figure 21

2. New Datastore - see Figure 22.

a. Type: NFS

fb22.png

Figure 22

3. Select NFS version: NFS 3 - see Figure 23.

fb23.png

Figure 23

a. Datastore name: a friendly name for the datastore (for example: DS10) – see Figure 24.

b. Folder: specify the folder where the corresponding file system was created on FlashBlade – see Creating and Exporting File System.

c. Server: IP address or FQDN of the data VIP on the FlashBlade.

fb24.png

Figure 24

When mounting an NFS datastore on multiple hosts, you must use the same server FQDN or IP address and the same datastore name. If using an FQDN, ensure that the DNS records have been updated and that the ESXi hosts are configured with the IP address of the DNS server.

BEST PRACTICE: Mount NFS datastores using IP addresses.

Mounting an NFS datastore using an IP address instead of an FQDN removes the dependency on the availability of DNS servers.

ESXi NFS Datastore Configuration Settings

Adjust the following ESXi parameters on each ESXi server (see Table 1):

  • NFS.MaxVolumes – Maximum number of NFS mounted volumes (per host)
  • Net.TcpipHeapSize – Initial TCP/IP heap size in MB
  • Net.TcpipHeapMax – Maximum TCP/IP heap size in MB
  • SunRPC.MaxConnPerIp – Maximum number of unique TCP connections per IP address

Parameter             Default Value   Maximum Value   Recommended Value
NFS.MaxVolumes        8               256             256
Net.TcpipHeapSize     0 MB            32 MB           32 MB
Net.TcpipHeapMax      512 MB          1536 MB         512 MB
SunRPC.MaxConnPerIp   4               128             128

Table 1

SunRPC.MaxConnPerIp should be increased to avoid sharing TCP connections between the host and the NFS datastores. With a maximum of 256 NFS datastores and 128 unique TCP connections, connection sharing is forced when the NFS datastore limit is reached.

The settings listed in Table 1 must be adjusted on each ESXi host using the vSphere Web Client (Advanced System Settings) or the command line, and may require a reboot.

Changing ESXi Advanced System Settings

To change ESXi advanced system settings using vSphere Web Client GUI – see Figure 25.

  1. Select Host (tab)➤Host➤Configure➤Advanced System Settings➤Edit.
  2. In the Edit Advanced System Settings window, use the search field to locate the required parameter, modify its value, and click OK.
  3. Reboot if required.

fb25.png

Figure 25

To change Advanced System Settings using esxcli:

esxcli system settings advanced set --option="/SectionName/OptionName" --int-value=<value>

Example:

esxcli system settings advanced set --option="/NFS/MaxVolumes" --int-value=16
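
As a convenience, the sketch below applies all of the recommended values from Table 1 on a single host; run it on each ESXi host and reboot if required.

esxcli system settings advanced set --option="/NFS/MaxVolumes" --int-value=256
esxcli system settings advanced set --option="/Net/TcpipHeapSize" --int-value=32
esxcli system settings advanced set --option="/Net/TcpipHeapMax" --int-value=512
esxcli system settings advanced set --option="/SunRPC/MaxConnPerIp" --int-value=128

# Verify a setting
esxcli system settings advanced list --option="/NFS/MaxVolumes"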

Virtual Machine Configuration

For virtual machines residing on FlashBlade-backed NFS datastores, only thin provisioning is available; however, FlashBlade does not support thin provisioned disks at this time. Support for thin provisioning will be added in the future.

fb26.png

Figure 26

Based on VMware recommendations, additional disks (non-root on Linux, or other than C:\ on Windows) should be connected to a VMware Paravirtual SCSI controller.

Snapshots

Snapshots provide a convenient means of creating a recovery point and can be enabled on FlashBlade on a per-file-system basis. The snapshots themselves are located in the .snapshot directory of the exported file systems. The content of the .snapshot directory may be copied to a different location, providing a recovery point. To recover a virtual machine using a FlashBlade snapshot:

1. Mount the .snapshot directory with the ‘Mount NFS as read-only’ option on the host where you would like to recover the virtual machine – see Figure 27.

fb27.png

Figure 27

2. Select the newly mounted datastore and browse the files to locate the directory where the virtual machine files reside - see Figure 28.

3. Select and copy all virtual machine files to another directory on a different datastore.

fb28.png

Figure 28

4. Register the virtual machine by selecting Host ➤ Datastore ➤ Register VM and browsing to the new location of the virtual machine files – see Figure 29.

fb29.png

Figure 29

5. Unmount the datastore mounted in step 1.
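
As a command-line alternative to steps 1 and 5, the snapshot export can be mounted read-only and later removed with esxcli. The VIP (10.25.0.10), share path, and temporary datastore name (DS10-snap) below are hypothetical; the exact path under .snapshot depends on the file system and snapshot names.

# Mount the snapshot directory as a read-only NFS datastore (step 1)
esxcli storage nfs add --host=10.25.0.10 --share=/DS10/.snapshot --volume-name=DS10-snap --readonly

# ...copy the virtual machine files and register the VM (steps 2-4)...

# Unmount the temporary datastore when the recovery is complete (step 5)
esxcli storage nfs remove --volume-name=DS10-snap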

VMware managed snapshots are fully supported.

Conclusion

While the recommendations and suggestions outlined in this paper do not cover all possible ESXi and FlashBlade implementation details and configuration settings, they should serve as a guideline and provide a starting point for NFS datastore deployments. Continuous data collection and analysis of the network, the active ESXi hosts, and FlashBlade performance characteristics are the best way to determine what changes may be required to deliver the most reliable, robust, high-performing virtualized compute service.

BEST PRACTICE: Always monitor your network, FlashBlade, and ESXi hosts.

Appendix A

Changing network load balancing policy – see Figure A1.

To change the network load balancing policy using the command line:

esxcli network vswitch standard policy failover set -l iphash -v <vswitch-name>

Example:

esxcli network vswitch standard policy failover set -l iphash  -v vSwitch1

To change the network load balancing policy using the vSphere Web Client:

  1. Select Host (tab)➤Host➤Configure➤Virtual switches
  2. Select the switch➤Edit (Pencil)
  3. Virtual switch Edit Setting dialog➤Teaming and failover➤Load Balancing➤Route based on IP hash

fb_a1.png

Figure A1
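
To confirm the change from the command line, display the active teaming policy for the switch (vSwitch1 is a hypothetical switch name):

esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1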

Appendix B

While the typical Ethernet (IEEE 802.3 Standard) Maximum Transmission Unit is 1500 bytes, larger MTU values are also possible. Both FlashBlade and ESXi provide support for jumbo frames with an MTU of 9000 bytes.

FlashBlade Configuration

Create subnet with MTU 9000

1. Command Line Interface (CLI):

puresubnet create --prefix <subnet/mask> --vlan <vlan_id> --mtu <mtu> vlan_name

Example:

puresubnet create --prefix 10.25.64.0/21 --vlan vlan2064 --mtu 9000

2. Graphical User Interface (GUI) - see Figure B1

i. Select Settings in the left pane

ii. Select Network and ‘+’ sign next to “Subnets”

iii. Provide values in Create Subnet dialog window changing MTU to 9000

iv. Click Save

fb_b1.png

Figure B1

Change an existing subnet MTU to 9000

  1. Select Settings in the left pane
  2. Select Edit Subnet icon
  3. Provide new value for MTU
  4. Click Save

ESXi Host Configuration

Jumbo frames need to be enabled on a per-host and per-VMkernel switch basis. Only command line configuration examples are provided below.

1. Login as root to the ESXi host

2. Modify MTU for the NFS datastore vSwitch

esxcfg-vswitch -m <MTU> <vSwitch>

Example:

esxcfg-vswitch -m 9000 vSwitch2

3. Modify MTU for the corresponding port group

esxcfg-vmknic -m <MTU> <portgroup_name>  

Example:

esxcfg-vmknic -m 9000 VMkernel2vs  

4. Verify connectivity between the ESXi host and the NAS device using jumbo frames:

  vmkping -s 8784 -d <destination_ip>

Example:

vmkping -s 8784 -d 192.168.1.10

The -d option disables datagram fragmentation and the -s option defines the size of the ICMP data. ESXi does not support an MTU greater than 9000 bytes; with 216 bytes reserved for headers, the effective data size should be 8784 bytes.
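
On recent ESXi releases the same MTU changes can also be made with esxcli. The sketch below is a hedged equivalent of the esxcfg commands above; it assumes the Virtual switch is vSwitch2 and the corresponding VMkernel adapter is vmk2.

# Set the Virtual switch MTU to 9000
esxcli network vswitch standard set --mtu=9000 --vswitch-name=vSwitch2

# Set the VMkernel adapter MTU to 9000
esxcli network ip interface set --mtu=9000 --interface-name=vmk2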

References

  • Best Practices for Running vSphere on NFS Storage – https://www.vmware.com/techpapers/2010/best-practices-for-running-vsphere-on-nfs-storage-10096.html
  • Best Practices for running VMware vSphere on Network Attached Storage – https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-nfs-bestpractices-white-paper-en.pdf
  • NFS Best Practices – Part 1: Networking – https://cormachogan.com/2012/11/26/nfs-best-practices-part-1-networking/
  • NFS Best Practices – Part 2: Advanced Settings – https://cormachogan.com/2012/11/27/nfs-best-practices-part-2-advanced-settings/