
Hyper-V Administration for Acropolis

AOS 5.20

Product Release Date: 2021-05-17

Last updated: 2022-09-20

Node Management

Logging on to a Controller VM

If you need to access a Controller VM on a host that has not been added to SCVMM or Hyper-V Manager, use this method.

Procedure

  1. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
  2. Log on to the Controller VM.
    > ssh nutanix@192.168.5.254

    Accept the host authenticity warning if prompted.

Placing the Controller VM and Hyper-V Host in Maintenance Mode

It is recommended that you place the Controller VM and Hyper-V host into maintenance mode when performing any maintenance or patch installation for the cluster.

Before you begin

Migrate the VMs that are running on the node to other nodes in the cluster.

About this task

Caution: Verify the data resiliency status of your cluster. You can only place one node in maintenance mode for each cluster.

To place the Controller VM and Hyper-V host in maintenance mode, do the following.

Procedure

  1. Log on to the Controller VM with SSH and get the CVM host ID.
    nutanix@cvm$ ncli host ls
  2. Run the following command to place the CVM in maintenance mode.
    nutanix@cvm$ ncli host edit id=host_id enable-maintenance-mode=true
    Replace host_id with the CVM host ID.
  3. Log on to the Hyper-V host with Remote Desktop Connection and pause the Hyper-V host in the failover cluster using PowerShell.
    > Suspend-ClusterNode
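
For reference, the end-to-end sequence can be sketched as follows. The host ID and host name are hypothetical placeholders, and the -Drain parameter of Suspend-ClusterNode (which moves clustered roles off the node before pausing it) is an assumption to verify against the Microsoft documentation for your version of Windows Server.

nutanix@cvm$ ncli host ls
nutanix@cvm$ ncli host edit id=1234 enable-maintenance-mode=true
> Suspend-ClusterNode -Name HYPERV-HOST-1 -Drain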

Shutting Down a Node in a Cluster (Hyper-V)

Shut down a node in a Hyper-V cluster.

Before you begin

Shut down guest VMs that are running on the node, or move them to other nodes in the cluster.

In a Hyper-V cluster, you do not need to put the node in maintenance mode before you shut down the node. Shutting down or migrating the guest VMs that are running on the node, and then shutting down the CVM, is sufficient.

About this task

Caution: Verify the data resiliency status of your cluster. If the cluster has replication factor 2 (RF2), you can shut down only one node at a time. If more than one node of an RF2 cluster must be shut down, shut down the entire cluster instead.

Perform the following procedure to shut down a node in a Hyper-V cluster.

Procedure

  1. Log on to the Controller VM with SSH and shut down the Controller VM.
    nutanix@cvm$ cvm_shutdown -P now
    Note:

    Always use the cvm_shutdown command to reset or shut down the Controller VM. The cvm_shutdown command notifies the cluster that the Controller VM is unavailable.

  2. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
  3. Do one of the following to shut down the node.
    • > shutdown /s /t 0
    • > Stop-Computer -ComputerName localhost

    See the Microsoft documentation for up-to-date and additional details about how to shut down a Hyper-V node.

Starting a Node in a Cluster (Hyper-V)

After you start or restart a node in a Hyper-V cluster, verify that the Controller VM (CVM) is powered on and that the CVM is added to the metadata.

About this task

Perform the following steps to start a node in a Hyper-V cluster.

Procedure

  1. Power on the node. Do one of the following:
    • Press the power button on the front of the physical hardware server.
    • Use a remote tool such as iDRAC, iLO, or IPMI depending on your hardware.
  2. Log on to Hyper-V Manager and start PowerShell.
  3. Determine if the Controller VM is running.
    > Get-VM | Where {$_.Name -match 'NTNX.*CVM'}
    • If the Controller VM is off, a line similar to the following should be returned:
      NTNX-13SM35230026-C-CVM Stopped -           -             - Opera...

      Make a note of the Controller VM name in the second column.

    • If the Controller VM is on, a line similar to the following should be returned:
      NTNX-13SM35230026-C-CVM Running 2           16384             05:10:51 Opera...
  4. If the CVM is not powered on, power on the CVM by using Hyper-V Manager.
  5. Log on to the CVM with SSH and verify that the CVM is added back to the metadata.
    nutanix@cvm$ nodetool -h 0 ring

    The state of the IP address of the CVM you started must be Normal as shown in the following output.

    nutanix@cvm$ nodetool -h 0 ring
    Address         Status State      Load            Owns    Token                                                          
                                                              kV0000000000000000000000000000000000000000000000000000000000   
    XX.XXX.XXX.XXX  Up     Normal     1.84 GB         25.00%  000000000000000000000000000000000000000000000000000000000000   
    XX.XXX.XXX.XXX  Up     Normal     1.79 GB         25.00%  FV0000000000000000000000000000000000000000000000000000000000   
    XX.XXX.XXX.XXX  Up     Normal     825.49 MB       25.00%  V00000000000000000000000000000000000000000000000000000000000   
    XX.XXX.XXX.XXX  Up     Normal     1.87 GB         25.00%  kV0000000000000000000000000000000000000000000000000000000000
  6. Power on or failback the guest VMs by using Hyper-V Manager or Failover Cluster Manager.

Enabling 1 GbE Interfaces (Hyper-V)

If 10 GbE networking is specified during cluster setup, 1 GbE interfaces are disabled on Hyper-V nodes. Follow these steps if you need to enable the 1 GbE interfaces later.

About this task

To enable the 1 GbE interfaces, do the following on each host:

Procedure

  1. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
  2. List the network adapters.
    > Get-NetAdapter | Format-List Name,InterfaceDescription,LinkSpeed

    Output similar to the following is displayed.

    Name                 : vEthernet (InternalSwitch)
    InterfaceDescription : Hyper-V Virtual Ethernet Adapter #3
    LinkSpeed            : 10 Gbps
    
    Name                 : vEthernet (ExternalSwitch)
    InterfaceDescription : Hyper-V Virtual Ethernet Adapter #2
    LinkSpeed            : 10 Gbps
    
    Name                 : Ethernet
    InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection
    LinkSpeed            : 10 Gbps
    
    Name                 : Ethernet 3
    InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection #2
    LinkSpeed            : 10 Gbps
    
    Name                 : NetAdapterTeam
    InterfaceDescription : Microsoft Network Adapter Multiplexor Driver
    LinkSpeed            : 20 Gbps
    
    Name                 : Ethernet 4
    InterfaceDescription : Intel(R) I350 Gigabit Network Connection #2
    LinkSpeed            : 0 bps
    
    Name                 : Ethernet 2
    InterfaceDescription : Intel(R) I350 Gigabit Network Connection
    LinkSpeed            : 1 Gbps

    Make a note of the Name of the 1 GbE interfaces you want to enable.

  3. Configure the interface.

    Replace interface_name with the name of the 1 GbE interface as reported by Get-NetAdapter .

    1. Enable the interface.
      > Enable-NetAdapter -Name "interface_name"
    2. Add the interface to the NIC team.
      > Add-NetLBFOTeamMember -Team NetAdapterTeam -Name "interface_name"

      If you want to configure the interface as a standby for the 10 GbE interfaces, include the parameter -AdministrativeMode Standby

    Perform these steps once for each 1 GbE interface you want to enable.
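
If you have more than one 1 GbE interface to bring up, repeat the same two commands for each adapter. A minimal sketch follows, using the adapter name Ethernet 2 from the sample output above and assuming you want the interface to act as a standby for the 10 GbE links:

> Enable-NetAdapter -Name "Ethernet 2"
> Add-NetLBFOTeamMember -Team NetAdapterTeam -Name "Ethernet 2" -AdministrativeMode Standby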

Changing the Hyper-V Host Password

The cluster software needs to be able to log on to each host as the Administrator user to perform standard cluster operations, such as querying the status of VMs in the cluster. Therefore, after you change the Administrator password, it is critical to update the cluster configuration with the new password.

About this task

Tip: Although it is not required for the Administrator user to have the same password on all hosts, doing so makes cluster management and support much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

Procedure

  1. Change the Administrator password on all hosts.
    Perform these steps on every Hyper-V host in the cluster.
    1. Log on to the Hyper-V host with Remote Desktop Connection.
    2. Press Ctrl+Alt+End to display the management screen.
    3. Click Change a Password .
    4. Enter the old password and the new password in the specified fields and click the right arrow button.
    5. Click OK to acknowledge the password change.
  2. Update the Administrator user password for all hosts in the cluster configuration.
    Warning: If you do not perform this step, the web console no longer shows correct statistics and alerts, and other cluster operations fail.
    1. Log on to any CVM in the cluster with SSH.
    2. Find the host IDs.

      On the clusters running the AOS release 4.5.x, type:

      nutanix@cvm$ ncli host list | grep -E 'ID|Hypervisor Address'

      On the clusters running the AOS release 4.6.x or later, type:

      nutanix@cvm$ ncli host list | grep -E 'Id|Hypervisor Address'

      Note the host ID for each hypervisor host.

    3. Update the hypervisor host password.
      nutanix@cvm$ ncli managementserver edit name=host_addr \
       password='host_password' 
      nutanix@cvm$ ncli host edit id=host_id \
       hypervisor-password='host_password'
      • Replace host_addr with the IP address of the hypervisor host.
      • Replace host_id with a host ID you determined in the preceding step.
      • Replace host_password with the Administrator password on the corresponding hypervisor host.

      Perform this step for every hypervisor host in the cluster.

Changing a Host IP Address

Perform these steps once for every hypervisor host in the cluster. Complete the entire procedure on a host before proceeding to the next host.

Before you begin

Remove the host from the failover cluster and domain before changing the host IP address.

Procedure

  1. Configure networking on the node by following Configuring Host Networking for Hyper-V Manually.
  2. Log on to every Controller VM in the cluster and restart genesis.
    nutanix@cvm$ genesis restart

    If the restart is successful, output similar to the following is displayed.

    Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
    Genesis started on pids [30378, 30379, 30380, 30381, 30403]
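
Recent AOS releases also include the allssh helper on the CVMs, which runs a command on every CVM in the cluster. Assuming the helper is available in your release, the cluster-wide restart can be issued from a single CVM as shown below; otherwise, SSH to each CVM and run genesis restart individually.

nutanix@cvm$ allssh genesis restart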

Changing the VLAN ID for Controller VM

About this task

Perform the following procedure to change the VLAN ID of the Controller VM.

Procedure

  1. Log on to the Hyper-V host with the IPMI remote console and run the following PowerShell command to get the VLAN settings configured.
    > Get-VMNetworkAdapterVlan
  2. Change the VLAN ID.
    > Set-VMNetworkAdapterVlan -VMName cvm_name -VMNetworkAdapterName External -Access -VlanID vlan_ID
    Replace cvm_name with the name of the Nutanix Controller VM.

    Replace vlan_ID with the new VLAN ID.

    Note: The VM name of the Nutanix Controller VM must begin with NTNX-
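
For example, using the CVM name from the sample output earlier in this guide and a hypothetical VLAN ID of 85, the command looks like the following.

> Set-VMNetworkAdapterVlan -VMName NTNX-13SM35230026-C-CVM -VMNetworkAdapterName External -Access -VlanId 85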

Configuring VLAN for Hyper-V Host

About this task

Perform the following procedure to configure Hyper-V host VLANs.

Procedure

  1. Log on to the Hyper-V host with the IPMI remote console.
  2. Start a PowerShell prompt and run the following command to create a variable for the ExternalSwitch.
    > $netAdapter = Get-VMNetworkAdapter -Name "ExternalSwitch" -ManagementOS
  3. Set a new VLAN ID for the ExternalSwitch.
    > Set-VMNetworkAdapterVlan -VMNetworkAdapter $netAdapter -Access -VlanId vlan_ID
    Replace vlan_ID with the new VLAN ID.
    You can now communicate with the Hyper-V host on the new subnet.
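
The steps can be combined into the following sketch, which uses a hypothetical VLAN ID of 85 and ends with a verification command (Get-VMNetworkAdapterVlan, as in the previous procedure, here scoped to the management OS) to confirm the new assignment.

> $netAdapter = Get-VMNetworkAdapter -Name "ExternalSwitch" -ManagementOS
> Set-VMNetworkAdapterVlan -VMNetworkAdapter $netAdapter -Access -VlanId 85
> Get-VMNetworkAdapterVlan -ManagementOS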

Configuring Host Networking for Hyper-V Manually

Perform the following procedure to manually configure the Hyper-V host networking.

About this task

Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.

Procedure

  1. Log on to the Hyper-V host with the IPMI remote console and start a PowerShell prompt.
  2. List the network adapters.
    > Get-NetAdapter | Format-List Name,InterfaceDescription,LinkSpeed

    Output similar to the following is displayed.

    Name                 : vEthernet (InternalSwitch)
    InterfaceDescription : Hyper-V Virtual Ethernet Adapter #3
    LinkSpeed            : 10 Gbps
    
    Name                 : vEthernet (ExternalSwitch)
    InterfaceDescription : Hyper-V Virtual Ethernet Adapter #2
    LinkSpeed            : 10 Gbps
    
    Name                 : Ethernet
    InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection
    LinkSpeed            : 10 Gbps
    
    Name                 : Ethernet 3
    InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection #2
    LinkSpeed            : 10 Gbps
    
    Name                 : NetAdapterTeam
    InterfaceDescription : Microsoft Network Adapter Multiplexor Driver
    LinkSpeed            : 20 Gbps
    
    Name                 : Ethernet 4
    InterfaceDescription : Intel(R) I350 Gigabit Network Connection #2
    LinkSpeed            : 0 bps
    
    Name                 : Ethernet 2
    InterfaceDescription : Intel(R) I350 Gigabit Network Connection
    LinkSpeed            : 0 bps

    Make a note of the InterfaceDescription for the vEthernet adapter that links to the physical interface you want to modify.

  3. Start the Server Configuration utility.
    > sconfig
  4. Select Networking Settings by typing 8 and pressing Enter .
  5. Change the IP settings.
    1. Select a network adapter by typing the Index number of the adapter you want to change (refer to the InterfaceDescription you found in step 2) and pressing Enter .
      Warning: Do not select the network adapter with the IP address 192.168.5.1 . This IP address is required for the Controller VM to communicate with the host.
    2. Select Set Network Adapter Address by typing 1 and pressing Enter .
    3. Select Static by typing S and pressing Enter .
    4. Enter the IP address for the host and press Enter .
    5. Enter the subnet mask and press Enter .
    6. Enter the IP address for the default gateway and press Enter .
      The host networking settings are changed.
  6. (Optional) Change the DNS servers.
    DNS servers must be configured for a host to be part of a domain. You can either change the DNS servers in the sconfig utility or with setup_hyperv.py .
    1. Select Set DNS Servers by typing 2 .
    2. Enter the primary and secondary DNS servers and press Enter .
      The DNS servers are updated.
  7. Exit the Server Configuration utility by typing 4 and pressing Enter then 15 and pressing Enter .
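
If you prefer to avoid the sconfig menus, the same settings can be applied with the standard NetTCPIP and DnsClient PowerShell cmdlets. This is a hedged sketch: the interface alias must match the vEthernet adapter you identified in step 2, and the addresses shown are hypothetical.

> New-NetIPAddress -InterfaceAlias "vEthernet (ExternalSwitch)" -IPAddress 10.1.1.21 -PrefixLength 24 -DefaultGateway 10.1.1.1
> Set-DnsClientServerAddress -InterfaceAlias "vEthernet (ExternalSwitch)" -ServerAddresses 10.1.1.5,10.1.1.6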

Joining a Host to a Domain Manually

About this task

For information about how to join a host to a domain by using utilities provided by Nutanix, see Joining the Cluster and Hosts to a Domain . Perform these steps for each Hyper-V host in the cluster to manually join a host to a domain.

Procedure

  1. Log on to the Hyper-V host with the IPMI remote console and start a PowerShell prompt.
  2. Join the host to the domain and rename it.
    > Add-Computer -DomainName domain_name -NewName node_name `
     -Credential domain_name\domain_admin_user -Restart -Force
    • Replace domain_name with the name of the domain for the host to join.
    • Replace node_name with a new name for the host.
    • Replace domain_admin_user with the domain administrator username.
    The host restarts and joins the domain.
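
For example, with a hypothetical domain of nutanix.com, a host name based on the Tulip prefix used elsewhere in this guide, and a domain administrator account, the command looks like the following.

> Add-Computer -DomainName nutanix.com -NewName Tulip-1 `
   -Credential nutanix.com\Administrator -Restart -Force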

Changing CVM Memory Configuration (Hyper-V)

About this task

You can increase the memory reserved for each Controller VM in your cluster by using the 1-click Controller VM Memory Upgrade available from the Prism Element web console. Increase memory size depending on the workload type or to enable certain AOS features. For more information about CVM memory sizing recommendations and instructions about how to increase the CVM memory, see Increasing the Controller VM Memory Size in the Prism Web Console Guide .

Hyper-V Configuration

Before configuring Nutanix storage on Hyper-V, ensure that you meet the Hyper-V installation requirements. For more information, see Hyper-V Installation Requirements. After you configure all the prerequisites for installing and setting up Hyper-V, join the Hyper-V cluster and its constituent hosts to the domain, and then create a failover cluster.

Hyper-V Installation Requirements

Ensure that the following requirements are met before installing Hyper-V.

Windows Active Directory Domain Controller

Requirements:

  • For a fresh installation, you need a version of Nutanix Foundation that is compatible with the version of Windows Server you want to install.
    Note: To install Windows Server 2016, you need Foundation 3.11.2 or later. For more information, see the Field Installation Guide.
  • The primary domain controller version must be at least Windows Server 2008 R2.
    Note: If you have a Volume Shadow Copy Service (VSS) based backup tool (for example, Veeam), the Active Directory functional level must be 2008 or higher.
  • Active Directory Web Services (ADWS) must be installed and running. By default, connections are made over TCP port 9389, and firewall policies must enable an exception on this port for ADWS.

    To test that ADWS is installed and running on a domain controller, log on by using a domain administrator account in a Windows host other than the domain controller host that is joined to the same domain and has the RSAT-AD-Powershell feature installed, and run the following PowerShell command. If the command prints the primary name of the domain controller, then ADWS is installed and the port is open.

    > (Get-ADDomainController).Name
  • The domain controller must run a DNS server.
    Note: If any of the above requirements are not met, you need to manually create an Active Directory computer object for the Nutanix storage in the Active Directory, and add a DNS entry for the name.
  • Ensure that the Active Directory domain is configured correctly for consistent time synchronization.
  • Place the AD server in a separate virtual or physical host residing in storage that is not dependent on the domains that the AD server manages.
    Note: Do not run a virtual Active Directory domain controller (DC) on a Nutanix Hyper-V cluster and join the cluster to the same domain.

Accounts and Privileges:

  • An Active Directory account with permission to create new Active Directory computer objects for either a storage container or Organizational Unit (OU) where Nutanix nodes are placed. The credentials of this account are not stored anywhere.
  • An account that has sufficient privileges to join a Windows host to a domain. The credentials of this account are not stored anywhere. These credentials are only used to join the hosts to the domain.

Additional Information Required:

  • The IP address of the primary domain controller.
    Note: The primary domain controller IP address is set as the primary DNS server on all the Nutanix hosts. It is also set as the NTP server in the Nutanix storage cluster to keep the Controller VM, host, and Active Directory time synchronized.
  • The fully qualified domain name to which the Nutanix hosts and the storage cluster are going to be joined.

SCVMM

Note: Relevant only if you have SCVMM in your environment.

Requirements:

  • The SCVMM version must be at least 2016 and it must be installed on Windows Server 2016. If you have SCVMM on an earlier release, upgrade it to 2016 before you register a Nutanix cluster running Hyper-V.
  • Kerberos authentication for storage is optional for Windows Server 2012 R2 (see Enabling Kerberos for Hyper-V), but it is required for Windows Server 2016. However, for Kerberos authentication to work with Windows Server 2016, the Active Directory server must reside outside the Nutanix cluster.
  • The SCVMM server must allow PowerShell remoting.

    To test this scenario, log on with the SCVMM administrator account to a Windows host other than the SCVMM host (for example, the domain controller), and run the following PowerShell command. If the command prints the name of the SCVMM server, then PowerShell remoting to the SCVMM server is not blocked.

    > Invoke-Command -ComputerName scvmm_server -ScriptBlock {hostname} -Credential MYDOMAIN\username

    Replace scvmm_server with the SCVMM host name and MYDOMAIN with Active Directory domain name.

    Note: If the SCVMM server does not allow PowerShell remoting, you can perform the SCVMM setup manually by using the SCVMM user interface.
  • The ipconfig command must run in a PowerShell window on the SCVMM server. To verify, run the following command.

    > Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential MYDOMAIN\username

    Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with Active Directory domain name.

  • The SMB client configuration in the SCVMM server should have RequireSecuritySignature set to False. To verify, run the following command.

    > Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration | FL RequireSecuritySignature}

    Replace scvmm_server_name with the SCVMM host name.

    A domain policy can set this value to True. In that case, modify the domain policy to set it to False. You can also change the value back to False directly, but the change might not persist if a policy reverts it to True. To change it, run the following command in PowerShell on the SCVMM host after logging on as a domain administrator.

    > Set-SMBClientConfiguration -RequireSecuritySignature $False -Force

    If you are changing the value from True to False, confirm that the policies applied to the SCVMM host have the correct value. On the SCVMM host, run rsop.msc to review the resultant set of policy details, and verify the value by navigating to Servername > Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options: Policy Microsoft network client: Digitally sign communications (always). The value displayed in RSOP must be Disabled or Not Defined for the change to persist. If RSOP shows the value as Enabled, update the group policies that apply to the SCVMM server to set it to Disabled; otherwise, RequireSecuritySignature changes back to True at a later time. After setting the policy in Active Directory and propagating it to the domain controllers, refresh the SCVMM server policy by running gpupdate /force, and confirm in RSOP that the value is Disabled.
    Note: If security signing is mandatory, then you need to enable Kerberos in the Nutanix cluster. In this case, it is important to ensure that the time remains synchronized between the Active Directory server, the Nutanix hosts, and the Nutanix Controller VMs. The Nutanix hosts and the Controller VMs set their NTP server as the Active Directory server, so it should be sufficient to ensure that Active Directory domain is configured correctly for consistent time synchronization.
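
    As a quick check after the policy refresh described above, the following sketch (standard Windows commands) forces a Group Policy update and re-reads the client setting; the expected result is RequireSecuritySignature : False.

    > gpupdate /force
    > Get-SMBClientConfiguration | FL RequireSecuritySignature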

Accounts and Privileges:

  • When adding a host or a cluster to the SCVMM, the run-as account you are specifying for managing the host or cluster must be different from the service account that was used to install SCVMM.
  • The run-as account must be a domain account and must have local administrator privileges on the Nutanix hosts. This can be a domain administrator account. When the Nutanix hosts are joined to the domain, domain administrator accounts automatically take administrator privileges on the host. If the domain account used as the run-as account in SCVMM is not a domain administrator account, you need to manually add it to the list of local administrators on each host by running sconfig .
    • SCVMM domain account with administrator privileges on SCVMM and PowerShell remote execution privileges.
  • If you want to install SCVMM server, a service account with local administrator privileges on the SCVMM server.

IP Addresses

  • One IP address for each Nutanix host.
  • One IP address for each Nutanix Controller VM.
  • One IP address for each Nutanix host IPMI interface.
  • One IP address for the Nutanix storage cluster.
  • One IP address for the Hyper-V failover cluster.
Note: For N nodes, (3*N + 2) IP addresses are required. All IP addresses must be in the same subnet.
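
For example, a four-node cluster requires (3*4) + 2 = 14 IP addresses: four for the hosts, four for the Controller VMs, four for the IPMI interfaces, one for the Nutanix storage cluster, and one for the Hyper-V failover cluster.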

DNS Requirements

  • Each Nutanix host must be assigned a name of 15 characters or less, which gets automatically added to the DNS server during domain joining.
  • The Nutanix storage cluster needs to be assigned a name of 15 characters or less, which must be added to the DNS server when the storage cluster is joined to the domain.
  • The Hyper-V failover cluster must be assigned a name of 15 characters or less, which gets automatically added to the DNS server when the failover cluster is created.
  • After the Hyper-V configuration, all names must resolve to an IP address from the Nutanix hosts, the SCVMM server (if applicable), and any other host that needs access to the Nutanix storage, for example, a host running the Hyper-V Manager.

Storage Access Requirements

  • Virtual machine and virtual disk paths must always refer to the Nutanix storage cluster by name, not the external IP address. If you use the IP address, it directs all the I/O to a single node in the cluster and thereby compromises performance and scalability.
    Note: For an external non-Nutanix host that needs to access Nutanix SMB shares, see the Nutanix SMB Shares Connection Requirements from Outside the Cluster topic.

Host Maintenance Requirements

  • When applying Windows updates to the Nutanix hosts, restart the hosts one at a time, ensuring that Nutanix services come up fully in the Controller VM of the restarted host before you update the next host. You can accomplish this by using Cluster-Aware Updating with a Nutanix-provided script, which can be plugged into the Cluster Aware Update Manager as a pre-update script. This pre-update script ensures that the Nutanix services go down on only one host at a time, ensuring availability of storage throughout the update procedure. For more information about cluster-aware updating, see Installing Windows Updates with Cluster-Aware Updating.
    Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the domain policies.

General Host Requirements

  • Hyper-V hosts must have the remote script execution policy set at least to RemoteSigned . A Restricted setting might cause issues when you reboot the CVM.
Note: Nutanix supports the installation of language packs for Hyper-V hosts.

Limitations and Guidelines

Nutanix clusters running Hyper-V have the following limitations. Certain limitations might be attributable to other software or hardware vendors:

Guidelines

Hyper-V 2016 Clusters and Support for Windows Server 2016
  • VHD Set files (.vhds) are a new shared virtual disk model for guest clusters in Windows Server 2016 and are not supported. You can import existing shared .vhdx disks to Windows Server 2016 clusters. New VHDX format sharing is supported; only fixed-size VHDX sharing is supported.

    Use the PowerShell Add-VMHardDiskDrive command to attach any existing or new VHDX file in shared mode to VMs. For example: Add-VMHardDiskDrive -VMName Node1 -Path \\gogo\smbcontainer\TestDisk\Shared.vhdx -SupportPersistentReservations .

Upgrading Hyper-V Hypervisor Hosts
  • When upgrading hosts to Hyper-V 2016, 2019, or later versions, the local administrator user name and password are reset to the default administrator name Administrator and password nutanix/4u. Any previous changes to the administrator name or password are overwritten.
General Guidelines
  • Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.
  • If you are destroying a cluster and creating a new one and want to reuse the hostnames, failover cluster name, and storage object name of the previous cluster, remove their computer accounts and objects from AD and DNS first.

Limitations

  • Intel Advanced Network Services (ANS) is not compatible with Load Balancing and Failover (LBFO), the built-in NIC teaming feature in Hyper-V. For more information, see the Intel support article, Teaming with Intel® Advanced Network Services .
  • Nutanix does not support the online resizing of the shared virtual hard disks (VHDX files).

Configuration Scenarios

After using Foundation to create a cluster, you can use the Nutanix web console to join the Hyper-V cluster and its constituent hosts to the domain, create the Hyper-V failover cluster, and enable Kerberos.

Note: If you are installing Windows Server 2016, you do not have to enable Kerberos. Kerberos is enabled during cluster creation.

You can then use the setup_hyperv.py script to add hosts and storage to SCVMM, configure a Nutanix library share in SCVMM, and register Nutanix storage containers as file shares in SCVMM.

Note: You can use the setup_hyperv.py script only with a standalone SCVMM instance. The script does not work with an SCVMM cluster.

The usage of the setup_hyperv.py script is as follows.

nutanix@cvm$ setup_hyperv.py flags command
commands:
register_shares
setup_scvmm
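
For example, a typical first run that adds the hosts and storage to SCVMM would be invoked as shown below. The exact flags and interactive prompts vary by AOS release, so treat this as a sketch and check the script's built-in help.

nutanix@cvm$ setup_hyperv.py setup_scvmm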

Nonconfigurable Components

The components listed here are configured by the Nutanix manufacturing and installation processes. Do not modify any of these components except under the direction of Nutanix Support.

Nutanix Software

Modifying any of the following Nutanix software settings may inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.

  • Local datastore name.
  • Configuration and contents of any CVM (except memory configuration to enable certain features).
Important: Note the following important considerations about CVMs.
  • Do not delete the Nutanix CVM.
  • Do not take a snapshot of the CVM for backup.
  • Do not rename, modify, or delete the admin and nutanix user accounts of the CVM.
  • Do not create additional CVM user accounts.

    Use the default accounts ( admin or nutanix ), or use sudo to elevate to the root account.

  • Do not decrease CVM memory below recommended minimum amounts required for cluster and add-in features.

    Nutanix Cluster Checks (NCC), preupgrade cluster checks, and the AOS upgrade process detect and monitor CVM memory.

  • Nutanix does not support the usage of third-party storage on the host part of Nutanix clusters.

    Normal cluster operations might be affected if there are connectivity issues with the third-party storage you attach to the hosts in a Nutanix cluster.

  • Do not run any commands on a CVM that are not in the Nutanix documentation.

Hyper-V Settings

  • Cluster name (using the web console)
  • Controller VM name
  • Controller VM virtual hardware configuration file (.xml file in Hyper-V version 2012 R2 and earlier and .vmcx file in Hyper-V version 2016 and later). Each AOS version and upgrade includes a specific Controller VM virtual hardware configuration. Therefore, do not edit or otherwise modify the Controller VM virtual hardware configuration file.
  • Host name (you can configure the host name only at the time of creating and expanding the cluster)
  • Internal switch settings (internal virtual switch and internal virtual network adapter) and external network adapter name

    Two virtual switches are created on the Nutanix host, ExternalSwitch and InternalSwitch. Two virtual network adapters are created on the host corresponding to these virtual switches, vEthernet (ExternalSwitch) and vEthernet (InternalSwitch).

    Note: Do not delete these switches and adapters. Do not change the names of the internal virtual switch, internal virtual network adapter, and external virtual network adapter. You can change the name of the external virtual switch. For more information about changing the name of the external virtual switch, see Updating the Cluster After Renaming the Hyper-V External Virtual Switch.
  • Windows roles and features

    Do not install new Windows roles or features on the Nutanix hosts. This restriction especially includes the Multipath IO feature, which can cause the Nutanix storage to become unavailable.

    Do not apply GPOs that affect the Log on as a service right to the Nutanix nodes. It is recommended that you do not remove the default entries for the following services:

    NT Service\All Services

    NT Virtual Machine\Virtual Machines

  • Note: This best practice helps keep the host operating system free of roles, features, and applications that aren't required to run Hyper-V. For more information, see the Hyper-V should be the only enabled role document in the Microsoft documentation portal.
  • Controller VM pre-configured VM setting of Automatic Start Action
  • Controller VM high-availability setting
  • Controller VM operations: migrating, saving state, or taking checkpoints of the Controller VM

Adding the Cluster and Hosts to a Domain

After completing Foundation of the cluster, you need to add the cluster and its constituent hosts to the Active Directory (AD) domain. Adding the cluster and hosts to the domain facilitates centralized administration and security through Microsoft services such as Group Policy, and enables administrators to manage the distribution of updates and hotfixes.

Before you begin

  • If you have a VLAN segmented network, verify that you have assigned the VLAN tags to the Hyper-V hosts and Controller VMs. For information about how to configure VLANs for the Controller VM, see the Advanced Setup Guide.
  • Ensure that you have valid credentials of the domain account that has the privileges to create a new computer account or modify an existing computer account in the Active Directory domain. An Active Directory domain created by using non-ASCII text may not be supported. For more information about usage of ASCII or non-ASCII text in Active Directory configuration, see Internationalization (i18n) .

Procedure

  1. Log on to the Web Console by using one of the Controller VM IP addresses or the cluster virtual IP address.
  2. Click the gear icon in the main menu and select Join Cluster and Hosts to the Domain on the Settings page.
    Figure. Join Cluster and Hosts to the Domain

  3. Enter the fully qualified name of the domain that you want to join the cluster and its constituent hosts to in the Full Domain Name box.
  4. Enter the IP address of the name server in the Name Server IP Address box that can resolve the domain name that you have entered in the Full Domain Name box.
  5. In the Base OU Path box, type the OU (organizational unit) path where the computer accounts must be stored after the host joins a domain. For example, if the organization is nutanix.com and the OU is Documentation, the Base OU Path can be specified as OU=Documentation,DC=nutanix,DC=com
    Specifying the Base OU Path is optional. When you specify the Base OU Path, the computer accounts are stored in the Base OU Path within the Active Directory after the hosts join a domain. If the Base OU Path is not specified, the computer accounts are stored in the default Computers OU.
  6. Enter a name for the cluster in the Nutanix Cluster Name box.
    The cluster name must not be more than 15 characters and must be a valid NetBIOS name.
  7. Enter the virtual IP address of the cluster in the Nutanix Cluster Virtual IP Address box.
    If you have not already configured the virtual IP address of the cluster, you can configure it by using this box.
  8. Enter the prefix that should be used to name the hosts (according to your convention) in the Prefix box.
    • The prefix name must not end with a period.
    • The prefix name must not be more than 11 characters.
    • The prefix name must be a valid NetBIOS name.

      For example, if you enter the prefix name Tulip, the hosts are named Tulip-1, Tulip-2, and so on, in increasing order of the external IP addresses of the hosts.

    If you do not provide any prefix, the default name of NTNX- block-number is used. Click Advanced View to see the expanded view of all the hosts in all the blocks of the cluster and to rename them individually.
  9. In the Credentials field, enter the logon name and password of the domain account that has the privileges to create new computer accounts or modify existing computer accounts in the Active Directory domain.
    Ensure that the logon name is in the DOMAIN\USERNAME format. The cluster and its constituent hosts require these credentials to join the AD domain. Nutanix does not store the credentials.
  10. When all the information is correct, click Join .
    The cluster is added to the domain. All the hosts are renamed, added to the domain, and restarted. Allow the hosts and Controller VMs some time to start up. After the cluster is ready, the logon page is displayed.

What to do next

Create a Microsoft failover cluster. For more information, see Creating a Failover Cluster for Hyper-V.

Creating a Failover Cluster for Hyper-V

Before you begin

Perform the following tasks before you create a failover cluster:

Perform the following procedure to create a failover cluster that includes all the hosts in the cluster.

Procedure

  1. Log on to the Prism Element web console by using one of the Controller VM IP addresses or by using the cluster virtual IP address.
  2. Click the gear icon in the main menu and select Configure Failover Cluster from the Settings page.
    Figure. Configure Failover Cluster

  3. Type the failover cluster name in the Failover Cluster Name text box.
    The failover cluster name must not be more than 15 characters and must be a valid NetBIOS name.
  4. Type an IP address for the Hyper-V failover cluster in the Failover Cluster IP Address text box.
    This address is for the cluster of Hyper-V hosts that are currently being configured. It must be unique, different from the cluster virtual IP address and from all other IP addresses assigned to the hosts and Controller VMs. It must be in the same network range as the Hyper-V hosts.
  5. In the Credentials field, type the logon name and password of the domain account that has the privileges to create a new account or modify existing accounts in the Active Directory domain.
    The logon name must be in the format DOMAIN\USERNAME . The credentials are required to create a failover cluster. Nutanix does not store the credentials.
  6. Click Create Cluster .
    A failover cluster is created by the name that has been provided and it includes all the hosts in the cluster.
    For information on manually creating a failover cluster, see Manually Creating a Failover Cluster (SCVMM User Interface).

Manually Creating a Failover Cluster (SCVMM User Interface)

Join the hosts to the domain as described in Adding the Cluster and Hosts to a Domain in the Hyper-V Administration for Acropolis guide.

About this task

Perform the following procedure to manually create a failover cluster for Hyper-V by using System Center VM Manager (SCVMM).

If you are not using SCVMM or are using Hyper-V Manager, see Creating a Failover Cluster for Hyper-V.

Procedure

  1. Start the Failover Cluster Manager utility.
  2. Right-click and select Create Cluster , and click Next .
  3. Enter all the hosts that you want to add to the Failover cluster, and click Next .
  4. Select the No. I do not require support from Microsoft for this cluster, and therefore do not want to run the validation tests. When I click Next, continue creating the cluster option, and click Next .
    Note:

    If you select Yes , two tests fail when you run the cluster validation tests. The tests fail because the internal network adapter on each host is configured with the same IP address (192.168.5.1). The network validation tests fail with the following error message:

    Duplicate IP address

    The first failure is benign because the internal network is reachable only within a host, so the internal adapter can have the same IP address on different hosts. The second test, Validate Network Communication, fails due to the presence of the internal network adapter. Both failures are benign and can be ignored.

  5. Enter a name for the cluster, specify a static IP address, and click Next .
  6. Clear the All eligible storage to the cluster check box, and click Next .
  7. Wait until the cluster is created. After you receive the message that the cluster is successfully created, click Finish to exit the Cluster Creation wizard.
  8. Go to Networks in the cluster tree, select Cluster Network 1 , and ensure that it is the internal network by verifying the IP address in the summary pane. The network must be 192.168.5.0/24 as shown in the following screenshot.
    Figure. Failover Cluster Manager

  9. Click the Action tab on the toolbar and select Live Migration Settings .
  10. Remove Cluster Network 1 from Networks for Live Migration and click OK .
    Note: If you do not perform this step, live migrations fail because the internal network is added to the live migration network lists. Log on to SCVMM, add the cluster to SCVMM, check the host migration setting, and ensure that the internal network is not listed.

Changing the Failover Cluster IP Address

About this task

Perform the following procedure to change your Hyper-V failover cluster IP address.

Procedure

  1. Open Failover Cluster Manager and connect to your cluster.
  2. Enter the name of any one of the Hyper-V hosts and click OK .
  3. In the Failover Cluster Manager pane, select your cluster and expand Cluster Core Resources .
  4. Right-click the cluster, and select Properties > IP Address .
  5. Change the IP address of your failover cluster using the Edit option and click OK .
  6. Click Apply .

Enabling Kerberos for Hyper-V

If you are running Windows Server 2012 R2, perform the following procedure to configure Kerberos to secure the storage. You do not have to perform this procedure for Windows Server 2016 because Kerberos is enabled automatically during failover cluster creation.

Before you begin

  • Join the hosts to the domain as described in Adding the Cluster and Hosts to a Domain.
  • Verify that you have configured a service account for delegation. For more information on enabling delegation, see the Microsoft documentation .

Procedure

  1. Log on to the web console by using one of the Controller VM IP addresses or by using the cluster virtual IP address.
  2. Click the gear icon in the main menu and select Kerberos Management from the Settings page.
    Figure. Enabling Kerberos

  3. Set the Kerberos Required option to enabled.
  4. In the Credentials field, type the logon name and password of the domain account that has the privileges to create and modify the virtual computer object representing the cluster in Active Directory. The credentials are required for enabling Kerberos.
    The logon name must be in the format DOMAIN\USERNAME . Nutanix does not store the credentials.
  5. Click Save .

Configuring the Hyper-V Computer Object by Using Kerberos

About this task

Perform the following procedure to complete the configuration of the Hyper-V Computer Object by using Kerberos and SMB signing (for enhanced security).
Note: Nutanix recommends that you configure Kerberos during a maintenance window to ensure cluster stability and prevent loss of storage access for user VMs.

Procedure

  1. Log on to Domain Controller and perform the following for each Hyper-V host computer object.
    1. Right-click the host object, and go to Properties . In the Delegation tab, select the Trust this computer for delegation to specified services only option, and select Use any authentication protocol .
    2. Click Add to add the cifs service of the Nutanix storage cluster object.
    Figure. Adding the cifs of the Nutanix storage cluster object

  2. Check the Service Principal Name (SPN) of the Nutanix storage cluster object.
    > Setspn -l name_of_cluster_object

    Replace name_of_cluster_object with the name of the Nutanix storage cluster object.

    Output similar to the following is displayed.

    Figure. SPN Registration

    If the SPN is not registered for the Nutanix storage cluster object, create the SPN by running the following commands.

    > Setspn -S cifs/name_of_cluster_object name_of_cluster_object
    > Setspn -S cifs/FQDN_of_the_cluster_object name_of_cluster_object

    Replace name_of_cluster_object with the name of the Nutanix storage cluster object and FQDN_of_the_cluster_object with the domain name of the Nutanix storage cluster object.

    Example

    > Setspn -S cifs/virat virat
    > Setspn -S cifs/virat.sre.local virat
    
  3. [Optional] To enable the SMB signing feature, log on to each Hyper-V host by using RDP and run the following PowerShell command to change the Require Security Signature setting to True .
    > Set-SMBClientConfiguration -RequireSecuritySignature $True -Force
    Caution: The SMB server only communicates with an SMB client that can perform SMB packet signing. Therefore, if you enable the SMB signing feature, you must enable it on all the Hyper-V hosts in the cluster.

Disabling Kerberos for Hyper-V

Perform the following procedure to disable Kerberos.

Procedure

  1. Disable SMB signing.
    Log on to each Hyper-V host by using RDP and run the following PowerShell command to change the Require Security Signature setting to False .
    > Set-SMBClientConfiguration -RequireSecuritySignature $False -Force
  2. Disable Kerberos from the Prism web console.
    1. Log on to the web console by using one of the Controller VM IP addresses or the cluster virtual IP address.
    2. From the gear icon, click Kerberos Management .
    3. Set the Kerberos Required option to disabled.
    4. In the Credentials field, type the logon name and password of the domain account that has the privileges to create or modify the virtual computer object representing the cluster in Active Directory. The credentials are required for disabling Kerberos.
      This logon name must be in the format DOMAIN\USERNAME . Nutanix does not store the credentials.
    5. Click Save .

Setting Up Hyper-V Manager

Perform the following steps to set up Hyper-V Manager.

Before you begin

  • Add the server running Hyper-V Manager to the allowlist by using the Prism user interface. For more information, see Configuring a Filesystem Whitelist in the Prism Web Console Guide .
  • If Kerberos is enabled for accessing storage (by default it is disabled), enable SMB delegation.

Procedure

  1. Log into the Hyper-V Manager.
  2. Right-click the Hyper-V Manager and select Connect to Server .
  3. Type the name of the host that you want to add and click OK .
  4. Right-click the host and select Hyper-V Settings .
  5. Click Virtual Hard Disks and verify that the location to store virtual hard disk files is the same as the location that you specified during storage container creation.
    For more information, see Creating a Storage Container in the Prism Web Console Guide .
  6. Click Virtual Machines and verify that the location to store virtual machine configuration files is the same as the location that you specified during storage container creation.
    For more information, see Creating a Storage Container in the Prism Web Console Guide .
    After performing these steps, you are ready to create and manage virtual machines by using Hyper-V Manager.
    Warning: Virtual machines created by using Hyper-V must never be defined on storage by using an IP-based SMB share location.

Cluster Management

Installing Windows Updates with Cluster-Aware Updating

With storage containers that are configured with a replication factor of 2, Nutanix clusters can tolerate only a single node being down at a time. For such clusters, you need a way to update nodes one node at a time.

If your Nutanix cluster runs Microsoft Hyper-V, you can use the Cluster-Aware Updating (CAU) utility, which ensures that only one node is down at a time when Windows updates are applied.

Note: Nutanix does not recommend performing a manual patch installation for a Hyper-V cluster running on the Nutanix platform.

The procedure for configuring CAU for a Hyper-V cluster running on the Nutanix platform is the same as that for a Hyper-V cluster running on any other platform. However, for a Hyper-V cluster running on Nutanix, you need to use a Nutanix pre-update script created specifically for Nutanix clusters. The pre-update script ensures that the CAU utility does not proceed to the next node until the Controller VM on the node that was updated is fully back up, preventing a condition in which multiple Controller VMs are down at the same time.

The CAU utility might not install all the recommended updates, and you might have to install some updates manually. For a complete list of recommended updates, see the following articles in the Microsoft documentation portal.

  • Recommended hotfixes, updates, and known solutions for Windows Server 2012 R2 Hyper-V environments
  • Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters

Revisit these articles periodically and install any updates that are added to the list.

Note: Ensure that the Nutanix Controller VM and the Hyper-V host are placed in maintenance mode before any maintenance or patch installation. For more information, see Placing the Controller VM and Hyper-V Host in Maintenance Mode.

Preparing to Configure Cluster-Aware Updating

Configure your environment to run the Nutanix pre-update script for Cluster-Aware Updating. The Nutanix pre-update script is named cau_preupdate.ps1 and is, by default, located on each Hyper-V host in C:\Program Files\Nutanix\Utils\ . To ensure smooth configuration, make sure you have everything you need before you begin to configure CAU.

Before you begin

  • Review the required and recommended Windows updates for your cluster.
  • See the Microsoft documentation for information about the Cluster-Aware Updating feature. In particular, see the requirements and best practices for Cluster-Aware Updating in the Microsoft documentation portal.
  • To enable the migration of virtual machines from one node to another, configure the virtual machines for high availability.

About this task

To configure your environment to run the Nutanix pre-update script, do the following:

Procedure

  1. If you plan to use self-updating mode, do the following:
    1. On each Hyper-V host and on the management workstation that you are using to configure CAU, create a directory such that the path to the directory and the directory name do not contain spaces (for example, C:\cau ).
      Note: The location of the directory must be the same on the hosts and the management workstation.
    2. From C:\Program Files\Nutanix\Utils\ on each host, copy the Nutanix pre-update file cau_preupdate.ps1 to the directory you created on the hosts and on the management workstation.

    A directory whose path does not contain spaces is necessary because Microsoft does not support the use of spaces in the PreUpdateScript field. The space in the default path ( C:\Program Files\Nutanix\Utils\ ) prevents the cluster from updating itself in the self-updating mode. However, that space does not cause issues if you update the cluster by using the remote-updating mode. If you plan to use only the remote-updating mode, you can use the pre-update script from its default location. If you plan to use the self-updating mode or both self-updating and remote-updating modes, use a directory whose path does not contain spaces.

  2. On each host, do the following.
    1. Unblock the script file.
      > powershell.exe Unblock-File -Path 'path-to-pre-update-script'

      Replace path-to-pre-update-script with the full path to the pre-update script (for example, C:\cau\cau_preupdate.ps1 ).

    2. Allow Windows PowerShell to run unsigned code.
      > powershell.exe Set-ExecutionPolicy RemoteSigned
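
The preceding preparation can be condensed into the following per-host sketch, which assumes you run it from an elevated PowerShell prompt. The C:\cau directory is the example path from step 1, and the copy source is the default script location given above.

> New-Item -ItemType Directory -Path C:\cau -Force
> Copy-Item 'C:\Program Files\Nutanix\Utils\cau_preupdate.ps1' -Destination C:\cau
> Unblock-File -Path 'C:\cau\cau_preupdate.ps1'
> Set-ExecutionPolicy RemoteSigned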

Accessing the Cluster-Aware Updating Dialog Box

You configure CAU by using the Cluster-Aware Updating dialog box.

About this task

To access the Cluster-Aware Updating dialog box, do the following:

Procedure

  1. Open Failover Cluster Manager and connect to your cluster.
  2. In the Configure section, click Cluster-Aware Updating .
    Figure. Cluster-Aware Updating Dialog Box

    The Cluster-Aware Updating dialog box appears. If the dialog box indicates that you are not connected to the cluster, in the Connect to a failover cluster field, enter the name of the cluster, and then click Connect .

Specifying the Nutanix Pre-Update Script in an Updating Run Profile

Specify the Nutanix pre-update script in an Updating Run and save the configuration to an Updating Run Profile in the XML format. This is a one-time task. The XML file contains the configuration for the cluster-update operation. You can reuse this file to drive cluster updates through both self-updating and remote-updating modes.

About this task

To specify the Nutanix pre-update script in an Updating Run Profile, do the following:

Procedure

  1. In the Cluster-Aware Updating dialog box, click Create or modify Updating Run Profile .
    You can see the current location of the XML file under the Updating Run profile to start from: field.
    Note: You cannot overwrite the default CAU configuration file, because non-local administrative users, including the AD administrative users, do not have permissions to modify files in the C:\Windows\System32\ directory.
  2. Click Save As .
  3. Select a new location for the file and rename the file. For example, you can rename the file to msfc_updating_run_profile.xml and save it to the following location: C:\Users\administrator\Documents .
  4. Click Save .
  5. In the Cluster-Aware Updating dialog box, under Cluster Actions , click Configure cluster self-updating options .
  6. Go to Input Settings > Advanced Options and, in the Updating Run options based on: field, click Browse to select the location to which you saved the XML file in an earlier step.
  7. In the Updating Run Profile Editor dialog box, in the PreUpdateScript field, specify the full path to the cau_preupdate.ps1 script. The default full path is C:\Program Files\Nutanix\Utils\cau_preupdate.ps1 . The default path is acceptable if you plan to use only the remote-updating mode. If you plan to use the self-updating mode, place cau_preupdate.ps1 in a directory such that the path does not include spaces. For more information, see Preparing to Configure Cluster-Aware Updating.
    Note: You can also place the script on the SMB file share if you can access the SMB file share from all your hosts and the workstation that you are configuring the CAU from.
  8. Click Save .
    Caution: Do not change the auto-populated ConfigurationName field value. Otherwise, the script fails.
    The CAU configuration is saved to an XML file in the following folder: C:\Windows\System32

What to do next

Save the Updating Run Profile to another location and use it for any other cluster updates.

Updating a Cluster by Using the Remote-Updating Mode

You can update the cluster by using the remote-updating mode to verify that CAU is configured and working correctly. You might need to use the remote-updating mode even when you have configured the self-updating mode, but mostly for updates that cannot wait until the next self-updating run.

About this task

Note: Do not turn off your workstation until all updates have been installed.
To update a cluster by using the remote-updating mode, do the following:

Procedure

  1. In the Cluster-Aware Updating dialog box, click Apply updates to this cluster .
    The Cluster-Aware Updating Wizard appears.
  2. Read the information on the Getting Started page, and then click Next .
  3. On the Advanced Options page, do the following.
    1. In the Updating Run options based on field, enter the full path to the CAU configuration file that you created in Specifying the Nutanix Pre-Update Script in an Updating Run Profile .
    2. Ensure that the full path to the downloaded script is shown in the PreUpdateScript field and that the value in the CauPluginName field is Microsoft.WindowsUpdatePlugin .
  4. On the Additional Update Options page, do the following.
    1. If you want to include recommended updates, select the Give me recommended updates the same way that I receive important updates check box.
    2. Click Next .
  5. On the Completion page, click Close .
    The update process begins.
  6. In the Cluster-Aware Updating dialog box, click the Log of Updates in Progress tab and monitor the update process.
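As an alternative to the wizard, you can start an Updating Run from PowerShell on the workstation. The following is a hedged sketch; cluster_name is a placeholder, and the plug-in and script path assume the defaults described earlier in this chapter.

    # Start a remote Updating Run with the Nutanix pre-update script (placeholder values).
    > Invoke-CauRun -ClusterName cluster_name `
      -CauPluginName Microsoft.WindowsUpdatePlugin `
      -PreUpdateScript 'C:\Program Files\Nutanix\Utils\cau_preupdate.ps1' `
      -Force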

Updating a Cluster by Using the Self-Updating Mode

The self-updating mode ensures that the cluster is up-to-date at all times.

About this task

To configure the self-updating mode, do the following:

Procedure

  1. In the Cluster-Aware Updating dialog box, click Configure cluster self-updating options .
    The Configure Self-Updating Options Wizard appears.
  2. Read the information on the Getting Started page, and then click Next .
  3. On the Add Clustered Role page, do the following.
    1. Select the Add the CAU clustered role, with self-updating mode enabled, to this cluster check box.
    2. If you have a prestaged computer account, select the I have a prestaged computer object for the CAU clustered role check box. Otherwise, leave the check box clear.
  4. On the Self-updating schedule page, specify details such as the self-updating frequency and start date.
  5. On the Advanced Options page, do the following.
    1. In the Updating Run options based on field, enter the full path to the CAU configuration file that you created in Specifying the Nutanix Pre-Update Script in an Updating Run Profile .
    2. Ensure that the full path to the Nutanix pre-update script is shown in the PreUpdateScript field and that the value in the CauPluginName field is Microsoft.WindowsUpdatePlugin .
  6. On the Additional Update Options page, do the following.
    1. If you want to include recommended updates, select the Give me recommended updates the same way that I receive important updates check box.
    2. Click Next .
  7. Click Close .
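You can also add the CAU clustered role with a self-updating schedule and the Nutanix pre-update script from PowerShell. The following is a minimal sketch; cluster_name, the schedule, and the C:\CAU path are placeholders (for self-updating mode the script path must not contain spaces).

    # Add the CAU clustered role with self-updating enabled (placeholder values).
    > Add-CauClusterRole -ClusterName cluster_name `
      -DaysOfWeek Sunday -WeeksOfMonth 2 `
      -CauPluginName Microsoft.WindowsUpdatePlugin `
      -PreUpdateScript 'C:\CAU\cau_preupdate.ps1' `
      -EnableFirewallRules -Force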

Moving a Hyper-V Cluster to a Different Domain

This topic describes the supported procedure to move all the hosts on a Nutanix cluster running Hyper-V from one domain to another domain. For example, you might need to do this when you are ready to transition a test cluster to your production environment. Ensure that you merge all the VM checkpoints before moving them to another domain. The VMs fail to start in another domain if there are multiple VM checkpoints.

Before you begin

This method involves cluster downtime. Therefore, schedule a maintenance window to perform the following operations.

Procedure

  1. Note: If you are using System Center Virtual Machine Manager (SCVMM) to manage the cluster, remove the cluster from the SCVMM console. Right-click the cluster in the SCVMM console, and select Remove .
    Destroy the Hyper-V failover cluster using the Failover Cluster Manager or PowerShell commands.
    Note:

    • Remove all the roles from the cluster before destroying the cluster by doing either of the following:
      • Open Failover Cluster Manager, and select Roles from the left navigation pane. Select all the VMs, and select Remove .
      • Log on to any Hyper-V host with domain administrator user credentials and remove the roles with the PowerShell command Get-ClusterGroup | Remove-ClusterGroup -RemoveResources -Force .
    • Destroying the cluster permanently removes any non-VM role. This does not affect the VMs, which remain visible in Hyper-V Manager.
    • Destroy the cluster by doing either of the following:
      • Open Failover Cluster Manager, right-click the cluster, and select More Actions > Destroy Cluster .
      • Log on to any Hyper-V host with domain administrator user credentials and remove the cluster with the PowerShell command Remove-Cluster -Force -CleanupAD , which ensures that all Active Directory objects (all hosts in the Nutanix cluster, the Hyper-V failover cluster object, and the Nutanix storage cluster object) and any corresponding entries are deleted.
  2. Log on to any controller VM in the cluster and remove the Nutanix cluster from the domain by using nCLI; ensure that you also specify the Active Directory administrator user name.
    nutanix@cvm$ ncli cluster unjoin-domain logon-name=domain\username
  3. Log on to each host as the domain administrator user and remove the domain security identifiers from the virtual machines.
    > $d = (Get-WMIObject Win32_ComputerSystem).Domain.Split(".")[0]
    > Get-VMConnectAccess | Where {$_.username.StartsWith("$d\")} | `
      Foreach {Revoke-VMConnectAccess -VMName * -UserName $_.UserName} 
  4. Caution:

    Ensure that all the user VMs are powered off before performing this step.
    Log on to any controller VM in the cluster and remove all hosts in the Nutanix cluster from the domain.
    nutanix@cvm$ allssh 'source /etc/profile > /dev/null 2>&1; winsh "\$x=hostname; netdom \
      remove \$x /domain /force"'
  5. Restart all hosts.
  6. If a controller VM fails to restart, use the Repair-CVM Nutanix PowerShell cmdlet to help you recover from this issue. Otherwise, skip this step and perform the next step.
    1. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
    2. Start the controller VM repair process.
      > Repair-CVM
      The CVM will be shutdown. Proceed (Y/N)? Y

      Progress is displayed in the PowerShell command-line shell. When the process is complete, the controller VM configuration information is displayed:

      Using the following configuration:
      
      Name                           Value
      ----                           -----
      internal_adapter_name          Internal
      name                           cvm-host-name
      external_adapter_name          External
      processor_count                8
      memory_weight                  100
      svmboot_iso_path               C:\Program Files\Nutanix\Cvm\cvm_name\svmboot.iso
      nutanix_path                   C:\Program Files\Nutanix
      vm_repository                  C:\Users\Administrator\Virtual Machines
      internal_vswitch_name          InternalSwitch
      processor_weight               200
      external_vswitch_name          ExternalSwitch
      memory_size_bytes              12884901888
      pipe_name                      \\.\pipe\SVMPipe

What to do next

Add the hosts to the new domain as described in Adding the Cluster and Hosts to a Domain.

Recover a Controller VM by Using Repair-CVM

The Repair-CVM PowerShell cmdlet can repair an unusable or deleted Controller VM by removing the existing Controller VM (if present) and creating a new one. In the Nutanix enterprise cloud platform design, no data associated with the unusable or deleted Controller VM is lost.

About this task

If a Controller VM already exists and is running, Repair-CVM prompts you to shut down the Controller VM so it can be deleted and re-created. If the Controller VM has been deleted, the cmdlet creates a new one. In all cases, the new CVM automatically powers on and joins the cluster.

A Controller VM is considered unusable when:

  • The Controller VM is accidentally deleted.
  • The Controller VM configuration is accidentally or unintentionally changed and the original configuration parameters are unavailable.
  • The Controller VM fails to restart after unjoining the cluster from a Hyper-V domain as part of a domain move procedure.

To use the cmdlet, log on to the Hyper-V host, type Repair-CVM, and follow any prompts. The repair process creates a new Controller VM based on any available existing configuration information. If the process cannot find the information or the information does not exist, the cmdlet prompts you for:

  • Controller VM name
  • Controller VM memory size in GB
  • Number of processors to assign to the Controller VM
Note: After running this command, you must manually re-apply any custom configuration that you had made, for example an increased RAM size (see the example after this procedure).

Procedure

  1. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
  2. Start the controller VM repair process.
    > Repair-CVM
    The CVM will be shutdown. Proceed (Y/N)? Y

    Progress is displayed in the PowerShell command-line shell. When the process is complete, the controller VM configuration information is displayed:

    Using the following configuration:

    Name                 Value
    ----                           -----
    internal_adapter_name          Internal
    name                           cvm-host-name
    external_adapter_name          External
    processor_count                8
    memory_weight                  100
    svmboot_iso_path               C:\Program Files\Nutanix\Cvm\cvm_name\svmboot.iso
    nutanix_path                   C:\Program Files\Nutanix
    vm_repository                  C:\Users\Administrator\Virtual Machines
    internal_vswitch_name          InternalSwitch
    processor_weight               200
    external_vswitch_name          ExternalSwitch
    memory_size_bytes              12884901888
    pipe_name                      \\.\pipe\SVMPipe
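If you had customized the Controller VM before the repair (for example, by increasing its RAM), re-apply that configuration afterwards. The following is a hedged sketch using standard Hyper-V cmdlets; cvm_name and the 32 GB value are placeholders, and static memory can be changed only while the Controller VM is powered off (use cvm_shutdown from within the Controller VM before making the change).

    # Review the current CVM configuration (placeholder VM name).
    > Get-VM -Name cvm_name | Select-Object Name, ProcessorCount, MemoryStartup
    # After shutting down the CVM with cvm_shutdown, set the new memory size and power the CVM back on.
    > Set-VMMemory -VMName cvm_name -StartupBytes 32GB
    > Start-VM -Name cvm_name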

Connect to a Controller VM by Using Connect-CVM

Nutanix installs Hyper-V utilities on each Hyper-V host for troubleshooting and Controller VM access. This procedure describes how to use Connect-CVM to launch the FreeRDP utility to access a Controller VM console when a secure shell (SSH) is not available or cannot be used.

About this task

FreeRDP launches when you run: > Connect-CVM .

Procedure

  1. Log on to a Hyper-V host in your environment and open a PowerShell command window.
  2. Start Connect-CVM.
    > Connect-CVM
  3. In the authentication dialog box, type the local administrator credentials and click OK .
  4. Wait for the FreeRDP console window to open.
  5. Log on to the Controller VM by using the Controller VM credentials.

Changing the Name of the Nutanix Storage Cluster

The name of the Nutanix storage cluster cannot be changed by using the web console.

About this task

To change the name of the Nutanix storage cluster, do the following:

Procedure

  1. Log on to the CVM with SSH.
  2. Unjoin the existing Nutanix storage cluster object from the domain.
    ncli> cluster unjoin-domain logon-name=domain\username
  3. Change the cluster name.
    ncli> cluster edit-params new-name=cluster_name

    Replace cluster_name with the new cluster name.

  4. Create a new AD object corresponding to the new storage cluster name.
    nutanix@cvm$ ncli cluster join-domain cluster-name=new_name domain=domain_name \
    external-ip-address=external_ip_address name-server-ip=dns_ip logon-name=domain\username
  5. Restart genesis on each Controller VM in the cluster.
    nutanix@cvm$ allssh 'genesis restart'
    A new entry for the cluster is created in \Windows\System32\drivers\etc\hosts on the Hyper-V hosts.

Changing the Nutanix Cluster External IP Address

About this task

To change the external IP address of the Nutanix cluster, do the following.

Procedure

  1. Log on to the Controller VM with SSH.
  2. Run the following command to change the cluster external IP address.
    nutanix@cvm$ ncli cluster edit-params external-ip-address=external_ip_address
    Replace external_ip_address with the new Nutanix cluster external IP address.
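To confirm the change, you can review the cluster details; the external IP address field in the output should show the new value.

    nutanix@cvm$ ncli cluster info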

Fast Clone a VM Based on Nutanix SMB Shares by using New-VMClone

This cmdlet fast-clones virtual machines that are based on Nutanix SMB shares. It provides options for creating one or more clones from a given virtual machine.

About this task

Run Get-Help New-VMClone -Full to get detailed help on using the cmdlet with all the options that are available.

Note: This cmdlet does not support creating clones of VMs that have Hyper-V checkpoints.

Procedure

Log on to the Hyper-V host with a Remote Desktop Connection and open a PowerShell command window.
  • The syntax to create single clone is as follows.
    > New-VMClone -VM vm_name -CloneName clone_name -ComputerName computer_name`
     -DestinationUncPath destination_unc_path -PowerOn`
    -Credential prism_credential common_parameters
  • The syntax to create multiple clones is as follows.
    > New-VMClone -VM vm_name -CloneNamePrefix  clone_name_prefix`
    -CloneNameSuffixBegin clone_name_suffix_begin -NCopies n_copies`
    -ComputerName computer_name -DestinationUncPath destination_unc_path -PowerOn`
    -Credential prism_credential -MaxConcurrency max_concurrency common_parameters
  • Replace vm_name with the name of the VM that you are cloning.
  • Replace clone_name with the name of the VM that you are creating.
  • Replace clone_name_prefix with the prefix that should be used for naming the clones.
  • Replace clone_name_suffix_begin with the starting number of the suffix.
  • Replace n_copies with the number of clones that you need to create.
  • Replace computer_name with the name of the computer on which you are creating the clone.
  • Replace destination_unc_path with the path on the Nutanix SMB share where you want to store the clone.
  • Replace prism_credential with the credential used to access Prism (the Nutanix management service).
  • Replace max_concurrency with the number of clones that you need to create in parallel.
  • Replace common_parameters with any additional parameters that you want to define. For example, -Verbose flag.
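For example, the following sketch creates five clones of a hypothetical gold-image VM; every name, path, and credential shown here is a placeholder.

    # Prompt for the Prism user name and password used by the cloning operation.
    > $cred = Get-Credential
    # Create five clones named Win10-VDI-1 through Win10-VDI-5 (placeholder names and share path).
    > New-VMClone -VM Win10-Gold -CloneNamePrefix Win10-VDI- -CloneNameSuffixBegin 1 `
      -NCopies 5 -ComputerName hyperv-host-1 `
      -DestinationUncPath \\smb_server_name\container_name -PowerOn `
      -Credential $cred -MaxConcurrency 2 -Verbose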

Change the Path of a VM Based on Nutanix SMB shares by using Set-VMPath

This cmdlet repairs the UNC paths in the metadata of VMs that are based on Nutanix SMB shares and has the following two forms.

About this task

  • Replaces the specified IP address with the supplied DNS name for every occurrence of the IP address in the UNC paths in the VM metadata or configuration file.
  • Replaces the specified SMB server name with the supplied alternative in the UNC paths in the VM metadata without taking the case into consideration.
Note: You cannot use the Set-VMPath cmdlet in 4.5 release. You can use this cmdlet for 4.5.1 or later releases.

Procedure

Log on to the Hyper-V host with a Remote Desktop Connection and open a PowerShell command window.
  • The syntax to change the IP address to DNS name is as follows.
    > Set-VMPath -VMId vm_id -IPAddress ip_address -DNSName dns_name common_parameters
  • The syntax to change the SMB server name is as follows.
    > Set-VMPath -VMId vm_id -SmbServerName smb_server_name`
    -ReplacementSmbServerName replacement_smb_server_name common_parameters
  • Replace vm_id with the ID of the VM.
  • Replace ip_address with the IP address that you want to replace in the VM metadata or configuration file.
  • Replace dns_name with the DNS name that you want to replace the IP address with.
  • Replace smb_server_name with the SMB server name that you want to replace.
  • Replace replacement_smb_server_name with the SMB server name that you want as a replacement.
  • Replace common_parameters with any additional parameters that you want to define. For example, -Verbose flag.
Note: The target VM must be powered off for the operation to complete.
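For example, the following sketch replaces an IP address with a DNS name in the metadata of a single, powered-off VM; the VM name, IP address, and DNS name are placeholders, and the VM ID is taken from Get-VM.

    # Look up the VM ID of the powered-off VM (placeholder name).
    > $vm = Get-VM -Name vm_name
    # Replace the IP address with the DNS name in the VM metadata (placeholder values).
    > Set-VMPath -VMId $vm.VMId -IPAddress ip_address -DNSName dns_name -Verbose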

Nutanix SMB Shares Connection Requirements from Outside the Cluster

Any external non-Nutanix host that needs to access Nutanix SMB shares must conform to the following requirements.

  • Any external non-Nutanix host that needs to access Nutanix SMB shares must run at least Windows 8 or later version if it is a desktop client, and Windows 2012 or later version if it is running Windows Server. This requirement is because SMB 3.0 support is required for accessing Nutanix SMB shares.
  • The IP address of the host must be allowed in the Nutanix storage cluster.
    Note: The SCVMM host IP address is automatically included in the allowlist during the setup. For other IP addresses, you can add those source addresses to the allowlist after the setup configuration is completed by using the Web Console or the nCLI cluster add-to-nfs-whitelist command.
  • For accessing a Nutanix SMB share from Windows 10 or Windows Server 2016, you must enable Kerberos on the Nutanix cluster.
  • If Kerberos is not enabled in the Nutanix storage cluster (the default configuration), then the SMB client in the host must not have RequireSecuritySignature set to True. For more information about checking the policy, see System Center Virtual Machine Manager Configuration . You can verify this by running Get-SmbClientConfiguration on the host (see the example after this list). If the SMB client is running on a Windows desktop instead of Windows Server, the account used to log on to the desktop must not be linked to an external Microsoft account.
  • If Kerberos is enabled in the Nutanix storage cluster, you can access the storage only by using the DNS name of the Nutanix storage cluster, and not by using the external IP address of the cluster.
Warning: Nutanix does not support using SMB shares of Hyper-V for storing anything other than virtual machine disks (for example, VHD or VHDX files) and their associated configuration files. This includes, but is not limited to, using Nutanix SMB shares of Hyper-V for general file sharing, virtual machine and configuration files for VMs running outside of the Nutanix nodes, or any other type of hosted repository not based on virtual machine disks.
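A quick way to check the client-side signing policy from the external host is shown in the following sketch; change the setting only if Kerberos is not enabled on the Nutanix storage cluster and your security policy permits it.

    # Check whether SMB signing is required on the client.
    > (Get-SmbClientConfiguration).RequireSecuritySignature
    # If the value is True and Kerberos is not enabled on the cluster, disable the requirement.
    > Set-SmbClientConfiguration -RequireSecuritySignature $false -Force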

Updating the Cluster After Renaming the Hyper-V External Virtual Switch

About this task

You can rename the external virtual switch on your Hyper-V cluster to a name of your choice. After you rename the external virtual switch, you must update the new name in AOS so that AOS upgrades and VM migrations do not fail.

Note: In releases earlier than AOS 5.11, the name of the external virtual switch in your Hyper-V cluster must be ExternalSwitch .

See the Microsoft documentation for instructions about how to rename the external virtual switch.
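For example, a minimal sketch using the Hyper-V PowerShell module; the switch names are placeholders.

    # Rename the external virtual switch (placeholder names).
    > Rename-VMSwitch -Name "ExternalSwitch" -NewName "new_switch_name"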

Perform the following steps after you rename the external virtual switch.

Procedure

  1. Log on to a CVM with SSH.
  2. Restart Genesis on all the CVMs in the cluster.
    nutanix@cvm$ allssh 'genesis restart'
  3. Refresh all the guest VMs.
    1. Log on to a Hyper-V host.
    2. Go to Hyper-V Manager, select the VM and, in Settings , click the Refresh icon.
    See the Microsoft documentation for the updated instructions about how to refresh the guest VMs.

Upgrade to Windows Server Version 2016 and 2019

The following procedures describe how to upgrade earlier releases of Windows Server to Windows Server 2016 and 2019. For information about fresh installation of Windows Server, see Hyper-V Configuration.
Note: If you are upgrading from Windows Server 2012 R2 and if the AOS version is less than 5.11, then upgrade to Windows Server 2016 first and then upgrade to AOS 5.17. Proceed with upgrading to Windows Server 2019 if necessary.

Hyper-V Hypervisor Upgrade Recommendations and Requirements

This section provides the requirements, recommendations, and limitations to upgrade Hyper-V.

Requirements

Note:
  • From Hyper-V 2019, if you do not choose LACP/LAG, SET is the default teaming mode. NX Series G5 and later models support Hyper-V 2019.
  • For Hyper-V 2016, if you do not choose LACP/LAG, the teaming mode is Switch Independent LBFO teaming.
  • For Hyper-V (2016 and 2019), if you choose LACP/LAG, the teaming mode is Switch Dependent LBFO teaming.
  • The platform must not be a light-compute platform.
  • Before upgrading, disable or uninstall third-party antivirus or security filter drivers that modify Windows firewall rules. The Windows firewall must accept inbound and outbound SSH traffic outside of the domain rules (see the example after this list).
  • Enable Kerberos when upgrading from Windows Server 2012 R2 to Windows Server 2016. For more information, see Enabling Kerberos for Hyper-V .
    Note: Kerberos is enabled by default when upgrading from Windows Server 2016 to Windows Server 2019.
  • Enable virtual machine migration on the host. Upgrading reimages the hypervisor. Any custom or non-standard hypervisor configurations could be lost after the upgrade is completed.
  • If you are using System Center for Virtual Machine Management (SCVMM) 2012, upgrade to SCVMM 2016 first before upgrading to Hyper-V 2016. Similarly, before upgrading to Hyper-V 2019 upgrade to SCVMM 2019.
  • Upgrade using ISOs and Nutanix JSON File
    • Upgrade using ISOs. The Prism web console supports 1-click upgrade ( Upgrade Software dialog box) of Hyper-V 2016 or 2019 by using a metadata upgrade JSON file, which is available on the Hypervisor Details page of the Nutanix Support portal, together with the Microsoft Hyper-V ISO file.
    • The Hyper-V upgrade JSON file, when used on clusters where Foundation 4.0 or later is installed, is available for Nutanix NX series G4 and later, Dell EMC XC series, or Lenovo HX series platforms. You can upgrade hosts to Hyper-V 2016 or Hyper-V 2019 (except for NX series G4 and Lenovo HX series platforms) on these platforms by using this JSON file.
      Note: Lenovo HX series platforms do not support upgrades to Hyper-V 2019 or later versions.
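To verify that the Windows firewall allows SSH, you can check for an existing rule and, if necessary, add one. The following is a hedged sketch; the rule name and scope are placeholders, and your security policy may require a narrower rule (an analogous outbound rule may also be needed).

    # List any existing firewall rules that mention SSH.
    > Get-NetFirewallRule -DisplayName "*SSH*" -ErrorAction SilentlyContinue | Select-Object DisplayName, Enabled, Direction, Action
    # Allow inbound SSH (TCP port 22) if no suitable rule exists (placeholder rule name).
    > New-NetFirewallRule -DisplayName "Allow SSH (Nutanix)" -Direction Inbound `
      -Protocol TCP -LocalPort 22 -Action Allow -Profile Any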

Limitations

  • When upgrading hosts to Hyper-V 2016, 2019, and later versions, the local administrator user name and password are reset to the default administrator name Administrator and password of nutanix/4u. Any previous changes to the administrator name or password are overwritten.
  • VMs with any associated files on local storage are lost.
    • Logical networks are not restored immediately after the upgrade. If you configure logical switches, the configuration is not retained and VMs could become unavailable.
    • Any VMs created during the hypervisor upgrade (including as part of disaster recovery operations) and not marked as HA (High Availability) experience unavailability.
    • Disaster recovery: VMs with the Automatic Stop Action property set to Save are marked as CBR Not Capable if they are upgraded to version 8.0 after upgrading the hypervisor. Change the value of Automatic Stop Action to ShutDown or TurnOff when the VM is upgraded so that it is not marked as CBR Not Capable.
  • Enabling Link Aggregation Control Protocol (LACP) for your cluster deployment is not supported when upgrading hypervisor hosts from Windows Server 2012 R2 to 2016. Preupgrade hypervisor host configuration checks fail in this case.
    Note: Enabling LACP is supported on AOS 5.10.10 when upgrading from Windows Server 2012 R2 to 2016. From AOS 5.17 and later, it is enabled for upgrades from Windows Server 2016 to 2019.

Recommendations

Nutanix recommends that you schedule a sufficiently long maintenance window to upgrade your Hyper-V clusters.

  • Budget sufficient time to upgrade: Depending on the number of VMs running on a node before the upgrade, a node could take more than 1.5 hours to upgrade. The total time to upgrade a Hyper-V cluster from Hyper-V 2012 R2 to Hyper-V 2016 is approximately the time per node multiplied by the number of nodes.
    Upgrading from Windows Server 2012 R2 to Windows Server 2019 can take longer, considering the following total time:
    1. Upgrading Windows Server 2012 R2 to Windows Server 2016
    2. Upgrading from AOS 5.16 to AOS 5.17
    3. Upgrading from Windows Server 2016 to Windows Server 2019
    During the upgrade process, Nutanix performs the following operations:
    • Each node in the cluster is reimaged by using Foundation (duration per node: approximately 45 minutes).
    • Each node is restarted once after the reimaging is complete.
    • The VMs are migrated out of the node that is being upgraded (duration: depends on the number of VMs and workloads running on that node).

Upgrading to Windows Server Version 2016 and 2019

About this task

Note:
  • It is possible that clusters running Windows Server 2012 R2 and AOS have time synchronization issues. Therefore, before you upgrade to Windows Server 2016 or Windows Server 2019 and AOS, make sure that the cluster is free from time synchronization issues.
  • Windows Server 2016 also implements Discrete Device Assignment (DDA) for passing through PCI Express devices to guest VMs. This feature is available in Windows Server 2019 too. Therefore, DiskMonitorService, which was used in earlier AOS releases for passing disks through to the CVM, no longer exists. For more information about DDA, see the Microsoft documentation.

Procedure

  1. Make sure that AOS, host, and hypervisor upgrade prerequisites are met.
    See Hyper-V Hypervisor Upgrade Recommendations and Requirements and the Acropolis Upgrade Guide.
  2. Upgrade AOS either by using the one-click upgrade procedure or by uploading the installation files manually. Both procedures are performed in the Prism web console.
    • After upgrading AOS and before upgrading your cluster hypervisor, perform a Life Cycle Manager (LCM) inventory, update LCM, and upgrade any recommended firmware. See the Life Cycle Manager documentation for more information.
    • See the Acropolis Upgrade Guide for more details, including recommended installation or upgrade order.
  3. Do one of the following if you want to manage your VMs with SCVMM:
    1. If you register the Hyper-V cluster with an SCVMM installation with a version earlier than 2016, do the following in any order:
      • Unregister the cluster from SCVMM.
      • Upgrade SCVMM to version 2016. See Microsoft documentation for this upgrade procedure.
        Note: Similarly, do the same when upgrading from Hyper-V 2016 to 2019. Upgrade SCVMM to version 2019 and register the cluster to SCVMM 2019.
    2. If you do not have SCVMM, deploy SCVMM 2016 / 2019. See Microsoft documentation for this installation procedure.
    Regardless of whether you deploy a new instance of SCVMM 2016 or you upgrade an existing SCVMM installation, do not register the Hyper-V cluster with SCVMM now. To minimize the steps in the overall upgrade workflow, register the cluster with SCVMM 2016 after you upgrade the Hyper-V hosts.
  4. If you are upgrading from Windows Server 2012 R2 to Windows Server 2016, then enable Kerberos. See Enabling Kerberos for Hyper-V.
  5. Upgrade the Hyper-V hosts.
  6. After the cluster is up, add the cluster to SCVMM 2016. The procedure for adding the cluster to SCVMM 2016 is the procedure used for earlier versions of SCVMM. See Registering a Cluster with SCVMM.
  7. Any log redirection (for example, SCOM log redirection) configurations are lost during the hypervisor upgrade process. Reconfigure log redirection.

System Center Virtual Machine Manager Configuration

System Center Virtual Machine Manager (SCVMM) is a management platform for Hyper-V clusters. Nutanix provides a utility for joining Hyper-V hosts to a domain and adding Hyper-V hosts and storage to SCVMM. If you cannot or do not want to use this utility, you must join the hosts to the domain and add the hosts and storage to SCVMM manually.

Note: The Validate Cluster feature of the Microsoft System Center VM Manager (SCVMM) is not supported for Nutanix clusters managed by SCVMM.

SCVMM Configuration

After joining the cluster and its constituent hosts to the domain and creating a failover cluster, you can configure SCVMM.

Registering a Cluster with SCVMM

Perform the following procedure to register a cluster with SCVMM.

Before you begin

  • Join the hosts in the Nutanix cluster to a domain manually or by following Adding the Cluster and Hosts to a Domain.
  • Make sure that the hosts are not registered with SCVMM.

Procedure

  1. Log on to any CVM in the cluster with SSH.
  2. Verify that the status of all services on all the CVMs is Up.
    nutanix@cvm$ cluster status
    If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix cluster.
    CVM: <host IP-Address> Up
                                    Zeus   UP       [9935, 9980, 9981, 9994, 10015, 10037]
                               Scavenger   UP       [25880, 26061, 26062]
                                  Xmount   UP       [21170, 21208]
                        SysStatCollector   UP       [22272, 22330, 22331]
                               IkatProxy   UP       [23213, 23262]
                        IkatControlPlane   UP       [23487, 23565]
                           SSLTerminator   UP       [23490, 23620]
                          SecureFileSync   UP       [23496, 23645, 23646]
                                  Medusa   UP       [23912, 23944, 23945, 23946, 24176]
                      DynamicRingChanger   UP       [24314, 24404, 24405, 24558]
                                  Pithos   UP       [24317, 24555, 24556, 24593]
                              InsightsDB   UP       [24322, 24472, 24473, 24583]
                                  Athena   UP       [24329, 24504, 24505]
                                 Mercury   UP       [24338, 24515, 24516, 24614]
                                  Mantle   UP       [24344, 24572, 24573, 24634]
                              VipMonitor   UP       [18387, 18464, 18465, 18466, 18474]
                                Stargate   UP       [24993, 25032]
                    InsightsDataTransfer   UP       [25258, 25348, 25349, 25388, 25391, 25393, 25396]
                                   Ergon   UP       [25263, 25414, 25415]
                                 Cerebro   UP       [25272, 25462, 25464, 25581]
                                 Chronos   UP       [25281, 25488, 25489, 25547]
                                 Curator   UP       [25294, 25528, 25529, 25585]
                                   Prism   UP       [25718, 25801, 25802, 25899, 25901, 25906, 25941, 25942]
                                     CIM   UP       [25721, 25829, 25830, 25856]
                            AlertManager   UP       [25727, 25862, 25863, 25990]
                                Arithmos   UP       [25737, 25896, 25897, 26040]
                                 Catalog   UP       [25749, 25989, 25991]
                               Acropolis   UP       [26011, 26118, 26119]
                                   Uhura   UP       [26037, 26165, 26166]
                                    Snmp   UP       [26057, 26214, 26215]
                       NutanixGuestTools   UP       [26105, 26282, 26283, 26299]
                              MinervaCVM   UP       [27343, 27465, 27466, 27730]
                           ClusterConfig   UP       [27358, 27509, 27510]
                                Aequitas   UP       [27368, 27567, 27568, 27600]
                             APLOSEngine   UP       [27399, 27580, 27581]
                                   APLOS   UP       [27853, 27946, 27947]
                                   Lazan   UP       [27865, 27997, 27999]
                                  Delphi   UP       [27880, 28058, 28060]
                                    Flow   UP       [27896, 28121, 28124]
                                 Anduril   UP       [27913, 28143, 28145]
                                   XTrim   UP       [27956, 28171, 28172]
                           ClusterHealth   UP       [7102, 7103, 27995, 28209,28495, 28496, 28503, 28510,	
    28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646, 28648, 28792,	
    28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135, 29142, 29146, 29150,	
    29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]
  3. Add the Nutanix hosts and storage to SCVMM.
    nutanix@cvm$ setup_hyperv.py setup_scvmm

    This script performs the following functions.

    • Adds the cluster to SCVMM.
    • Sets up the library share in SCVMM.
    • Unregisters the deleted storage containers from SCVMM.
    • Registers the new storage containers in SCVMM.

    Alternatively, you can specify all the parameters as given in the following steps as command-line arguments. If you do so, enclose the values in single quotation marks since the Controller VM shell does not otherwise correctly interpret the backslash (\).

    The utility prompts for the necessary parameters, for example:

    Getting the cluster configuration ... Done
    Getting information about each host ... Done
    The hosts are joined to domain hyperv.nutanix.com
    
    Please enter the domain account username that has local administrator rights on
    the hosts: hyperv.nutanix.com\Administrator
    Please enter the password for hyperv.nutanix.com\Administrator:
    Verifying credentials for accessing localhost ... Done
    
    Please enter the name of the SCVMM server: scvmmhyperv
    Getting the SCVMM server IP address ... 10.4.34.44
    Adding 10.4.34.44 to the IP address whitelist ... Done
    
    Please enter the domain account username (e.g. username@corp.contoso.com or
     CORP.CONTOSO.COM\username) that has administrator rights on the SCVMM server
    and is a member of the domain administrators group (press ENTER for hyperv.nutanix.com\Administrator):
    Verifying credentials for accessing scvmmhyperv ... Done
    
    Verifying SCVMM service account ... HYPERV\scvmm
    
    All nodes are already part of the Hyper-V failover cluster msfo-tulip.
    Preparing to join the Nutanix storage cluster to domain ... Already joined
    Creating an SCVMM run-as account ... hyperv-Administrator
    Verifying the DNS entry tulip.hyperv.nutanix.com -> 10.4.36.191 ... Done
    Verifying that the Hyper-V failover cluster IP address has been added to DNS ... 10.4.36.192
    Verifying SCVMM security settings ... Done
    Initiating adding the failover cluster to SCVMM ... Done
    Step 2 of adding the failover cluster to SCVMM ... Done
    Final step of adding the failover cluster to SCVMM ... Done
    Querying registered Nutanix library shares ... None
    Add a Nutanix share to the SCVMM library for storing VM templates, useful for deploying VMs using Fast File Copy ([Y]/N)? Y
    Querying the registered library servers ... Done
    Using library server scvmmhyperv.hyperv.nutanix.com.
    Please enter the name of the Nutanix library share to be created (press ENTER for "msfo-tulip-library"): 
    Creating container msfo-tulip-library ... Done
    Registering msfo-tulip-library as a library share with server scvmmhyperv.hyperv.nutanix.com in SCVMM ... Done
    Please enter the Prism password: 
    Registering the SMI-S provider with SCVMM ... Done
    Configuring storage in SCVMM ... Done
    Registered default-container-11962
    
    1. Type the domain account username and password.
      This username must include the fully-qualified domain name, for example hyperv.nutanix.com\Administrator .
    2. Type the SCVMM server name.
      The name must resolve to an IP address.
    3. Type the SCVMM username and password if they are different from the domain account; otherwise press Enter to use the domain account.
    4. Choose whether to create a library share.
      Add a Nutanix share to the SCVMM library for storing VM templates, useful for
       deploying VMs using Fast File Copy ([Y]/N)?

      If you choose to create a library share, output similar to the following is displayed.

      Querying the registered library servers ... Done
      Add a Nutanix share to the SCVMM library for storing VM templates, useful for deploying VMs using Fast File Copy ([Y]/N)? Y
      Querying the registered library servers ... Done
      Using library server scvmmhyperv.hyperv.nutanix.com.
      Please enter the name of the Nutanix library share to be created (press ENTER
       for "NTNX-HV-library"):
      Creating container NTNX-HV-library ... Done
      Registering NTNX-HV-library as a library share with server scvmmhyperv.hyperv.nutanix.com ... Done
      
      Finally the following output is displayed.
      Registering the SMI-S provider with SCVMM ... Done
      Configuring storage in SCVMM ... Done
      Registered share ctr1
      
      Setup complete.
    Note: You can also register the Nutanix cluster by using SCVMM. For more information, see Adding Hosts and Storage to SCVMM Manually (SCVMM User Interface).
    Warning: If you change the Prism password, you must change the Prism Run As account in SCVMM.

Adding Hosts and Storage to SCVMM Manually (SCVMM User Interface)

If you are unable to add hosts and storage to SCVMM by using the utility provided by Nutanix, you can add the hosts and storage to SCVMM by using the SCVMM user interface.

Before you begin

  • Verify that the SCVMM server IP address is on the cluster allowlist.
  • Verify that the SCVMM library server has a run-as account specified. Right-click the library server, click Properties , and ensure that Library management credential is populated.

Procedure

  1. Log into the SCVMM user interface and click VMs and Services .
  2. Right-click All Hosts and select Add Hyper-V Hosts and Clusters , and click Next .
    The Specify the Credentials to use for discovery screen appears.
  3. Click Browse and select an existing Run As Account or create a new Run As Account by clicking Create Run As Account . Click OK and then click Next .
    The Specify the search scope for virtual machine host candidates screen appears.
  4. Type the failover cluster name in the Computer names text box, and click Next .
  5. Select the failover cluster that you want to add, and click Next .
  6. Select the Reassociate this host with this VMM environment check box, and click Next .
    The Confirm the settings screen appears.
  7. Click Finish .
    Warning: If you are adding the cluster for the first time, the addition action fails with the following error message.
    Error (10400)
    Before Virtual Machine Manager can perform the current operation, the virtualization server must be restarted.

    Remove the cluster that you were adding and perform the same procedure again.

  8. Register a Nutanix SMB share as a library share in SCVMM by clicking Library and then adding the Nutanix SMB share.
    1. Right-click the Library Servers and click Add Library Shares .
    2. Click Add Unmanaged Share and type the SMB file share path, click OK , and click Next .
    3. Click Add Library Shares .
      If all the parameters are correct, the library share is added.
  9. Register the Nutanix SMI-S provider.
    1. Go to Settings > Security > Run As Accounts and click Create Run As Account .
    2. Enter the Prism user name and password, de-select Validate domain credentials , and click Finish .
      Note:

      Only local Prism accounts are supported. Even if AD authentication is configured in Prism, the SMI-S provider cannot use it for authentication.

    3. Go to Fabric > Storage > Providers .
    4. Right-click Providers and select Add Storage Devices .
    5. Select the SAN and NAS devices discovered and managed by a SMI-S provider check box, and click Next .
    6. Specify the protocol and address of the storage SMI-S provider.
      • In the Protocol drop-down menu, select SMI-S CIMXML .
      • In the Provider IP Address or FQDN text box, provide the Nutanix storage cluster name. For example, clus-smb .
        Note: The Nutanix storage cluster name is not the same as the Hyper-V cluster name. You should get the storage cluster name from the cluster details in the web console.
      • Select the Use Secure sockets layer SSL connection check box.
      • In the Run As Account field, click Browse and select the Prism Run As Account that you have created earlier, and click Next .
      Note: If you encounter the following error when attempting to add an SMI-S provider, see KB 5070:
      Could not retrieve a certificate from the <clustername> server because of the error:
      The request was aborted: Could not create SSL/TLS secure channel.
    7. Click Import to verify the identity of the storage provider.
      The discovery process starts and at the completion of the process, the storage is displayed.
    8. Click Next and select all the SMB shares exported by the Nutanix cluster except the library share and click Next .
    9. Click Finish .
      The newly added provider is displayed under Providers. Go to Storage > File Clusters to verify that the Managed column is Yes .
  10. Add the file shares to the Nutanix cluster by navigating to VMs and Services .
    1. Right-click the cluster name and select Properties .
    2. Go to File Share Storage , and click Add to add file shares to the cluster.
    3. From the File share path drop-down menu, select all the shares that you want to add, and click OK .
    4. Right-click the cluster and click Refresh . Wait for the refresh job to finish.
    5. Right-click the cluster name and select Properties > File Share Storage . You should see the access status with a green check mark, which means that the shares are successfully added.
    6. Select all the virtual machines in the cluster, right-click, and select Refresh .

SCVMM Operations

You can perform operational procedures on a Hyper-V node by using SCVMM, such as placing a host in maintenance mode.

Placing a Host in Maintenance Mode

If you try to place a host that is managed by SCVMM in maintenance mode, by default the Controller VM running on the host is placed in the saved state, which might create issues. Perform the following procedure to properly place a host in the maintenance mode.

Procedure

  1. Log into the Controller VM of the host that you are planning to place in maintenance mode by using SSH and shut down the Controller VM.
    nutanix@cvm$ cvm_shutdown -P now

    Wait for the Controller VM to completely shut down.

  2. Select the host and place it in the maintenance mode by navigating to the Host tab in the Host group and clicking Start Maintenance Mode .
    Wait for the operation to complete before performing any maintenance activity on the host.
  3. After the maintenance activity is completed, bring out the host from the maintenance mode by navigating to the Host tab in the Host group and clicking Stop Maintenance Mode .
  4. Start the Controller VM manually.
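For example, you can start the Controller VM from PowerShell on the host. The NTNX-*-CVM name pattern is an assumption; confirm the exact name with Get-VM.

    # Start the Controller VM on this host (confirm the VM name pattern with Get-VM first).
    > Get-VM | Where-Object { $_.Name -like "NTNX-*-CVM" } | Start-VM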

Migration Guide

AOS 5.20

Product Release Date: 2021-05-17

Last updated: 2022-07-14

This Document Has Been Removed

Nutanix Move is the Nutanix-recommended tool for migrating a VM. Please see the Move documentation at the Nutanix Support portal.


vSphere Administration Guide for Acropolis

AOS 5.20

Product Release Date: 2021-05-17

Last updated: 2022-08-04

Overview

Nutanix Enterprise Cloud delivers a resilient, web-scale hyperconverged infrastructure (HCI) solution built for supporting your virtual and hybrid cloud environments. The Nutanix architecture runs a storage controller called the Nutanix Controller VM (CVM) on every Nutanix node in a cluster to form a highly distributed, shared-nothing infrastructure.

All CVMs work together to aggregate storage resources into a single global pool that guest VMs running on the Nutanix nodes can consume. The Nutanix Distributed Storage Fabric manages storage resources to preserve data and system integrity if there is node, disk, application, or hypervisor software failure in a cluster. Nutanix storage also enables data protection and High Availability that keep critical data and guest VMs protected.

This guide describes the procedures and settings required to deploy a Nutanix cluster running in the VMware vSphere environment. To know more about the VMware terms referred to in this document, see the VMware Documentation.

Hardware Configuration

See the Field Installation Guide for information about how to deploy and create a Nutanix cluster running ESXi for your hardware. After you create the Nutanix cluster by using Foundation, use this guide to perform the management tasks.

Limitations

For information about ESXi configuration limitations, see the Nutanix Configuration Maximums webpage.

Nutanix Software Configuration

The Nutanix Distributed Storage Fabric aggregates local SSD and HDD storage resources into a single global unit called a storage pool. In this storage pool, you can create several storage containers, which the system presents to the hypervisor and uses to host VMs. You can apply a different set of compression, deduplication, and replication factor policies to each storage container.

Storage Pools

A storage pool on Nutanix is a group of physical disks from one or more tiers. Nutanix recommends configuring only one storage pool for each Nutanix cluster.

Replication factor
Nutanix supports a replication factor of 2 or 3. Setting the replication factor to 3 instead of 2 adds an extra data protection layer at the cost of more storage space for the copy. For use cases where applications provide their own data protection or high availability, you can set a replication factor of 1 on a storage container.
Containers
The Nutanix storage fabric presents usable storage to the vSphere environment as an NFS datastore. The replication factor of a storage container determines its usable capacity. For example, replication factor 2 tolerates one component failure and replication factor 3 tolerates two component failures. When you create a Nutanix cluster, three storage containers are created by default. Nutanix recommends that you do not delete these storage containers. You can rename the storage container named default - xxx and use it as the main storage container for hosting VM data.
Note: The available capacity and the vSphere maximum of 2,048 VMs limits the number of VMs a datastore can host.

Capacity Optimization

  • Nutanix recommends enabling inline compression unless otherwise advised.
  • Nutanix recommends disabling deduplication for all workloads except VDI.

    For mixed-workload Nutanix clusters, create a separate storage container for VDI workloads and enable deduplication on that storage container.

Nutanix CVM Settings

CPU
Keep the default settings as configured by the Foundation during the hardware configuration.

Change the CPU settings only if Nutanix Support recommends it.

Memory
Most workloads use less than 32 GB of RAM per CVM. However, for mission-critical workloads with large working sets, Nutanix recommends more than 32 GB of CVM RAM.
Tip: You can increase CVM RAM up to 64 GB using the Prism one-click memory upgrade procedure. For more information, see Increasing the Controller VM Memory Size in the Prism Web Console Guide .
Networking
The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1500 bytes for all the network interfaces by default. The standard 1500-byte MTU helps deliver excellent performance and stability. Nutanix does not support configuring the MTU on a network interface of CVMs to higher values.
Caution: Do not use jumbo frames for the Nutanix CVM.
Caution: Do not change the vSwitchNutanix or the internal vmk (VMkernel) interface.

Nutanix Cluster Settings

Nutanix recommends that you do the following.

  • Map a Nutanix cluster to only one vCenter Server.

    Due to the way the Nutanix architecture distributes data, there is limited support for mapping a Nutanix cluster to multiple vCenter Servers. Some Nutanix products (Move, Era, Calm, Files, Prism Central) and features (the disaster recovery solution) are unstable when a Nutanix cluster maps to multiple vCenter Servers.

  • Configure a Nutanix cluster with replication factor 2 or replication factor 3.
    Tip: Nutanix recommends using replication factor 3 for clusters with more than 16 nodes. Replication factor 3 requires at least five nodes so that the data remains online even if two nodes fail concurrently.
  • Use the advertised capacity feature to ensure that the resiliency capacity is equivalent to one node of usable storage for replication factor 2 or two nodes for replication factor 3.

    The advertised capacity of a storage container must equal the total usable cluster space minus the capacity of either one or two nodes. For example, in a 4-node cluster with 20 TB usable space per node with replication factor 2, the advertised capacity of the storage container must be 60 TB. That spares 20 TB capacity to sustain and rebuild one node for self-healing. Similarly, in a 5-node cluster with 20 TB usable space per node with replication factor 3, advertised capacity of the storage container must be 60 TB. That spares 40 TB capacity to sustain and rebuild two nodes for self-healing.

  • Use the default storage container and mount it on all the ESXi hosts in the Nutanix cluster.

    You can also create a single storage container. If you are creating multiple storage containers, ensure that all the storage containers follow the advertised capacity recommendation.

  • Configure the vSphere cluster according to settings listed in vSphere Cluster Settings Checklist.

Software Acceptance Level

The Foundation sets the software acceptance level of an ESXi image to CommunitySupported by default. If you need a higher software acceptance level, run the following command to raise it to the maximum acceptance level of PartnerSupported .

root@esxi# esxcli software acceptance set --level=PartnerSupported
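To confirm the current acceptance level afterwards, you can query it on the host.

root@esxi# esxcli software acceptance get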

Scratch Partition Settings

ESXi uses the scratch partition (/scratch) to dump the logs when it encounters a purple screen of death (PSOD) or a kernel dump. The Foundation install automatically creates this partition on the SATA DOM or M.2 device with the ESXi installation. Moving the scratch partition to any location other than the SATA DOM or M.2 device can cause issues with LCM, 1-click hypervisor updates, and the general stability of the Nutanix node.

vSphere Networking

vSphere on the Nutanix platform enables you to dynamically configure, balance, or share logical networking components across various traffic types. To ensure availability, scalability, performance, management, and security of your infrastructure, configure virtual networking when designing a network solution for Nutanix clusters.

You can configure networks according to your requirements. For detailed information about vSphere virtual networking and different networking strategies, refer to the Nutanix vSphere storage solution document. This chapter describes the configuration elements required to run VMware vSphere on the Nutanix enterprise infrastructure.

Virtual Networking Configuration Options

vSphere on Nutanix supports the following types of virtual switches.

vSphere Standard Switch (vSwitch)
vSphere Standard Switch (vSS) with Nutanix vSwitch is the default configuration for Nutanix deployments and suits most use cases. A vSwitch detects which VMs are connected to each virtual port and uses that information to forward traffic to the correct VMs. You can connect a vSwitch to physical switches by using physical Ethernet adapters (also referred to as uplink adapters) to join virtual networks with physical networks. This type of connection is similar to connecting physical switches together to create a larger network.
Tip: A vSwitch works like a physical Ethernet switch.
vSphere Distributed Switch (vDS)

Nutanix recommends vSphere Distributed Switch (vDS) coupled with network I/O control (NIOC version 2) and load-based teaming. This combination provides simplicity, ensures traffic prioritization if there is contention, and reduces operational management overhead. A vDS acts as a single virtual switch across all associated hosts on a datacenter. It enables VMs to maintain consistent network configuration as they migrate across multiple hosts. For more information about vDS, see NSX-T Support on Nutanix Platform.

Nutanix recommends setting all vNICs as active on the port group and dvPortGroup unless otherwise specified. The following table lists the naming convention, port groups, and the corresponding VLAN Nutanix uses for various traffic types.

Table 1. Port Groups and Corresponding VLAN
Port group         VLAN   Description
MGMT_10            10     VM kernel interface for host management traffic
VMOT_20            20     VM kernel interface for vMotion traffic
FT_30              30     Fault tolerance traffic
VM_40              40     VM traffic
VM_50              50     VM traffic
NTNX_10            10     Nutanix CVM to CVM cluster communication traffic (public interface)
Svm-iscsi-pg       N/A    Nutanix CVM to internal host traffic
VMK-svm-iscsi-pg   N/A    VM kernel port for CVM to hypervisor communication (internal)

All Nutanix configurations use an internal-only vSwitch for the NFS communication between the ESXi host and the Nutanix CVM. This vSwitch remains unmodified regardless of the virtual networking configuration for ESXi management, VM traffic, vMotion, and so on.

Caution: Do not modify the internal-only vSwitch (vSwitch-Nutanix). vSwitch-Nutanix facilitates communication between the CVM and the internal hypervisor.

VMware NSX Support

Running VMware NSX on Nutanix infrastructure ensures that VMs always have access to fast local storage and compute, consistent network addressing and security without the burden of physical infrastructure constraints. The supported scenario connects the Nutanix CVM to a traditional VLAN network, with guest VMs inside NSX virtual networks. For more information, see the Nutanix vSphere storage solution document.

NSX-T Support on Nutanix Platform

The Nutanix platform relies on communication with vCenter to work with networks backed by vSphere Standard Switch (vSS) or vSphere Distributed Switch (vDS). With the introduction of a new management plane that makes network management agnostic of the compute manager (vCenter), network configuration information is available through the NSX-T Manager. To collect the network configuration information from the NSX-T Manager, you must modify the Nutanix infrastructure workflows (AOS upgrades, LCM upgrades, and so on).

Figure. Nutanix and the NSX-T Workflow Overview

The Nutanix platform supports the following in the NSX-T configuration.

  • ESXi hypervisor only.
  • vSS and vDS virtual switch configurations.
  • Nutanix CVM connection to VLAN backed NSX-T segments only.
  • The NSX-T Manager credentials registration using the CLI.

The Nutanix platform does not support the following in the NSX-T configuration.

  • Network segmentation with N-VDS.
  • Nutanix CVM connection to overlay NSX-T segments.
  • Link Aggregation/LACP for the uplinks backing the NVDS host switch connecting Nutanix CVMs.
  • The NSX-T Manager credentials registration through Prism.

NSX-T Segments

Nutanix supports NSX-T logical segments co-existing on Nutanix clusters running ESXi hypervisors. All infrastructure workflows, including the use of Foundation, 1-click upgrades, and AOS upgrades, are validated to work in NSX-T configurations where the CVM is backed by an NSX-T VLAN logical segment.

NSX-T has the following types of segments.

VLAN backed
VLAN backed segments operate similarly to a standard port group in a vSphere switch. A port group is created on the NVDS, and VMs that are connected to the port group have their network packets tagged with the configured VLAN ID.
Overlay backed
Overlay backed segments use the Geneve overlay to create a logical L2 network over L3 network. Encapsulation occurs at the transport layer (which is the NVDS module on the host).

Multicast Filtering

Enabling multicast snooping on a vDS with a Nutanix CVM attached affects the ability of the CVM to discover and add new nodes to the Nutanix cluster (the cluster expand option in Prism and the Nutanix CLI).

Creating Segment for NVDS

This procedure provides details about creating a segment for nVDS.

About this task

To check the vSwitch configuration of the host and verify that the NSX-T network supports the CVM port group, perform the following steps.

Procedure

  1. Log on to the vCenter Server and go to the NSX-T Manager.
  2. Click Networking , and go to Connectivity > Segments in the left pane.
  3. Click ADD SEGMENT under the SEGMENTS tab on the right pane and specify the following information.
    Figure. Create New Segment

    1. Segment Name : Enter a name for the segment.
    2. Transport Zone : Select the VLAN-based transport zone.
      This transport name is associated with the Transport Zone when configuring the NSX switch .
    3. VLAN : Enter the number 0 for native VLAN.
  4. Click Save to create a segment for NVDS.
  5. Click Yes when the system prompts to continue with configuring the segment.
    The newly created segment appears below the prompt.
    Figure. New Segment Created

Creating NVDS Switch on the Host by Using NSX-T Manager

This procedure provides instructions to create an NVDS switch on the ESXi host. The management interface and the CVM external interface of the host are migrated to the NVDS switch.

About this task

To create an NVDS switch and configure the NSX-T Manager, do the following.

Procedure

  1. Log on to NSX-T Manager.
  2. Click System , and go to Configuration > Fabric > Nodes in the left pane.
    Figure. Add New Node

  3. Click ADD HOST NODE under the HOST TRANSPORT NODES in the right pane.
    1. Specify the following information in the Host Details dialog box.
      Figure. Add Host Details

        1. Name : Enter an identifiable ESXi host name.
        2. Host IP : Enter the IP address of the ESXi host.
        3. Username : Enter the username used to log on to the ESXi host.
        4. Password : Enter the password used to log on to the ESXi host.
        5. Click Next to move to the NSX configuration.
    2. Specify the following information in the Configure NSX dialog box.
      Figure. Configure NSX

        1. Mode : Select the Standard option.

          Nutanix recommends the Standard mode only.

        2. Name : Displays the default name of the virtual switch that appears on the host. You can edit the default name and provide an identifiable name as per your configuration requirements.
        3. Transport Zone : Select the transport zone that you selected in Creating Segment for NVDS.

          These segments operate similarly to a standard port group in a vSphere switch. A port group is created on the NVDS, and VMs that are connected to the port group have their network packets tagged with the configured VLAN ID.

        4. Uplink Profile : Select an uplink profile for the new nVDS switch.

          This selected uplink profile represents the NICs connected to the host. For more information about uplink profiles, see the VMware Documentation .

        5. LLDP Profile : Select the LLDP profile for the new nVDS switch.

          For more information about LLDP profiles, see the VMware Documentation .

        6. Teaming Policy Uplink Mapping : Map the uplinks with the physical NICs of the ESXi host.
          Note: To verify the active physical NICs on the host, select ESXi host > Configure > Networking > Physical Adapters .

          Click the Edit icon and enter the name of the active physical NIC in the ESXi host selected for migration to the NVDS.

          Note: Always migrate one physical NIC at a time to avoid connectivity failure with the ESXi host.
        7. PNIC only Migration : Set the toggle to Yes if there are no VMkernel adapters (vmks) associated with the PNIC selected for migration from the vSS switch to the NVDS switch.
        8. Network Mapping for Install : Click Add Mapping to migrate the VMkernel adapters (vmks) to the NVDS switch.
        9. Network Mapping for Uninstall : Use this option to revert the migration of the VMkernel adapters.
  4. Click Finish to add the ESXi host to the NVDS switch.
    The newly created nVDS switch appears on the ESXi host.
    Figure. NVDS Switch Created
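
As an optional check outside the NSX-T Manager workflow, you can confirm from the ESXi shell that the vmk interfaces were migrated. These are standard esxcli commands; the switch name in the output depends on the Name you entered while configuring NSX, and whether the N-VDS is listed alongside regular distributed switches varies by ESXi release.

    root@esx# esxcli network ip interface list         # shows the switch (portset) each vmk interface is attached to
    root@esx# esxcli network vswitch dvs vmware list   # lists the distributed switches known to the host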

Registering NSX-T Manager with Nutanix

After migrating the external interface of the host and the CVM to the NVDS switch, it is mandatory to inform Genesis about the registration of the cluster with the NSX-T Manager. This registration helps Genesis communicate with the NSX-T Manager and avoid failures during LCM, 1-click, and AOS upgrades.

About this task

This procedure demonstrates the AOS upgrade error encountered when the NSX-T Manager is not registered with Nutanix, and shows how to register the NSX-T Manager with Nutanix to resolve the issue.

To register the NSX-T Manager with Nutanix, do the following.

Procedure

  1. Log on to the Prism Element web console.
  2. Select VM > Settings > Upgrade Software > Upgrade > Pre-upgrade to upgrade AOS on the host.
    Figure. Upgrade AOS

  3. The upgrade process throws an error if the NSX-T Manager is not registered with Nutanix.
    Figure. AOS Upgrade Error for Unregistered NSX-T

    The AOS upgrade determines whether NSX-T networks back the CVM and its VLAN, and then attempts to get the VLAN information of those networks. To get the VLAN information for the CVM, the NSX-T Manager information must be configured in the Nutanix cluster.

  4. To fix this upgrade issue, log on to a Controller VM (CVM) in the cluster with SSH.
  5. Access the cluster directory.
    nutanix@cvm$ cd ~/cluster/bin
  6. Verify if the NSX-T Manager was registered with the CVM earlier.
    nutanix@cvm$ ~/cluster/bin$ ./nsx_t_manager -l

    If the NSX-T Manager was not registered earlier, you get the following message.

    No NSX-T manager configured in the cluster
  7. Register the NSX-T Manager with the CVM if it was not registered earlier. Specify the credentials of the NSX-T Manager to the CVM.
    nutanix@cvm$ ~/cluster/bin$ ./nsx_t_manager -a
    IP address: 10.10.10.10
    Username: admin
    Password: 
    /usr/local/nutanix/cluster/lib/py/requests-2.12.0-py2.7.egg/requests/packages/urllib3/connectionpool.py:843:
     InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. 
    See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
    Successfully persisted NSX-T manager information
  8. Verify the registration of NSX-T Manager with the CVM.
    nutanix@cvm$ ~/cluster/bin$ ./nsx_t_manager -l

    If there are no errors, the system displays a similar output.

    IP address: 10.10.10.10
    Username: admin
  9. In the Prism Element web console, click Pre-upgrade to continue the AOS upgrade procedure.

    The AOS upgrade is completed successfully.

Networking Components

IP Addresses

All CVMs and ESXi hosts have the network interfaces listed in the following table.
Note: An empty interface eth2 is created on the CVM during deployment by Foundation. The eth2 interface is used for the backplane when backplane traffic isolation (Network Segmentation) is enabled in the cluster. For more information about the backplane interface and traffic segmentation, see the Securing Traffic Through Network Segmentation section in the Security Guide .
Interface IP address vSwitch
ESXi host vmk0 User-defined vSwitch0
CVM eth0 User-defined vSwitch0
ESXi host vmk1 192.168.5.1 vSwitchNutanix
CVM eth1 192.168.5.2 vSwitchNutanix
CVM eth1:1 192.168.5.254 vSwitchNutanix
CVM eth2 User-defined vSwitch0
Note: The ESXi and CVM interfaces on vSwitch0 cannot use IP addresses in any subnets that overlap with subnet 192.168.5.0/24.
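
To cross-check the addresses in the preceding table on a running node, list the interfaces from the ESXi shell and from the CVM. These are standard commands; the user-defined addresses shown depend on your environment.

    root@esx# esxcli network ip interface ipv4 get   # vmk0 and vmk1 with their IPv4 addresses
    nutanix@cvm$ ip addr show                        # eth0, eth1 (with the 192.168.5.254 alias), and eth2 on the CVM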

vSwitches

A Nutanix node is configured with the following two vSwitches.

  • vSwitchNutanix

    Local communications between the CVM and the ESXi host use vSwitchNutanix. vSwitchNutanix has no uplinks.

    Caution: To manage network traffic between VMs with greater control, create more port groups on vSwitch0. Do not modify vSwitchNutanix.
    Figure. vSwitchNutanix Configuration

  • vSwitch0

    All other external communications, such as CVM traffic to a different host (in case of HA redirection), use vSwitch0, which has uplinks to the physical network interfaces. Because network segmentation is disabled by default, the backplane traffic also uses vSwitch0.

    vSwitch0 has the following two networks.

    • Management Network

      HA, vMotion, and vCenter communications use the Management Network.

    • VM Network

      All VMs use the VM Network.

    Caution:
    • The Nutanix CVM uses the standard Ethernet maximum transmission unit (MTU) of 1,500 bytes for all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance and stability. Nutanix does not support configuring the MTU on a network interface of CVMs to higher values.
    • You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of ESXi hosts and guest VMs if the applications on your guest VMs require them. If you choose to use jumbo frames on hypervisor hosts, ensure that you enable them end to end in the desired network and consider both the physical and virtual network infrastructure impacted by the change.
    Figure. vSwitch0 Configuration
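
To review the current standard vSwitch layout on a host, including uplinks, MTU, and port groups, you can use the following standard esxcli commands. This is an optional check; the output reflects your own configuration.

    root@esx# esxcli network vswitch standard list             # vSwitch0 and vSwitchNutanix with their uplinks and MTU
    root@esx# esxcli network vswitch standard portgroup list   # port groups and the vSwitch each one belongs to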

Configuring Host Networking (Management Network)

After you create the Nutanix cluster by using Foundation, configure networking for your ESXi hosts.

About this task

Figure. Configure Management Network

Procedure

  1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.
  2. Press the down arrow key until Configure Management Network highlights and then press Enter .
  3. Select Network Adapters and then press Enter .
  4. Ensure that the connected network adapters are selected.
    If they are not selected, press the Space key to select them and press the Enter key to return to the previous screen.
    Figure. Network Adapters
  5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press Enter . In the dialog box, provide the VLAN ID and press Enter .
    Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.
  6. Select IP Configuration and press Enter .
    Figure. Configure Management Network
  7. If necessary, highlight the Set static IP address and network configuration option and press Space to update the setting.
  8. Provide values for the IP Address , Subnet Mask , and Default Gateway fields based on your environment and then press Enter .
  9. Select DNS Configuration and press Enter .
  10. If necessary, highlight the Use the following DNS server addresses and hostname option and press Space to update the setting.
  11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your environment and then press Enter .
  12. Press Esc and then Y to apply all changes and restart the management network.
  13. Select Test Management Network and press Enter .
  14. Press Enter to start the network ping test.
  15. Verify that the default gateway and DNS servers reported by the ping test match those that you specified earlier in the procedure and then press Enter .

    Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP addresses are configured.

    Figure. Test Management Network

    Press Enter to close the test window.

  16. Press Esc to log off.
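
You can optionally verify the new management network settings from the ESXi shell by using the following standard esxcli commands. Replace gateway-ip with your default gateway address.

    root@esx# esxcli network ip interface ipv4 get -i vmk0   # static IP address and netmask of the management interface
    root@esx# esxcli network ip route ipv4 list              # default gateway
    root@esx# esxcli network ip dns server list              # DNS servers
    root@esx# vmkping gateway-ip                             # basic reachability test from the VMkernel interface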

Changing a Host IP Address

About this task

To change a host IP address, perform the following steps once for each hypervisor host in the Nutanix cluster. Complete the entire procedure on one host before proceeding to the next host.
Caution: The cluster cannot tolerate duplicate host IP addresses. For example, when swapping IP addresses between two hosts, temporarily change one host IP address to an interim unused IP address. Changing this IP address avoids having two hosts with identical IP addresses on the cluster. Then complete the address change or swap on each host using the following steps.
Note: All CVMs and hypervisor hosts must be on the same subnet. The hypervisor can be multihomed provided that one interface is on the same subnet as the CVM.

Procedure

  1. Configure networking on the Nutanix node. For more information, see Configuring Host Networking (Management Network).
  2. Update the host IP addresses in vCenter. For more information, see Reconnecting a Host to vCenter.
  3. Log on to every CVM in the Nutanix cluster and restart the Genesis service.
    nutanix@cvm$ genesis restart

    If the restart is successful, output similar to the following is displayed.

    Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
    Genesis started on pids [30378, 30379, 30380, 30381, 30403]
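
After Genesis restarts on all CVMs, you can confirm that the cluster reports the updated host IP address by using the following standard CVM commands.

    nutanix@cvm$ hostips       # lists the hypervisor host IP addresses known to the cluster
    nutanix@cvm$ ncli host ls  # shows per-host details, including the management IP address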

Reconnecting a Host to vCenter

About this task

If you modify the IP address of a host, you must reconnect the host to vCenter. To reconnect the host, perform the following procedure.

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the host with the changed IP address and select Disconnect .
  3. Right-click the host again and select Remove from Inventory .
  4. Right-click the Nutanix cluster and then click Add Hosts... .
    1. Enter the IP address or fully qualified domain name (FQDN) of the host you want to reconnect in the IP address or FQDN under New hosts .
    2. Enter the host logon credentials in the User name and Password fields, and click Next .
      If a security or duplicate management alert appears, click Yes .
    3. Review the Host Summary and click Next .
    4. Click Finish .
    You can see the host with the updated IP address in the left pane of vCenter.

Selecting a Management Interface

Nutanix tracks the management IP address for each host and uses that IP address to open an SSH session into the host to perform management activities. If the selected vmk interface is not accessible through SSH from the CVMs, activities that require interaction with the hypervisor fail.

If multiple vmk interfaces are present on a host, Nutanix uses the following rules to select a management interface.

  1. Assigns weight to each vmk interface.
    • If the vmk interface is configured for management traffic under the ESXi network settings, the assigned weight is 4. Otherwise, the assigned weight is 0.
    • If the IP address of the vmk interface is in the same IP subnet as the eth0 interface of the CVM, 2 is added to its weight.
    • If the IP address of the vmk interface is in the same IP subnet as the eth2 interface of the CVM, 1 is added to its weight.
  2. The vmk interface that has the highest weight is selected as the management interface.

Example of Selection of Management Network

Consider an ESXi host with the following configuration.

  • vmk0 IP address and mask: 2.3.62.204, 255.255.255.0
  • vmk1 IP address and mask: 192.168.5.1, 255.255.255.0
  • vmk2 IP address and mask: 2.3.63.24, 255.255.255.0

Consider a CVM with the following configuration.

  • eth0 inet address and mask: 2.3.63.31, 255.255.255.0
  • eth2 inet address and mask: 2.3.62.12, 255.255.255.0

According to the rules, the following weights are assigned to the vmk interfaces.

  • vmk0 = 4 + 0 + 1 = 5
  • vmk1 = 0 + 0 + 0 = 0
  • vmk2 = 0 + 2 + 0 = 2

Since vmk0 has the highest weight, the vmk0 interface is used as the management interface for the ESXi host.

To verify that the vmk0 interface is selected as the management interface, use the following command.

root@esx# esxcli network ip interface tag get -i vmk0

You see the following output.

Tags: Management, VMotion

For the other two interfaces, no tags are displayed.
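
If you want to gather the inputs for the weighting rules yourself, you can list the IPv4 configuration and tags of every vmk interface with the following standard esxcli commands.

    root@esx# esxcli network ip interface ipv4 get        # IP address and netmask of each vmk interface
    root@esx# esxcli network ip interface tag get -i vmk1
    root@esx# esxcli network ip interface tag get -i vmk2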

If you want any other interface to act as the management interface, enable management traffic on that interface by following the procedure described in Selecting a New Management Interface.

Selecting a New Management Interface

You can mark a vmk interface as the management interface on an ESXi host by using the following method.

Procedure

  1. Log on to vCenter with the web client.
  2. Do the following on the ESXi host.
    1. Go to Configure > Networking > VMkernel adapters .
    2. Select the interface on which you want to enable the management traffic.
    3. Click Edit settings of the port group to which the vmk belongs.
    4. Select the Management check box under the Enabled services option to enable management traffic on the vmk interface.
  3. Open an SSH session to the ESXi host and enable the management traffic on the vmk interface.
    root@esx# esxcli network ip interface tag add -i vmkN --tagname=Management

    Replace vmkN with the vmk interface where you want to enable the management traffic.

Updating Network Settings

After you configure networking of your vSphere deployments on Nutanix Enterprise Cloud, you may want to update the network settings.

  • To know about the best practice of ESXi network teaming policy, see Network Teaming Policy.

  • To migrate an ESXi host networking from a vSphere Standard Switch (vSwitch) to a vSphere Distributed Switch (vDS) with LACP/LAG configuration, see Migrating to a New Distributed Switch with LACP/LAG.

  • To migrate an ESXi host networking from a vSphere standard switch (vSwitch) to a vSphere Distributed Switch (vDS) without LACP, see Migrating to a New Distributed Switch without LACP/LAG.


Network Teaming Policy

On an ESXi host, a NIC teaming policy allows you to bundle two or more physical NICs into a single logical link to provide network bandwidth aggregation and link redundancy to a vSwitch. The NIC teaming policies in the ESXi networking configuration for a vSwitch consist of the following.

  • Route based on originating virtual port.
  • Route based on IP hash.
  • Route based on source MAC hash.
  • Explicit failover order.

In addition to the NIC teaming policies mentioned earlier, vDS offers an additional teaming policy: Route based on physical NIC load.

When Foundation or Phoenix imaging is performed on a Nutanix cluster, the following two standard virtual switches are created on ESXi hosts:

  • vSwitch0
  • vSwitchNutanix

On vSwitch0, the Nutanix best practice guide (see Nutanix vSphere Networking Solution Document) provides the following recommendations for NIC teaming:

  • vSwitch. Route based on originating virtual port
  • vDS. Route based on physical NIC load

On vSwitchNutanix, there are no uplinks to the virtual switch, so there is no NIC teaming configuration required.
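
On a standard vSwitch, you can review or change the active teaming policy from the ESXi shell with the following standard esxcli commands; the value portid corresponds to Route based on originating virtual port. The vDS policy (Route based on physical NIC load) is configured on the distributed port groups in vCenter rather than with esxcli.

    root@esx# esxcli network vswitch standard policy failover get -v vSwitch0            # current load balancing and failover settings
    root@esx# esxcli network vswitch standard policy failover set -v vSwitch0 -l portid  # Route based on originating virtual port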

Migrate from a Standard Switch to a Distributed Switch

This topic provides detailed information about how to migrate from a vSphere Standard Switch (vSS) to a vSphere Distributed Switch (vDS).

The following are the two types of virtual switches (vSwitch) in vSphere.

  • vSphere standard switch (vSwitch) (see vSphere Standard Switch (vSwitch) in vSphere Networking).
  • vSphere Distributed Switch (vDS) (see vSphere Distributed Switch (vDS) in vSphere Networking).
Tip: For more information about vSwitches and the associated network concepts, see the VMware Documentation .

For migrating from a vSS to a vDS with LACP/LAG configuration, see Migrating to a New Distributed Switch with LACP/LAG.

For migrating from a vSS to a vDS without LACP/LAG configuration, see Migrating to a New Distributed Switch without LACP/LAG.

Standard Switch Configuration

The standard switch configuration consists of the following.

vSwitchNutanix
vSwitchNutanix handles internal communication between the CVM and the ESXi host. There are no uplink adapters associated with this vSwitch. The only members of its port group must be the CVM and its host. Administrators must not modify this virtual switch or its port groups, because doing so can disrupt communication between the host and the CVM.
vSwitch0
vSwitch0 consists of the vmk (VMkernel) management interface, vMotion interface, and VM port groups. This virtual switch connects to uplink network adapters that are plugged into a physical switch.

Planning the Migration

It is important to plan and understand the migration process. An incorrect configuration can disrupt communication, which can require downtime to resolve.

Consider the following while or before planning the migration.

  • Read Nutanix Best Practice Guide for VMware vSphere Networking .

  • Understand the various teaming and load-balancing algorithms on vSphere.

    For more information, see the VMware Documentation .

  • Confirm communication on the network through all the connected uplinks.
  • Confirm access to the host using IPMI when there are network connectivity issues during migration.

    Access the host to troubleshoot the network issue or move the management network back to the standard switch depending on the issue.

  • Confirm that the hypervisor external management IP address and the CVM IP address are in the same public subnet for the data path redundancy functionality to work.
  • When performing migration to the distributed switch, migrate one host at a time and verify that networking is working as desired.
  • Do not migrate the port groups and vmk (VMkernel) interfaces that are on vSwitchNutanix to the distributed switch (dvSwitch).

Unassigning Physical Uplink of the Host for Distributed Switch

All the physical adapters connect to vSwitch0 of the host. A live distributed switch must have a physical uplink connected to it to work. To use a physical adapter of the host with the distributed switch, first unassign the adapter from vSwitch0 and then assign it to the new distributed switch.

About this task

To unassign the physical uplink of the host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Networking > Virtual Switches .
  4. Click the MANAGE PHYSICAL ADAPTERS tab and, under Assigned adapters , select the active adapters that you want to unassign from the host.
    Figure. Managing Physical Adapters

  5. Click X on the top.
    The selected adapter is unassigned from the list of physical adapters of the host.
    Tip: Ping the host to check and confirm if you are able to communicate with the active physical adapter of the host. If you lose network connectivity to the ESXi host during this test, review your network configuration.
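
A minimal connectivity check, assuming the host management IP address is reachable from the CVM network, looks like the following. Replace host-management-ip with the management IP address of the ESXi host.

    nutanix@cvm$ ping -c 3 host-management-ip   # confirm that the host is still reachable after removing the uplink
    root@esx# esxcli network nic list           # confirm the link state of the remaining physical NICs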

Migrating to a New Distributed Switch without LACP/LAG

Migrating to a new distributed switch without LACP/LAG consists of the following workflow.

  1. Creating a Distributed Switch
  2. Creating Port Groups on the Distributed Switch
  3. Configuring Port Group Policies

Creating a Distributed Switch

Connect to vCenter and create a distributed switch.

About this task

To create a distributed switch, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Distributed Switch Creation

  3. Right-click the host, select Distributed Switch > New Distributed Switch , and specify the following information in the New Distributed Switch dialog box.
    1. Name and Location : Enter a name for the distributed switch.
    2. Select Version : Select a distributed switch version that is compatible with all your hosts in that datacenter.
    3. Configure Settings : Select the number of uplinks that you want to connect to the distributed switch.
      Select the Create a default port group checkbox to create a port group. To configure a port group later, see Creating Port Groups on the Distributed Switch.
    4. Ready to complete : Review the configuration and click Finish .
    A new distributed switch is created with the default uplink port group. The uplink port group is the port group to which the uplinks connect. This port group is different from the vmk (VMkernel) and VM port groups.
    Figure. New Distributed Switch Created in the Host

Creating Port Groups on the Distributed Switch

Create one or more vmk (VMkernel) port groups and VM port groups depending on the vSphere features you plan to use and the physical network layout. The best practice is to have the vmk Management interface, vmk vMotion interface, and vmk iSCSI interface on separate port groups.

About this task

To create port groups on the distributed switch, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Creating Distributed Port Groups

  3. Right-click the host, select Distributed Switch > Distributed Port Group > New Distributed Port Group , and follow the wizard to create the remaining distributed port group (vMotion interface and VM port groups).
    You need the following port groups because you are migrating from the standard switch to the distributed switch.
    • VMkernel Management interface . Use this port group to connect to the host for all management operations.
    • VMNetwork . Use this port group to connect the new VMs.
    • vMotion . This port group is an internal interface that the host uses for vMotion traffic.
    Note: Nutanix recommends using static port binding instead of ephemeral port binding when you create a port group.
    Figure. Distributed Port Groups Created

    Note: The port group for the vmk management interface is created during the distributed switch creation. See Creating a Distributed Switch for more information.

Configuring Port Group Policies

To configure port groups, you must configure VLANs, teaming and failover, and other distributed port group policies at the port group layer or at the distributed switch layer. Refer to the following topics to configure the port group policies.

  1. Configuring Policies on the Port Group Layer
  2. Configuring Policies on the Distributed Switch Layer
  3. Adding ESXi Host to the Distributed Switch

Configuring Policies on the Port Group Layer

Ensure that the distributed switch port groups have VLANs tagged if the physical adapters of the host have a VLAN tagged to them. Update the port group policies, VLANs, and teaming algorithms to match the configuration of the physical network switch. Configure the load balancing policy according to the network configuration requirements of the physical switch.

About this task

To configure the port group policies, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Configure Port Group Policies on the Distributed Switch

  3. Right-click the host, select Distributed Switch > Distributed Port Group > Edit Settings , and follow the wizard to configure the VLAN, Teaming and failover, and other options.
    Note: For more information about configuring port group policies, see the VMware Documentation .
  4. Click OK to complete the configuration.
  5. Repeat steps 2–4 to configure the other port groups.
Configuring Policies on the Distributed Switch Layer

You can configure the same policy for all the port groups simultaneously.

About this task

To configure the same policy for all the port groups, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Manage Distributed Port Groups

  3. Right-click the host, select Distributed Switch > Distributed Port Group > Manage Distributed Port Groups , and specify the following information in Manage Distributed Port Group dialog box.
    1. In the Select port group policies tab, select the port group policies that you want to configure and click Next .
      Note: For more information about configuring port group policies, see the VMware Documentation .
    2. In the Select port groups tab, select the distributed port groups on which you want to configure the policy and click Next .
    3. In the Teaming and failover tab, configure the Load balancing policy, Active uplinks , and click Next .
    4. In the Ready to complete window, review the configuration and click Finish .
Adding ESXi Host to the Distributed Switch

Migrate the management interface and CVM of the host to the distributed switch.

About this task

To migrate the Management interface and CVM of the ESXi host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Add ESXi Host to Distributed Switch

  3. Right-click the host, select Distributed Switch > Add and Manage Hosts , and specify the following information in Add and Manage Hosts dialog box.
    1. In the Select task tab, select Add hosts to add a new host to the distributed switch and click Next .
    2. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.
      Note: Add one host at a time to the distributed switch and then migrate all the CVMs from the host to the distributed switch.
    3. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.
      Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same uplink on the distributed switch.
        1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink .
          Figure. Select Physical Adapter for Uplinking

          Important: If you select physical NICs connected to other switches, those physical NICs migrate to the current distributed switch.
        2. Select the Uplink in the distributed switch to which you want to assign the PNIC of the host and click OK .
        3. Click Next .
    4. In the Manage VMkernel adapters tab, configure the vmk adapters.
        1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port group .
        2. Select the port group in the distributed switch to which you want to assign the VMkernel of the host and click OK .
          Figure. Select a Port Group

        3. Click Next .
    5. (optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect all the network adapters of a VM to a distributed port group.
        1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an individual network adapter to connect with the distributed port group.
        2. Click Assign port group and select the distributed port group to which you want to migrate the VM or network adapter and click OK .
        3. Click Next .
    6. In the Ready to complete tab, review the configuration and click Finish .
  4. Go to the Hosts and Clusters view in the vCenter web client and go to Hosts > Configure to review the network configuration for the host.
    Note: Run a ping test to confirm that the networking on the host works as expected.
  5. Repeat steps 2–4 to add the remaining hosts to the distributed switch and migrate their adapters.
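
After each host is added, you can confirm from the ESXi shell and the CVM that the migration succeeded before moving to the next host. These are standard commands; replace host-management-ip with the management IP address of the host that you migrated.

    root@esx# esxcli network vswitch dvs vmware list   # distributed switch, its uplinks, and attached port groups
    root@esx# esxcli network ip interface list         # the switch each vmk interface is now attached to
    nutanix@cvm$ ping -c 3 host-management-ip          # reachability of the migrated management interface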

Migrating to a New Distributed Switch with LACP/LAG

Migrating to a new distributed switch with LACP/LAG consists of the following workflow.

  1. Creating a Distributed Switch
  2. Creating Port Groups on the Distributed Switch
  3. Creating Link Aggregation Group on Distributed Switch
  4. Creating Port Groups to use the LAG
  5. Adding ESXi Host to the Distributed Switch

Creating Link Aggregation Group on Distributed Switch

Using a Link Aggregation Group (LAG) on a distributed switch, you can connect the ESXi host to physical switches by using dynamic link aggregation. You can create multiple LAGs on a distributed switch to aggregate the bandwidth of physical NICs on ESXi hosts that are connected to LACP port channels.

About this task

To create a LAG, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
  3. Right-click the host, select Distributed Switch > Configure > LACP .
    Figure. Create LAG on Distributed Switch

  4. Click New and enter the following details in the New Link Aggregation Group dialog box.
    1. Name : Enter a name for the LAG.
    2. Number of Ports : Enter the number of ports.
      The number of ports must match the number of physical ports per host in the LACP LAG. For example, if the Number of Ports is two, you can attach two physical ports per ESXi host to the LAG.
    3. Mode : Specify the state of the physical switch.
      Based on the configuration requirements, you can set the mode to Active or Passive .
    4. Load balancing mode : Specify the load balancing mode for the physical switch.
      For more information about the various load balancing options, see the VMware Documentation .
    5. VLAN trunk range : Specify the VLANs if you have VLANs configured in your environment.
  5. Click OK .
    The LAG is created on the distributed switch.
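
After you later attach the host and its physical NICs to this LAG (see Adding ESXi Host to the Distributed Switch), you can optionally check LACP negotiation from the ESXi shell. The lacp namespace shown below is assumed to be available on the ESXi releases covered by this guide; if the command is not present in your release, verify the port-channel state from the physical switch instead.

    root@esx# esxcli network vswitch dvs vmware lacp status get   # per-uplink LACP negotiation state (assumed command; availability varies by release)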

Creating Port Groups to Use LAG

To use the LAG as the uplink, you have to edit the settings of the port groups created on the distributed switch.

About this task

To edit the settings on the port group to use LAG, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
  3. Right-click the host and select Management Port Group > Edit Settings .
  4. Go to the Teaming and failover tab in the Edit Settings dialog box and specify the following information.
    Figure. Configure the Management Port Group

    1. Load Balancing : Select Route based on IP hash .
    2. Active uplinks : Move the LAG from the Unused uplinks section to the Active uplinks section.
    3. Unused uplinks : Move the standalone physical uplinks ( Uplink 1 and Uplink 2 ) to the Unused uplinks section.
  5. Repeat steps 2–4 to configure the other port groups.

Adding ESXi Host to the Distributed Switch

Add the ESXi host to the distributed switch and migrate the network from the standard switch to the distributed switch. Migrate the management interface and CVM of the ESXi host to the distributed switch.

About this task

To migrate the Management interface and CVM of the ESXi host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Add ESXi Host to Distributed Switch

  3. Right-click the host, select Distributed Switch > Add and Manage Hosts , and specify the following information in Add and Manage Hosts dialog box.
    1. In the Select task tab, select Add hosts to add a new host to the distributed switch and click Next .
    2. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.
      Note: Add one host at a time to the distributed switch and then migrate all the CVMs from the host to the distributed switch.
    3. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.
      Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same uplink on the distributed switch.
        1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink .
          Important: If you select physical NICs connected to other switches, those physical NICs migrate to the current distributed switch.
        2. Select the LAG Uplink in the distributed switch to which you want to assign the PNIC of the host and click OK .
        3. Click Next .
    4. In the Manage VMkernel adapters tab, configure the vmk adapters.
      Select the VMkernel adapter that is associated with vSwitch0 as your management VMkernel adapter. Migrate this adapter to the corresponding port group on the distributed switch.
      Note: Do not migrate the VMkernel adapter associated with vSwitchNutanix.
      Note: If there are any VLANs associated with the port group on the standard switch, ensure that the corresponding distributed port group also has the correct VLAN. Verify the physical network configuration to ensure that it is configured as required.
        1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port group .
        2. Select the port group in the distributed switch to which you want to assign the VMkernel of the host and click OK .
        3. Click Next .
    5. (optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect all the network adapters of a VM to a distributed port group.
        1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an individual network adapter to connect with the distributed port group.
        2. Click Assign port group and select the distributed port group to which you want to migrate the VM or network adapter and click OK .
        3. Click Next .
    6. In the Ready to complete tab, review the configuration and click Finish .

vCenter Configuration

VMware vCenter enables the centralized management of multiple ESXi hosts. You can either create a vCenter Server or use an existing vCenter Server. To create a vCenter Server, refer to the VMware Documentation .

This section assumes that you already have a vCenter Server and therefore describes the operations that you can perform on an existing vCenter Server. To deploy vSphere clusters running Nutanix Enterprise Cloud, perform the following steps in vCenter.

Tip: For single-window management of all your ESXi nodes, you can also integrate the vCenter Server with Prism Central. For more information, see Registering a Cluster to the vCenter Server.

1. Create a cluster entity within the existing vCenter inventory and configure its settings according to Nutanix best practices. For more information, see Creating a Nutanix Cluster in the vCenter Server.

2. Configure HA. For more information, see vSphere HA Settings.

3. Configure DRS. For more information, see vSphere DRS Settings.

4. Configure EVC. For more information, see vSphere EVC Settings.

5. Configure override. For more information, see VM Override Settings.

6. Add the Nutanix hosts to the new cluster. For more information, see Adding a Nutanix Node to the vCenter Server.

Registering a Cluster to the vCenter Server

To perform core VM management operations directly from Prism without switching to vCenter Server, you need to register your cluster with the vCenter Server.

Before you begin

Ensure that you have vCenter Server Extension privileges as these privileges provide permissions to perform vCenter registration for the Nutanix cluster.

About this task

Following are some of the important points about registering vCenter Server.

  • Nutanix does not store vCenter Server credentials.
  • Whenever a new node is added to the Nutanix cluster, vCenter Server registration for the new node is performed automatically.

Procedure

  1. Log into the Prism web console.
  2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
    The vCenter Server that is managing the hosts in the cluster is auto-discovered and displayed.
  3. Click the Register link.
    The IP address is auto-populated in the Address field. The port number field is also auto-populated with 443. Do not change the port number. For the complete list of required ports, see Port Reference.
  4. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin Password fields.
    Figure. vCenter Registration Figure 1

  5. Click Register .
    During the registration process, a certificate is generated to communicate with the vCenter Server. If the registration is successful, a relevant message is displayed in the Tasks dashboard. The Host Connection field displays Connected, which indicates that all the hosts are managed by the registered vCenter Server.
    Figure. vCenter Registration Figure 2

Unregistering a Cluster from the vCenter Server

To unregister the vCenter Server from your cluster, perform the following procedure.

About this task

  • Ensure that you unregister the vCenter Server from the cluster before changing the IP address of the vCenter Server. After you change the IP address of the vCenter Server, register the vCenter Server with the cluster again by using the new IP address.
  • The vCenter Server Registration page displays the registered vCenter Server. If for some reason the Host Connection field changes to Not Connected , it implies that the hosts are being managed by a different vCenter Server. In this case, a new vCenter Server entry appears with the host connection status Connected , and you need to register with that vCenter Server.

Procedure

  1. Log into the Prism web console.
  2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
    A message stating that the cluster is already registered to the vCenter Server is displayed.
  3. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin Password fields.
  4. Click Unregister .
    If the credentials are correct, the vCenter Server is unregistered from the cluster and a relevant message is displayed in the Tasks dashboard.

Creating a Nutanix Cluster in the vCenter Server

Before you begin

Nutanix recommends creating a storage container in Prism Element running on the host, or using the default container, and mounting it as an NFS datastore on all ESXi hosts.

About this task

To enable vCenter to discover the Nutanix cluster, perform the following steps in vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Do one of the following.
    • If you want the Nutanix cluster to be in an existing datacenter, proceed to step 3.
    • If you want the Nutanix cluster to be in a new datacenter or if there is no datacenter, perform the following steps to create a datacenter.
      Note: Nutanix clusters must be in a datacenter.
    1. Go to the Hosts and Clusters view and right-click the IP address of the vCenter Server in the left pane.
    2. Click New Datacenter .
    3. Enter a meaningful name for the datacenter (for example, NTNX-DC ) and click OK .
  3. Right-click the datacenter node and click New Cluster .
    1. Enter a meaningful name for the cluster in the Name field (for example, NTNX-Cluster ).
    2. Turn on the vSphere DRS switch.
    3. Turn on the Turn on vSphere HA switch.
    4. Uncheck Manage all hosts in the cluster with a single image .
    The Nutanix cluster ( NTNX-Cluster ) is created with the default settings for vSphere HA and vSphere DRS.

What to do next

Add all the Nutanix nodes to the Nutanix cluster inventory in vCenter. For more information, see Adding a Nutanix Node to the vCenter Server.

Adding a Nutanix Node to the vCenter Server

Before you begin

Configure the Nutanix cluster according to Nutanix specifications given in Creating a Nutanix Cluster in the vCenter Server and vSphere Cluster Settings Checklist.

About this task

Note: Lockdown mode makes vCenter managed ESXi hosts accessible through vCenter only and forces all operations through the vCenter Server. Nutanix does not support lockdown mode, so ensure that it remains disabled on the hosts (see step 3 of this procedure).
Tip: Refer to KB-1661 for the default credentials of all cluster components.

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the Nutanix cluster and then click Add Hosts... .
    1. Enter the IP address or fully qualified domain name (FQDN) of the host you want to reconnect in the IP address or FQDN under New hosts .
    2. Enter the host logon credentials in the User name and Password fields, and click Next .
      If a security or duplicate management alert appears, click Yes .
    3. Review the Host Summary and click Next .
    4. Click Finish .
  3. Select the host under the Nutanix cluster from the left pane and go to Configure > System > Security Profile .
    Ensure that Lockdown Mode is Disabled because Nutanix does not support lockdown mode.
  4. Configure DNS servers.
    1. Go to Configure > Networking > TCP/IP configuration .
    2. Click Default under TCP/IP stack and go to TCP/IP .
    3. Click the pencil icon to configure DNS servers and perform the following.
        1. Select Enter settings manually .
        2. Type the domain name in the Domain field.
        3. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and click OK .
  5. Configure NTP servers.
    1. Go to Configure > System > Time Configuration .
    2. Click Edit .
    3. Select the Use Network Time Protocol (Enable NTP client) .
    4. Type the NTP server address in the NTP Servers text box.
    5. In the NTP Service Startup Policy, select Start and stop with host from the drop-down list.
      Add multiple NTP servers if necessary.
    6. Click OK .
  6. Click Configure > Storage and ensure that NFS datastores are mounted.
    Note: Nutanix recommends creating a storage container in Prism Element running on the host.
  7. If HA is not enabled, set the CVM to start automatically when the ESXi host starts.
    Note: Automatic VM start and stop is disabled in clusters where HA is enabled.
    1. Go to Configure > Virtual Machines > VM Startup/Shutdown .
    2. Click Edit .
    3. Ensure that Automatically start and stop the virtual machines with the system is checked.
    4. If the CVM is listed in Manual Startup , click the up arrow to move the CVM into the Automatic Startup section.
    5. Click OK .
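
A quick way to confirm the DNS servers and the mounted Nutanix NFS datastores on the host is from the ESXi shell, using the following standard esxcli commands.

    root@esx# esxcli network ip dns server list   # DNS servers configured in step 4
    root@esx# esxcli storage nfs list             # Nutanix storage containers mounted as NFS datastores (step 6)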

What to do next

Configure HA and DRS settings. For more information, see vSphere HA Settings and vSphere DRS Settings.

Nutanix Cluster Settings

To ensure the optimal performance of your vSphere deployment running on a Nutanix cluster, configure the following settings from vCenter.

vSphere General Settings

About this task

Configure the following general settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Configuration > General .
    1. Under General , set the Swap file location to Virtual machine directory .
      Setting the swap file location to the VM directory stores the VM swap files in the same directory as the VM.
    2. Under Default VM Compatibility , set the compatibility to Use datacenter setting and host version .
      Do not change the compatibility unless the cluster has to support previous versions of ESXi VMs.
      Figure. General Cluster Settings

vSphere HA Settings

If there is a node failure, vSphere HA (High Availability) settings ensure that there are sufficient compute resources available to restart all VMs that were running on the failed node.

About this task

Configure the following HA settings from vCenter.
Note: Nutanix recommends that you configure vSphere HA and DRS even if you do not use the features. The vSphere cluster configuration preserves the settings, so if you later decide to enable the features, the settings are in place and conform to Nutanix best practices.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Services > vSphere Availability .
  4. Click Edit next to the text showing vSphere HA status.
    Figure. vSphere Availability Settings: Failures and Responses

    1. Turn on the vSphere HA and Enable Host Monitoring switches.
    2. Specify the following information under the Failures and Responses tab.
        1. Host Failure Response : Select Restart VMs from the drop-down list.

          This option configures the cluster-wide host isolation response settings.

        2. Response for Host Isolation : Select Power off and restart VMs from the drop-down list.
        3. Datastore with PDL : Select Disabled from the drop-down list.
        4. Datastore with APD : Select Disabled from the drop-down list.
          Note: To enable the VM component protection in vCenter, refer to the VMware Documentation.
        5. VM Monitoring : Select Disabled from the drop-down list.
    3. Specify the following information under the Admission Control tab.
      Note: If you are using replication factor 2 with cluster sizes up to 16 nodes, configure HA admission control settings to tolerate one node failure. For cluster sizes larger than 16 nodes, configure HA admission control to sustain two node failures and use replication factor 3. vSphere 6.7 and newer versions automatically calculate the percentage of resources required for admission control.
      Figure. vSphere Availability Settings: Admission Control

        1. Host failures cluster tolerates : Enter 1 or 2 based on the number of nodes in the Nutanix cluster and the replication factor.
        2. Define host failover capacity by : Select Cluster resource Percentage from the drop-down list.
        3. Performance degradation VMs tolerate : Set the percentage to 100.

          For more information about settings of percentage of cluster resources reserved as failover spare capacity, see vSphere HA Admission Control Settings for Nutanix Environment.

    4. Specify the following information under the Heartbeat Datastores tab.
      Note: vSphere HA uses datastore heartbeating to distinguish between hosts that have failed and hosts that reside on a network partition. With datastore heartbeating, vSphere HA can monitor hosts when a management network partition occurs and continue to respond to failures.
      Figure. vSphere Availability Settings: Heartbeat Datastores

        1. Select Use datastores only from the specified list .
        2. Select the named storage container mounted as the NFS datastore (Nutanix datastore).

          If you have more than one named storage container, select all that are applicable.

        3. If the cluster has only one datastore, click the Advanced Options tab and add das.ignoreInsufficientHbDatastore with a Value of true .
    5. Click OK .

vSphere HA Admission Control Settings for Nutanix Environment

Overview

If you are using redundancy factor 2 with cluster sizes of up to 16 nodes, you must configure HA admission control settings with the appropriate percentage of CPU/RAM to achieve at least N+1 availability. For cluster sizes larger than 16 nodes, you must configure HA admission control with the appropriate percentage of CPU/RAM to achieve at least N+2 availability.

N+2 Availability Configuration

The N+2 availability configuration can be achieved in the following two ways.

  • Redundancy factor 2 and N+2 vSphere HA admission control setting configured.

    Because the Nutanix distributed file system recovers in the event of a node failure, it is possible to have a second node failure without data being unavailable if the Nutanix cluster has fully recovered before the subsequent failure. In this case, an N+2 vSphere HA admission control setting is required to ensure that sufficient compute resources are available to restart all the VMs.

  • Redundancy factor 3 and N+2 vSphere HA admission control setting configured.
    If you want two concurrent node failures to be tolerated and the cluster has insufficient blocks to use block awareness, redundancy factor 3 in a cluster of five or more nodes is required. In either of these two options, the Nutanix storage pool must have sufficient free capacity to restore the configured redundancy factor (2 or 3). The percentage of free space required is the same as the required HA admission control percentage setting. In this case, redundancy factor 3 must be configured at the storage container layer. An N+2 vSphere HA admission control setting is also required to ensure sufficient compute resources are available to restart all the VMs.
    Note: For redundancy factor 3, a minimum of five nodes is required, which provides the ability that two concurrent nodes can fail while ensuring data remains online. In this case, the same N+2 level of availability is required for the vSphere cluster to enable the VMs to restart following a failure.

For redundancy factor 2 deployments, the recommended minimum HA admission control setting percentage is marked with single asterisk (*) symbol in the following table. For redundancy factor 2 or redundancy factor 3 deployments configured for multiple non-concurrent node failures to be tolerated, the minimum required HA admission control setting percentage is marked with two asterisks (**) in the following table.

Table 1. Minimum Reservation Percentage for vSphere HA Admission Control Setting
Nodes Availability Level
N+1 N+2 N+3 N+4
1 N/A N/A N/A N/A
2 N/A N/A N/A N/A
3 33* N/A N/A N/A
4 25* 50 75 N/A
5 20* 40** 60 80
6 18* 33** 50 66
7 15* 29** 43 56
8 13* 25** 38 50
9 11* 23** 33 46
10 10* 20** 30 40
11 9* 18** 27 36
12 8* 17** 25 34
13 8* 15** 23 30
14 7* 14** 21 28
15 7* 13** 20 26
16 6* 13** 19 25
17 6 12* 18** 24
18 6 11* 17** 22
19 5 11* 16** 22
20 5 10* 15** 20
21 5 10* 14** 20
22 4 9* 14** 18
23 4 9* 13** 18
24 4 8* 13** 16
25 4 8* 12** 16
26 4 8* 12** 16
27 4 7* 11** 14
28 4 7* 11** 14
29 3 7* 10** 14
30 3 7* 10** 14
31 3 6* 10** 12
32 3 6* 9** 12

The table also represents the percentage of the Nutanix storage pool that should remain free to ensure that the cluster can fully restore the redundancy factor in the event of the failure of one or more nodes, or even of a block (where three or more blocks exist within a cluster).

Block Awareness

For deployments of at least three blocks, block awareness automatically ensures data availability when an entire block of up to four nodes configured with redundancy factor 2 becomes unavailable.

If block awareness levels of availability are required, the vSphere HA admission control setting must ensure sufficient compute resources are available to restart all virtual machines. In addition, the Nutanix storage pool must have sufficient space to restore redundancy factor 2 to all data.

The vSphere HA minimum availability level must be equal to the number of nodes per block.

Note: For block awareness, each block must be populated with a uniform number of nodes. In the event of a failure, a non-uniform node count might compromise block awareness or the ability to restore the redundancy factor, or both.

Rack Awareness

Rack fault tolerance is the ability to provide a rack-level availability domain. With rack fault tolerance, data is replicated to nodes that are not in the same rack. Rack failure can occur in the following situations.

  • All power supplies in a rack fail.
  • Top-of-rack (TOR) switch fails.
  • Network partition occurs: one of the racks becomes inaccessible from the other racks.

With rack fault tolerance enabled, the cluster has rack awareness and guest VMs can continue to run even during the failure of one rack (with replication factor 2) or two racks (with replication factor 3). The redundant copies of guest VM data and metadata persist on other racks when one rack fails.

Table 2. Minimum Requirements for Rack Awareness
Replication factor Minimum number of nodes Minimum number of Blocks Minimum number of racks Data resiliency
2 3 3 3 Failure of 1 node, block, or rack
3 5 5 5 Failure of 2 nodes, blocks, or racks

vSphere DRS Settings

About this task

Configure the following DRS settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Services > vSphere DRS .
  4. Click Edit next to the text showing vSphere DRS status.
    Figure. vSphere DRS Settings: Automation

    1. Turn on the vSphere DRS switch.
    2. Specify the following information under the Automation tab.
        1. Automation Level : Select Fully Automated from the drop-down list.
        2. Migration Threshold : Set the bar between conservative and aggressive (value=3).

          A migration threshold of 3 provides optimal resource utilization while minimizing DRS migrations that offer little benefit. Data locality is maintained automatically: whenever a VM moves, one replica of its writes is always written locally to maximize subsequent read performance.

          Nutanix recommends the migration threshold at 3 in a fully automated configuration.

        3. Predictive DRS : Leave the option disabled.

          The value of Predictive DRS depends on whether you use other VMware products such as vRealize Operations. Unless you use vRealize Operations, Nutanix recommends disabling Predictive DRS.

        4. Virtual Machine Automation : Enable VM automation.
    3. Specifying anything under the Additional Options tab is optional.
    4. Specify the following information under the Power Management tab.
      Figure. vSphere DRS Settings: Power Management

        1. DPM : Leave the option disabled.

          Enabling DPM causes nodes in the Nutanix cluster to go offline, affecting cluster resources.

    5. Click OK .

vSphere EVC Settings

vSphere enhanced vMotion compatibility (EVC) ensures that workloads can be live migrated with vMotion between ESXi hosts in a Nutanix cluster that run different CPU generations. Nutanix recommends enabling EVC because it simplifies future scaling of the Nutanix cluster with new hosts that might contain newer CPU models.

About this task

Enabling EVC in a brownfield scenario can be challenging. Configure the following EVC settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Shut down all the VMs on the hosts with feature sets greater than the EVC mode.
    Ensure that the Nutanix cluster contains hosts with CPUs from only one vendor, either Intel or AMD.
  4. Click Configure , and go to Configuration > VMware EVC .
  5. Click Edit next to the text showing VMware EVC.
  6. Enable EVC for the CPU vendor and feature set appropriate for the hosts in the Nutanix cluster, and click OK .
    If the Nutanix cluster contains nodes with different processor classes, enable EVC with the lower feature set as the baseline.
    Tip: To know the processor class of a node, perform the following steps.
      1. Log on to Prism Element running on the Nutanix cluster.
      2. Click Hardware from the menu and go to Diagram or Table view.
      3. Click the node and look for the Block Serial field in Host Details .
    Figure. VMware EVC

  7. Start the VMs in the Nutanix cluster to apply the EVC.
    If you try to enable EVC on a Nutanix cluster with mismatching host feature sets (mixed processor clusters), the lowest common feature set (lowest processor class) is selected. Hence, if VMs are already running on the new host and if you want to enable EVC on the host, you must first shut down the VMs and then enable EVC.
    Note: Do not shut down more than one CVM at the same time.

VM Override Settings

You must exclude Nutanix CVMs from vSphere availability and resource scheduling by configuring the following VM override settings.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Configuration > VM Overrides .
  4. Select all the CVMs and click Next .
    If you do not have the CVMs listed, click Add to ensure that the CVMs are added to the VM Overrides dialog box.
    Figure. VM Override

  5. In the VM override section, configure override for the following parameters.
    • DRS Automation Level: Disabled
    • VM HA Restart Priority: Disabled
    • VM Monitoring: Disabled
  6. Click Finish .

Migrating a Nutanix Cluster from One vCenter Server to Another

About this task

Perform the following steps to migrate a Nutanix cluster from one vCenter Server to another vCenter Server.
Note: The following steps are to migrate a Nutanix cluster with vSphere Standard Switch (vSwitch). To migrate a Nutanix cluster with vSphere Distributed Switch (vDS), see the VMware Documentation.

Procedure

  1. Create a vSphere cluster in the vCenter Server where you want to migrate the Nutanix cluster. See Creating a Nutanix Cluster in the vCenter Server.
  2. Configure HA, DRS, and EVC on the created vSphere cluster. See Nutanix Cluster Settings.
  3. Unregister the Nutanix cluster from the source vCenter Server. See Unregistering a Cluster from the vCenter Server.
  4. Move the nodes from the source vCenter Server to the new vCenter Server.
    See the VMware Documentation for the process.
  5. Register the Nutanix cluster to the new vCenter Server. See Registering a Cluster to the vCenter Server.

Storage I/O Control (SIOC)

SIOC controls the I/O usage of a virtual machine and gradually enforces the predefined I/O share levels. The Nutanix converged storage architecture does not require SIOC. Therefore, when you mount a storage container on an ESXi host, the system automatically disables SIOC in statistics mode.

Caution: While mounting a storage container on ESXi hosts running older versions (6.5 or earlier), the system enables SIOC in statistics mode by default. Nutanix recommends disabling SIOC because an enabled SIOC can cause the following issues.
  • The storage can become unavailable because the hosts repeatedly create and delete the access .lck-XXXXXXXX files under the .iorm.sf subdirectory, located in the root directory of the storage container.
  • Site Recovery Manager (SRM) failover and failback do not run efficiently.
  • If you are using the Metro Availability disaster recovery feature, activate and restore operations do not work.
    Note: To use the Metro Availability disaster recovery feature, Nutanix recommends using an empty storage container. Disable SIOC and delete all SIOC-related files from the storage container. For more information, see KB-3501.
Run the NCC health check (see KB-3358) to verify that SIOC and SIOC statistics mode are disabled on storage containers. If either is enabled on a storage container, disable it by performing the procedure described in Disabling Storage I/O Control (SIOC) on a Container.
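
As a hedged example, you can run the NCC checks from any CVM; the complete check suite includes the SIOC-related checks referenced in KB-3358 (the individual check name varies between NCC versions, so running the full set is the simplest sketch).

nutanix@cvm$ ncc health_checks run_all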

Disabling Storage I/O Control (SIOC) on a Container

About this task

Perform the following procedure to disable storage I/O statistics collection.

Procedure

  1. Log on to vCenter with the web client.
  2. Click the Storage view in the left pane.
  3. Right-click the storage container under the Nutanix cluster and select Configure Storage I/O Controller .
    The properties for the storage container are displayed. By default, the Disable Storage I/O statistics collection option is unchecked, which means that SIOC is enabled. The dialog box offers two options: Disable Storage I/O Control and statistics collection , and Disable Storage I/O Control but enable statistics collection .
    1. Select the Disable Storage I/O Control and statistics collection option to disable SIOC.
    2. Uncheck the Include I/O Statistics for SDRS option.
    3. Click OK .

Node Management

This chapter describes the management tasks you can do on a Nutanix node.

Nonconfigurable ESXi Components

The Nutanix manufacturing and installation process, performed by running Foundation on the Nutanix nodes, configures the following components. Do not modify any of these components except under the direction of Nutanix Support.

Nutanix Software

Modifying any of the following Nutanix software settings may inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.

  • Local datastore name.
  • Configuration and contents of any CVM (except memory configuration to enable certain features).
Important: Note the following important considerations about CVMs.
  • Do not delete the Nutanix CVM.
  • Do not take a snapshot of the CVM for backup.
  • Do not rename, modify, or delete the admin and nutanix user accounts of the CVM.
  • Do not create additional CVM user accounts.

    Use the default accounts ( admin or nutanix ), or use sudo to elevate to the root account.

  • Do not decrease CVM memory below recommended minimum amounts required for cluster and add-in features.

    Nutanix Cluster Checks (NCC), preupgrade cluster checks, and the AOS upgrade process detect and monitor CVM memory.

  • Nutanix does not support the use of third-party storage on hosts that are part of Nutanix clusters.

    Normal cluster operations might be affected if there are connectivity issues with the third-party storage you attach to the hosts in a Nutanix cluster.

  • Do not run any commands on a CVM that are not in the Nutanix documentation.

ESXi

Modifying any of the following ESXi settings can inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.

  • NFS datastore settings
  • VM swapfile location
  • VM startup/shutdown order
  • CVM name
  • CVM virtual hardware configuration file (.vmx file)
  • iSCSI software adapter settings
  • Hardware settings, including passthrough HBA settings.

  • vSwitchNutanix standard virtual switch
  • vmk0 interface in Management Network port group
  • SSH
    Note: An SSH connection is necessary for various scenarios. For example, to establish connectivity with the ESXi server through a control plane that does not depend on additional management systems or processes. The SSH connection is also required to modify the networking and control paths in the case of a host failure to maintain high availability. For example, the CVM autopathing (Ha.py) requires an SSH connection. If the local CVM becomes unavailable, another CVM in the cluster performs the I/O operations over the 10GbE interface.
  • Open host firewall ports
  • CPU resource settings such as CPU reservation, limit, and shares of the CVM.
    Caution: Do not use the Reset System Configuration option.
  • ProductLocker symlink setting to point at the default datastore.

    Do not change the /productLocker symlink to point at a non-local datastore.

    Do not change the ProductLockerLocation advanced setting.

Putting the CVM and ESXi Host in Maintenance Mode

About this task

Nutanix recommends placing the CVM and ESXi host into maintenance mode while the Nutanix cluster undergoes maintenance or patch installations.
Caution: Verify the data resiliency status of your Nutanix cluster. Ensure that the replication factor (RF) supports putting the node in maintenance mode.
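
As a hedged example, you can confirm the current node fault tolerance from any CVM before starting maintenance; the Data Resiliency Status widget in Prism Element reports the same information.

nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node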

Procedure

  1. Log on to vCenter with the web client.
  2. If vSphere DRS is enabled on the Nutanix cluster, skip this step. If vSphere DRS is disabled, perform one of the following.
    • Manually migrate all the VMs except the CVM to another host in the Nutanix cluster.
    • Shut down VMs other than the CVM that you do not want to migrate to another host.
  3. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.
  4. In the Enter Maintenance Mode dialog box, check Move powered-off and suspended virtual machines to other hosts in the cluster and click OK .
    The host gets ready to go into maintenance mode, which prevents VMs from running on this host. DRS automatically attempts to migrate all the VMs to another host in the Nutanix cluster.
Note:

In certain rare conditions, even when DRS is enabled, some VMs do not automatically migrate due to user-defined affinity rules or VM configuration settings. The VMs that do not migrate appear under cluster DRS > Faults when a maintenance mode task is in progress. To address the faults, either manually shut down those VMs or ensure the VMs can be migrated.

  5. Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
    Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command, which ensures that the cluster is aware that the CVM is unavailable.
  6. After the CVM shuts down, wait for the host to go into maintenance mode.
    The host enters maintenance mode after its CVM is shut down.

Shutting Down an ESXi Node in a Nutanix Cluster

Before you begin

Verify the data resiliency status of your Nutanix cluster. If the Nutanix cluster has only replication factor 2 (RF2), you can shut down only one node at a time. If more than one node in an RF2 cluster must be shut down, shut down the entire cluster instead.

About this task

You can put the ESXi host into maintenance mode and shut it down either from the web client or from the command line. For more information about shutting down a node from the command line, see Shutting Down an ESXi Node in a Nutanix Cluster (vSphere Command Line).

Procedure

  1. Log on to vCenter with the web client.
  2. Put the Nutanix node in the maintenance mode. For more information, see Putting the CVM and ESXi Host in Maintenance Mode.
    Note: If DRS is not enabled, manually migrate or shut down all the VMs except the CVM. Even when DRS is enabled, some VMs might not migrate automatically because of a VM configuration option that is not supported on the target host.
  3. Right-click the host and select Shut Down .
    Wait until the vCenter displays that the host is not responding, which may take several minutes. If you are logged on to the ESXi host rather than to vCenter, the web client disconnects when the host shuts down.

Shutting Down an ESXi Node in a Nutanix Cluster (vSphere Command Line)

Before you begin

Verify the data resiliency status of your Nutanix cluster. If the Nutanix cluster has only replication factor 2 (RF2), you can shut down only one node at a time. If more than one node in an RF2 cluster must be shut down, shut down the entire cluster instead.

About this task

Procedure

  1. Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
  2. Log on to another CVM in the Nutanix cluster with SSH.
  3. Shut down the host.
    nutanix@cvm$ ~/serviceability/bin/esx-enter-maintenance-mode -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    If successful, this command returns no output. If it fails with a message like the following, VMs are probably still running on the host.

    CRITICAL esx-enter-maintenance-mode:42 Command vim-cmd hostsvc/maintenance_mode_enter failed with ret=-1

    Ensure that all VMs are shut down or moved to another host and try again before proceeding.

    nutanix@cvm$ ~/serviceability/bin/esx-shutdown -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Alternatively, you can put the ESXi host into maintenance mode and shut it down using the vSphere web client. For more information, see Shutting Down an ESXi Node in a Nutanix Cluster.

    If the host shuts down, a message like the following is displayed.

    INFO esx-shutdown:67 Please verify if ESX was successfully shut down using ping hypervisor_ip_addr

    hypervisor_ip_addr is the IP address of the ESXi host.

  4. Confirm that the ESXi host has shut down.
    nutanix@cvm$ ping hypervisor_ip_addr

    Replace hypervisor_ip_addr with the IP address of the ESXi host.

    If no ping packets are answered, the ESXi host has shut down.

Starting an ESXi Node in a Nutanix Cluster

About this task

You can start an ESXi host either from the web client or from the command line. For more information about starting a node from the command line, see Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line).

Procedure

  1. If the node is off, turn it on by pressing the power button on the front. Otherwise, proceed to the next step.
  2. Log on to vCenter (or to the node if vCenter is not running) with the web client.
  3. Right-click the ESXi host and select Exit Maintenance Mode .
  4. Right-click the CVM and select Power > Power on .
    Wait approximately 5 minutes for all services to start on the CVM.
  5. Log on to another CVM in the Nutanix cluster with SSH.
  6. Confirm that the Nutanix cluster services are running on the CVM.
    nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Output similar to the following is displayed.
        Name                      : 10.1.56.197
        Status                    : Up
        ... ... 
        StatsAggregator           : up
        SysStatCollector          : up

    Every service listed should be up .

  7. Right-click the ESXi host in the web client and select Rescan for Datastores . Confirm that all Nutanix datastores are available.
  8. Verify that the status of all services on all the CVMs is Up.
    nutanix@cvm$ cluster status
    If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix cluster.
    CVM: <host IP-Address> Up
                                    Zeus   UP       [9935, 9980, 9981, 9994, 10015, 10037]
                               Scavenger   UP       [25880, 26061, 26062]
                                  Xmount   UP       [21170, 21208]
                        SysStatCollector   UP       [22272, 22330, 22331]
                               IkatProxy   UP       [23213, 23262]
                        IkatControlPlane   UP       [23487, 23565]
                           SSLTerminator   UP       [23490, 23620]
                          SecureFileSync   UP       [23496, 23645, 23646]
                                  Medusa   UP       [23912, 23944, 23945, 23946, 24176]
                      DynamicRingChanger   UP       [24314, 24404, 24405, 24558]
                                  Pithos   UP       [24317, 24555, 24556, 24593]
                              InsightsDB   UP       [24322, 24472, 24473, 24583]
                                  Athena   UP       [24329, 24504, 24505]
                                 Mercury   UP       [24338, 24515, 24516, 24614]
                                  Mantle   UP       [24344, 24572, 24573, 24634]
                              VipMonitor   UP       [18387, 18464, 18465, 18466, 18474]
                                Stargate   UP       [24993, 25032]
                    InsightsDataTransfer   UP       [25258, 25348, 25349, 25388, 25391, 25393, 25396]
                                   Ergon   UP       [25263, 25414, 25415]
                                 Cerebro   UP       [25272, 25462, 25464, 25581]
                                 Chronos   UP       [25281, 25488, 25489, 25547]
                                 Curator   UP       [25294, 25528, 25529, 25585]
                                   Prism   UP       [25718, 25801, 25802, 25899, 25901, 25906, 25941, 25942]
                                     CIM   UP       [25721, 25829, 25830, 25856]
                            AlertManager   UP       [25727, 25862, 25863, 25990]
                                Arithmos   UP       [25737, 25896, 25897, 26040]
                                 Catalog   UP       [25749, 25989, 25991]
                               Acropolis   UP       [26011, 26118, 26119]
                                   Uhura   UP       [26037, 26165, 26166]
                                    Snmp   UP       [26057, 26214, 26215]
                       NutanixGuestTools   UP       [26105, 26282, 26283, 26299]
                              MinervaCVM   UP       [27343, 27465, 27466, 27730]
                           ClusterConfig   UP       [27358, 27509, 27510]
                                Aequitas   UP       [27368, 27567, 27568, 27600]
                             APLOSEngine   UP       [27399, 27580, 27581]
                                   APLOS   UP       [27853, 27946, 27947]
                                   Lazan   UP       [27865, 27997, 27999]
                                  Delphi   UP       [27880, 28058, 28060]
                                    Flow   UP       [27896, 28121, 28124]
                                 Anduril   UP       [27913, 28143, 28145]
                                   XTrim   UP       [27956, 28171, 28172]
                           ClusterHealth   UP       [7102, 7103, 27995, 28209,28495, 28496, 28503, 28510,	
    28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646, 28648, 28792,	
    28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135, 29142, 29146, 29150,	
    29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]

Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line)

About this task

You can start an ESXi host either from the command line or from the web client. For more information about starting a node from the web client, see Starting an ESXi Node in a Nutanix Cluster .

Procedure

  1. Log on to a running CVM in the Nutanix cluster with SSH.
  2. Start the CVM.
    nutanix@cvm$ ~/serviceability/bin/esx-exit-maintenance-mode -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    If successful, this command produces no output. If it fails, wait 5 minutes and try again.

    nutanix@cvm$ ~/serviceability/bin/esx-start-cvm -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.


    If the CVM starts, a message like the following is displayed.

    INFO esx-start-cvm:67 CVM started successfully. Please verify using ping cvm_ip_addr

    cvm_ip_addr is the IP address of the CVM on the ESXi host.

    After starting, the CVM restarts once. Wait three to four minutes before you ping the CVM.

    Alternatively, you can take the ESXi host out of maintenance mode and start the CVM using the web client. For more information, see Starting an ESXi Node in a Nutanix Cluster.

  3. Verify that the status of all services on all the CVMs is Up.
    nutanix@cvm$ cluster status
    If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix cluster.
    CVM: <host IP-Address> Up
                                    Zeus   UP       [9935, 9980, 9981, 9994, 10015, 10037]
                               Scavenger   UP       [25880, 26061, 26062]
                                  Xmount   UP       [21170, 21208]
                        SysStatCollector   UP       [22272, 22330, 22331]
                               IkatProxy   UP       [23213, 23262]
                        IkatControlPlane   UP       [23487, 23565]
                           SSLTerminator   UP       [23490, 23620]
                          SecureFileSync   UP       [23496, 23645, 23646]
                                  Medusa   UP       [23912, 23944, 23945, 23946, 24176]
                      DynamicRingChanger   UP       [24314, 24404, 24405, 24558]
                                  Pithos   UP       [24317, 24555, 24556, 24593]
                              InsightsDB   UP       [24322, 24472, 24473, 24583]
                                  Athena   UP       [24329, 24504, 24505]
                                 Mercury   UP       [24338, 24515, 24516, 24614]
                                  Mantle   UP       [24344, 24572, 24573, 24634]
                              VipMonitor   UP       [18387, 18464, 18465, 18466, 18474]
                                Stargate   UP       [24993, 25032]
                    InsightsDataTransfer   UP       [25258, 25348, 25349, 25388, 25391, 25393, 25396]
                                   Ergon   UP       [25263, 25414, 25415]
                                 Cerebro   UP       [25272, 25462, 25464, 25581]
                                 Chronos   UP       [25281, 25488, 25489, 25547]
                                 Curator   UP       [25294, 25528, 25529, 25585]
                                   Prism   UP       [25718, 25801, 25802, 25899, 25901, 25906, 25941, 25942]
                                     CIM   UP       [25721, 25829, 25830, 25856]
                            AlertManager   UP       [25727, 25862, 25863, 25990]
                                Arithmos   UP       [25737, 25896, 25897, 26040]
                                 Catalog   UP       [25749, 25989, 25991]
                               Acropolis   UP       [26011, 26118, 26119]
                                   Uhura   UP       [26037, 26165, 26166]
                                    Snmp   UP       [26057, 26214, 26215]
                       NutanixGuestTools   UP       [26105, 26282, 26283, 26299]
                              MinervaCVM   UP       [27343, 27465, 27466, 27730]
                           ClusterConfig   UP       [27358, 27509, 27510]
                                Aequitas   UP       [27368, 27567, 27568, 27600]
                             APLOSEngine   UP       [27399, 27580, 27581]
                                   APLOS   UP       [27853, 27946, 27947]
                                   Lazan   UP       [27865, 27997, 27999]
                                  Delphi   UP       [27880, 28058, 28060]
                                    Flow   UP       [27896, 28121, 28124]
                                 Anduril   UP       [27913, 28143, 28145]
                                   XTrim   UP       [27956, 28171, 28172]
                           ClusterHealth   UP       [7102, 7103, 27995, 28209,28495, 28496, 28503, 28510,	
    28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646, 28648, 28792,	
    28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135, 29142, 29146, 29150,	
    29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]
  4. Verify the storage.
    1. Log on to the ESXi host with SSH.
    2. Rescan for datastores.
      root@esx# esxcli storage core adapter rescan --all
    3. Confirm that cluster VMFS datastores, if any, are available.
      root@esx# esxcfg-scsidevs -m | awk '{print $5}'

Restarting an ESXi Node using CLI

Before you begin

Shut down the guest VMs, including vCenter, that are running on the node, or move them to other nodes in the Nutanix cluster.

About this task

Procedure

  1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the web client.
  2. Right-click the host and select Maintenance mode > Enter Maintenance Mode .
    In the Confirm Maintenance Mode dialog box, click OK .
    The host is placed in maintenance mode, which prevents VMs from running on the host.
    Note: The host does not enter maintenance mode until the CVM is shut down.
  3. Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
    Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command, which ensures that the cluster is aware that the CVM is unavailable.
  4. Right-click the node and select Power > Reboot .
    Wait until vCenter shows that the host is not responding and then is responding again, which takes several minutes.

    If you are logged on to the ESXi host rather than to vCenter, the web client disconnects when the host shuts down.

  5. Right-click the ESXi host and select Exit Maintenance Mode .
  6. Right-click the CVM and select Power > Power on .
    Wait approximately 5 minutes for all services to start on the CVM.
  7. Log on to the CVM with SSH.
  8. Confirm that the Nutanix cluster services are running on the CVM.
    nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Output similar to the following is displayed.
        Name                      : 10.1.56.197
        Status                    : Up
        ... ... 
        StatsAggregator           : up
        SysStatCollector          : up

    Every service listed should be up .

  9. Right-click the ESXi host in the web client and select Rescan for Datastores . Confirm that all Nutanix datastores are available.

Rebooting an ESXi Node in a Nutanix Cluster

About this task

The Request Reboot operation in the Prism web console gracefully restarts the selected nodes one after the other.

Perform the following procedure to restart the nodes in the cluster.

Procedure

  1. Click the gear icon in the main menu and then select Reboot in the Settings page.
  2. In the Request Reboot window, select the nodes you want to restart, and click Reboot .
    Figure. Request Reboot

    A progress bar is displayed that indicates the progress of the restart of each node.

Changing an ESXi Node Name

After running a bare-metal Foundation, you can change the host (node) name from the command line or by using the vSphere web client.

To change the hostname, see the VMware Documentation.
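
For reference, ESXi also exposes the host name through esxcli. The following is a hedged sketch (new_hostname is a placeholder); the VMware documentation remains the authoritative procedure.

root@esx# esxcli system hostname set --host=new_hostname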

Changing an ESXi Node Password

Although it is not required for the root user to have the same password on all hosts (nodes), doing so makes cluster management and support much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

To change the host password, see the VMware Documentation.
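
As a hedged example, you can change the root password directly from the ESXi shell; this is a sketch only, and the VMware documentation remains the authoritative procedure.

root@esx# passwd root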

Changing CVM Memory Configuration (ESXi)

About this task

You can increase the memory reserved for each CVM in your Nutanix cluster by using the 1-click CVM Memory Upgrade available from the Prism Element web console. Increase memory size depending on the workload type or to enable certain AOS features. See Increasing the Controller VM Memory Size in the Prism Web Console Guide for CVM memory sizing recommendations and instructions about how to increase the CVM memory.
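
To confirm the memory visible to a CVM after a change, you can check from inside the CVM with standard Linux tooling; a minimal sketch:

nutanix@cvm$ free -g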

VM Management

For the list of supported VMs, see Compatibility and Interoperability Matrix.

VM Management Using Prism Central

You can create and manage a VM on your ESXi from Prism Central. For more information, see Creating a VM through Prism Central (ESXi) and Managing a VM (ESXi).

Creating a VM through Prism Central (ESXi)

In ESXi clusters, you can create a new virtual machine (VM) through Prism Central.

Before you begin

  • See the requirements and limitations section in vCenter Server Integration in the Prism Central Guide before proceeding.
  • Register the vCenter Server with your cluster. For more information, see Registering vCenter Server (Prism Central) in the Prism Central Guide .

About this task

To create a VM, do the following:

Procedure

  1. Go to the List tab of the VMs dashboard (see VM Summary View in the Prism Central Guide ) and click the Create VM button.
    The Create VM wizard appears.
  2. In the Cluster Selection window, select the target cluster from the pull-down list.

    A list of registered clusters appears in the window; you can select only a cluster that is running ESXi. Clicking a cluster name displays the Create VM dialog box for that cluster.

    Figure. Cluster Selection Window

  3. In the Create VM dialog box, do the following in the indicated fields:
    1. Name : Enter a name for the VM.
    2. Description (optional): Enter a description for the VM.
    3. Guest OS : Type and select the guest operating system.

      The guest operating system that you select affects the supported devices and number of virtual CPUs available for the virtual machine. The Create VM wizard does not install the guest operating system. See the list of supported operating systems in the vCenter Server Integration topic.

    4. vCPU(s) : Enter the number of virtual CPUs to allocate to this VM.
    5. Number of Cores per vCPU : Enter the number of cores assigned to each virtual CPU.
    6. Memory : Enter the amount of memory (in GiBs) to allocate to this VM.
  4. To attach a disk to the VM, click the Add New Disk button.
    The Add Disks dialog box appears. Do the following in the indicated fields:
    Figure. Add Disk Dialog Box

    1. Type : Select the type of storage device, DISK or CD-ROM , from the pull-down list.

      The following fields and options vary depending on whether you choose DISK or CD-ROM . You can use the CD-ROM type only to create a blank CD-ROM device for mounting NGT or VMware guest tools.

    2. Operation : Specify the device contents from the pull-down list.
      • Select Clone from ADSF file to copy any file from the cluster that can be used as an image onto the disk.
      • Select Allocate on Storage Container to allocate space without specifying an image. (This option appears only when DISK is selected in the previous field.) Selecting this option means you are allocating space only. You have to provide a system image later from a CD-ROM or other source.
    3. Bus Type : Select the bus type from the pull-down list. The choices are IDE or SCSI .
    4. ADSF Path : Enter the path to the desired system image.

      This field appears only when Clone from ADSF file is selected. It specifies the image to copy. Enter the path name as /container_name/vmdk_name.vmdk . For example, to clone an image from myvm-flat.vmdk in a storage container named crt1 , enter /crt1/myvm-flat.vmdk . When a user types the storage container name ( /container_name/ ), a list appears of the VMDK files in that storage container (assuming one or more VMDK files had previously been copied to that storage container).

      Note: Make sure you are copying from a flat file.
    5. Storage Container : Select the storage container to use from the pull-down list.

      This field appears only when Allocate on Storage Container is selected. The list includes all storage containers created for this cluster.

    6. Size : Enter the disk size in GiBs.
    7. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the Create VM dialog box.
    8. Repeat this step to attach more devices to the VM.
  5. To create a network interface for the VM, click the Add New NIC button.

    The Create NIC dialog box appears. Do the following in the indicated fields:

    1. VLAN Name : Select the target virtual LAN from the pull-down list.

      The list includes all defined networks (see Configuring Network Connections in the Prism Central Guide ).

    2. Network Adapter Type : Select the network adapter type from the pull-down list.

      For information about the list of supported adapter types, see vCenter Server Integration in the Prism Central Guide .

    3. Network UUID : This is a read-only field that displays the network UUID.
    4. Network Address/Prefix : This is a read-only field that displays the network IP address and prefix.
    5. When all the field entries are correct, click the Add button to create a network interface for the VM and return to the Create VM dialog box.
    6. Repeat this step to create more network interfaces for the VM.

Managing a VM (ESXi)

You can manage virtual machines (VMs) in an ESXi cluster through Prism Central.

Before you begin

  • See the requirements and limitations section in vCenter Server Integration in the Prism Central Guide before proceeding.
  • Ensure that you have registered the vCenter Server with your cluster. For more information, see Registering vCenter Server (Prism Central) in the Prism Central Guide .

About this task

After creating a VM (see Creating a VM through Prism Central (ESXi)), you can use Prism Central to update the VM configuration, delete the VM, clone the VM, launch a console window, start (or shut down) the VM, pause (or resume) the VM, assign the VM to a protection policy, take a snapshot, add the VM to a recovery plan, run a playbook, manage categories, install and manage Nutanix Guest Tools (NGT), manage VM ownership, or configure QoS settings.

You can perform these tasks by using any of the following methods:

  • Select the target VM in the List tab of the VMs dashboard (see VMs Summary View in the Prism Central Guide ) and choose the required action from the Actions menu.
  • Right-click on the target VM in the List tab of the VMs dashboard and select the required action from the drop-down list.
  • Go to the details page of a selected VM (see VM Details View in the Prism Central Guide ) and select the desired action.
Note: The available actions appear in bold; other actions are unavailable. The available actions depend on the current state of the VM and your permissions.

Procedure

  • To modify the VM configuration, select Update .

    The Update VM dialog box appears, which includes the same fields as the Create VM dialog box. You cannot change the name, number of assigned vCPUs, or memory size of the VM, but you can add or delete disks and NICs.

    Figure. Update VM Window

  • To delete the VM, select Delete . A window prompt appears; click the OK button to delete the VM.
  • To clone the VM, select Clone .

    This displays the Clone VM dialog box, which includes the same fields as the Create VM dialog box but with all fields (except the name) filled in with the current VM settings. Enter a name for the clone and then click the Save button to create the clone. You can create a modified clone by changing some of the settings before clicking the Save button.

    Figure. Clone VM Window

  • To launch a console window, select Launch Console .

    This opens a virtual network computing (VNC) client and displays the console in a new tab or window. This option is available only when the VM is powered on. The VM power options that you access from the Power On Actions (or Power Off Actions ) action link below the VM table can also be accessed from the VNC console window. To access the VM power options, click the Power button at the top-right corner of the console window.

    Note: A VNC client may not function properly on all browsers. Some keys are not recognized when the browser is Google Chrome. (Firefox typically works best.)
    Figure. Console Window (VNC)

  • To start (or shut down) the VM, select Power on (or Power off ).
  • To pause (or resume) the VM, select Pause/Suspend (or Resume ). Pause/Suspend is available only when the VM is powered on, and Resume is available only when the VM is suspended.
  • To assign the VM to a protection policy, select Protect . This opens a page to specify the protection policy to which this VM should be assigned (see Policies Management). To remove the VM from a protection policy, select Unprotect .
  • To take a snapshot of the VM, select Take Snapshot .

    This displays the Take Snapshot dialog box. Enter a name for the snapshot and then click the Submit button to start the backup.

    Warning: The following are the restrictions for naming VM snapshots.
    • The maximum length is 80 characters.
    • Allowed characters are uppercase and lowercase standard Latin letters (A-Z and a-z), decimal digits (0-9), dots (.), hyphens (-), and underscores (_).

    Note: These snapshots (stored locally) cannot be replicated to other sites.
    Figure. Take Snapshot Window

  • To add this VM to a recovery plan you created previously, select Add to Recovery Plan . For more information, see Adding Guest VMs Individually to a Recovery Plan in the Leap Administration Guide .
  • To create VM recovery point, select Create Recovery Point .
  • To run a playbook you created previously, select Run Playbook . For more information, see Running a Playbook (Manual Trigger) in the Prism Central Guide .
  • To assign the VM a category value, select Manage Categories .

    This displays the Manage VM Categories page. For more information, see Assigning a Category in the Prism Central Guide .

  • To install Nutanix Guest Tools (NGT), select Install NGT . For more information, see Installing NGT on Multiple VMs in the Prism Central Guide .
  • To enable (or disable) NGT, select Manage NGT Applications . For more information, see Managing NGT Applications in the Prism Central Guide .
    The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
    Note: If you clone a VM, NGT is not enabled on the cloned VM by default. You need to enable and mount NGT again on the cloned VM. If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and Mounting the NGT Installer on Cloned VMs in the Prism Web Console Guide .

    If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the following nCLI command.

    ncli> ngt mount vm-id=virtual_machine_id

    For example, to mount the NGT on the VM with VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the following command.

    ncli> ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
    Note:
    • Self-service restore feature is not enabled by default on a VM. You must manually enable the self-service restore feature.
    • If you have created the NGT ISO CD-ROMs on AOS 4.6 or earlier releases, the NGT functionality will not work even if you upgrade your cluster because REST APIs have been disabled. You must unmount the ISO, remount the ISO, install the NGT software again, and then upgrade to a later AOS version.
  • To upgrade NGT, select Upgrade NGT . For more information, see Upgrading NGT in the Prism Central Guide .
  • To configure quality of service (QoS) settings, select Set QoS Attributes . For more information, see Setting QoS for an Individual VM in the Prism Central Guide .

VM Management using Prism Element

You can create and manage a VM on your ESXi from Prism Element. For more information, see Creating a VM (ESXi) and Managing a VM (ESXi).

Creating a VM (ESXi)

In ESXi clusters, you can create a new virtual machine (VM) through the web console.

Before you begin

  • See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism Web Console Guide before proceeding.
  • Register the vCenter Server with your cluster. For more information, see Registering a Cluster to the vCenter Server.

About this task

When creating a VM, you can configure all of its components, such as number of vCPUs and memory, but you cannot attach a volume group to the VM.

To create a VM, do the following:

Procedure

  1. In the VM dashboard , click the Create VM button.
    The Create VM dialog box appears.
  2. Do the following in the indicated fields:
    1. Name : Enter a name for the VM.
    2. Description (optional): Enter a description for the VM.
    3. Guest OS : Type and select the guest operating system.
      The guest operating system that you select affects the supported devices and number of virtual CPUs available for the virtual machine. The Create VM wizard does not install the guest operating system. For information about the list of supported operating systems, see VM Management through Prism Element (ESXi) in the Prism Web Console Guide .
    4. vCPU(s) : Enter the number of virtual CPUs to allocate to this VM.
    5. Number of Cores per vCPU : Enter the number of cores assigned to each virtual CPU.
    6. Memory : Enter the amount of memory (in GiBs) to allocate to this VM.
  3. To attach a disk to the VM, click the Add New Disk button.
    The Add Disks dialog box appears.
    Figure. Add Disk Dialog Box

    Do the following in the indicated fields:
    1. Type : Select the type of storage device, DISK or CD-ROM , from the pull-down list.
      The following fields and options vary depending on whether you choose DISK or CD-ROM .
    2. Operation : Specify the device contents from the pull-down list.
      • Select Clone from ADSF file to copy any file from the cluster that can be used as an image onto the disk.
      • Select Allocate on Storage Container to allocate space without specifying an image. (This option appears only when DISK is selected in the previous field.) Selecting this option means you are allocating space only. You have to provide a system image later from a CD-ROM or other source.
    3. Bus Type : Select the bus type from the pull-down list. The choices are IDE or SCSI .
    4. ADSF Path : Enter the path to the desired system image.
      This field appears only when Clone from ADSF file is selected. It specifies the image to copy. Enter the path name as /storage_container_name/vmdk_name.vmdk . For example, to clone an image from myvm-flat.vmdk in a storage container named crt1 , enter /crt1/myvm-flat.vmdk . When a user types the storage container name ( /storage_container_name/ ), a list appears of the VMDK files in that storage container (assuming one or more VMDK files had previously been copied to that storage container).
      Note: Make sure you are copying from a flat file.
    5. Storage Container : Select the storage container to use from the pull-down list.
      This field appears only when Allocate on Storage Container is selected. The list includes all storage containers created for this cluster.
    6. Size : Enter the disk size in GiBs.
    7. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the Create VM dialog box.
    8. Repeat this step to attach more devices to the VM.
  4. To create a network interface for the VM, click the Add New NIC button.
    The Create NIC dialog box appears. Do the following in the indicated fields:
    1. VLAN Name : Select the target virtual LAN from the pull-down list.
      The list includes all defined networks. For more information, see Network Configuration for VM Interfaces in the Prism Web Console Guide .
    2. Network Adapter Type : Select the network adapter type from the pull-down list.

      For information about the list of supported adapter types, see VM Management through Prism Element (ESXi) in the Prism Web Console Guide .

    3. Network UUID : This is a read-only field that displays the network UUID.
    4. Network Address/Prefix : This is a read-only field that displays the network IP address and prefix.
    5. When all the field entries are correct, click the Add button to create a network interface for the VM and return to the Create VM dialog box.
    6. Repeat this step to create more network interfaces for the VM.
  5. When all the field entries are correct, click the Save button to create the VM and close the Create VM dialog box.
    The new VM appears in the VM table view. For more information, see VM Table View in the Prism Web Console Guide .

Managing a VM (ESXi)

You can use the web console to manage virtual machines (VMs) in the ESXi clusters.

Before you begin

  • See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism Web Console Guide before proceeding.
  • Ensure that you have registered the vCenter Server with your cluster. For more information, see Registering a Cluster to the vCenter Server.

About this task

After creating a VM, you can use the web console to manage guest tools, power operations, suspend, launch a VM console window, update the VM configuration, clone the VM, or delete the VM. To accomplish one or more of these tasks, do the following:

Note: Your available options depend on the VM status, type, and your permissions; actions that do not apply to the selected VM are unavailable.

Procedure

  1. In the VM dashboard , click the Table view.
  2. Select the target VM in the table (top section of screen).
    The summary line (middle of screen) displays the VM name with a set of relevant action links on the right. You can also right-click on a VM to select a relevant action.

    The possible actions are Manage Guest Tools , Launch Console , Power on (or Power off actions ), Suspend (or Resume ), Clone , Update , and Delete . The following steps describe how to perform each action.

    Figure. VM Action Links

  3. To manage guest tools, click Manage Guest Tools and do the following.
    You can also enable NGT applications (self-service restore, volume snapshot service, and application-consistent snapshots) as part of managing guest tools.
    1. Select the Enable Nutanix Guest Tools check box to enable NGT on the selected VM.
    2. Select Mount Nutanix Guest Tools to mount NGT on the selected VM.
      Ensure that the VM has at least one empty IDE CD-ROM or SATA slot to attach the ISO.

      The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
    3. To enable the self-service restore feature for Windows VMs, select the Self Service Restore (SSR) check box.
      The self-service restore feature is enabled on the VM. The guest VM administrator can restore the desired file or files from the VM. For more information about the self-service restore feature, see Self-Service Restore in the Data Protection and Recovery with Prism Element guide.

    4. After you select the Enable Nutanix Guest Tools check box, the VSS and application-consistent snapshot feature is enabled by default.
      After this feature is enabled, the Nutanix native in-guest VmQuiesced Snapshot Service (VSS) agent is used to take application-consistent snapshots for all the VMs that support VSS. This mechanism takes application-consistent snapshots without any VM stuns (temporarily unresponsive VMs) and also enables third-party backup providers like Commvault and Rubrik to take application-consistent snapshots on the Nutanix platform in a hypervisor-agnostic manner. For more information, see Conditions for Application-consistent Snapshots in the Data Protection and Recovery with Prism Element guide.

    5. To mount VMware guest tools, select the Mount VMware Guest Tools check box.
      The VMware guest tools are mounted on the VM.
      Note: You can mount both VMware guest tools and Nutanix Guest Tools at the same time on a particular VM provided the VM has sufficient empty CD-ROM slots.
    6. Click Submit .
      The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
      Note:
      • If you clone a VM, NGT is not enabled on the cloned VM by default. If the cloned VM is powered off, enable NGT from the UI and start the VM. If the cloned VM is powered on, enable NGT from the UI and restart the Nutanix Guest Agent service.
      • If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and Mounting the NGT Installer on Cloned VMs in the Prism Web Console Guide .
      If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the following nCLI command.
      ncli> ngt mount vm-id=virtual_machine_id

      For example, to mount the NGT on the VM with VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the following command.

      ncli> ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
      Caution: In AOS 4.6, for the powered-on Linux VMs on AHV, ensure that the NGT ISO is ejected or unmounted within the guest VM before disabling NGT by using the web console. This issue is specific for 4.6 version and does not occur from AOS 4.6.x or later releases.
      Note: If you created the NGT ISO CD-ROMs on a release earlier than AOS 4.6, the NGT functionality does not work even if you upgrade your cluster because REST APIs have been disabled. You must unmount the ISO, remount the ISO, install the NGT software again, and then upgrade to AOS 4.6 or a later version.
  4. To launch a VM console window, click the Launch Console action link.
    This opens a virtual network computing (VNC) client and displays the console in a new tab or window. This option is available only when the VM is powered on. The VM power options that you access from the Power Off Actions action link below the VM table can also be accessed from the VNC console window. To access the VM power options, click the Power button at the top-right corner of the console window.
    Note: A VNC client may not function properly on all browsers. Some keys are not recognized when the browser is Google Chrome. (Firefox typically works best.)
  5. To start (or shut down) the VM, click the Power on (or Power off ) action link.

    Power on begins immediately. If you want to shut down the VMs, you are prompted to select one of the following options:

    • Power Off . Hypervisor performs a hard shut down action on the VM.
    • Reset . Hypervisor performs an ACPI reset action through the BIOS on the VM.
    • Guest Shutdown . Operating system of the VM performs a graceful shutdown.
    • Guest Reboot . Operating system of the VM performs a graceful restart.
    Note: The Guest Shutdown and Guest Reboot options are available only when VMware guest tools are installed.
  6. To pause (or resume) the VM, click the Suspend (or Resume ) action link. This option is available only when the VM is powered on.
  7. To clone the VM, click the Clone action link.
    This displays the Clone VM dialog box, which includes the same fields as the Create VM dialog box but with all fields (except the name) filled in with the current VM settings and number of clones needed. Enter a name for the clone and number of clones of the VM that are required and then click the Save button to create the clone.
    Figure. Clone VM Dialog Box

    Note: In the Clone window, you cannot update the disks and network interfaces.
  8. To modify the VM configuration, click the Update action link.
    The Update VM dialog box appears, which includes the same fields as the Create VM dialog box. Modify the configuration as needed (see Creating a VM (ESXi)), and in addition you can enable Flash Mode for the VM.
    Note: If you delete a vDisk attached to a VM and snapshots associated with this VM exist, space associated with that vDisk is not reclaimed unless you also delete the VM snapshots.
    1. Click the Enable Flash Mode check box.
      • After you enable this feature on the VM, the status is updated in the VM table view. To view the status of individual virtual disks (disks that are flashed to the SSD), go to the Virtual Disks tab in the VM table view.
      • You can disable the Flash Mode feature for individual virtual disks. To update the Flash Mode for individual virtual disks, click the update disk icon in the Disks pane and deselect the Enable Flash Mode check box.
  9. To delete the VM, click the Delete action link. A window prompt appears; click the OK button to delete the VM.
    The deleted VM disappears from the list of VMs in the table. You can also delete a VM that is already powered on.

VM Migration

You can migrate a VM to an ESXi host in a Nutanix cluster. Usually the migration is done in the following cases.

  • Migrate VMs from existing storage platform to Nutanix.
  • Keep VMs running during disruptive upgrade or other downtime of Nutanix cluster.

When you migrate VMs between Nutanix clusters running vSphere, the source host and NFS datastore are the ones currently running the VM. The target host and NFS datastore are the ones where the VM runs after migration. The target ESXi host and datastore must be part of a Nutanix cluster.

To accomplish this migration, mount the NFS datastores from the target cluster on the source host. After the migration is complete, unmount the datastores and block access.
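
A hedged sketch of the mount and unmount steps from the ESXi shell is shown below. The values source_subnet/netmask, target_cvm_ip, container_name, and datastore_name are placeholders, and the allowlist command is an assumption; the target Nutanix cluster must allow the source ESXi host IP addresses through its filesystem allowlist before the mount succeeds.

nutanix@cvm$ ncli cluster add-to-nfs-whitelist ip-subnet-masks=source_subnet/netmask
root@esx# esxcli storage nfs add -H target_cvm_ip -s /container_name -v datastore_name
root@esx# esxcli storage nfs remove -v datastore_name

Run the allowlist command on the target cluster, the mount command on each source host, and the remove command after the migration completes.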

Migrating a VM to Another Nutanix Cluster

Before you begin

Before migrating a VM to another Nutanix cluster running vSphere, verify that you have provisioned the target Nutanix environment.

About this task

The shared storage feature in vSphere allows you to move both compute and storage resources from the source legacy environment to the target Nutanix environment at the same time without disruption. This feature also removes the need to configure file system allowlists on Nutanix.

You can use the shared storage feature through the migration wizard in the web client.

Procedure

  1. Log on to vCenter with the web client.
  2. Select the VM that you want to migrate.
  3. Right-click the VM and select Migrate .
  4. Under Select Migration Type , select Change both compute resource and storage .
  5. Select Compute Resource and then Storage and click Next .
    If necessary, change the disk format to the one that you want to use during the migration process.
  6. Select a destination network for all VM network adapters and click Next .
  7. Click Finish .
    Wait for the migration process to complete. The process performs the storage vMotion first, and then creates a temporary storage network over vmk0 for the period where the disk files are on Nutanix.

Cloning a VM

About this task

To clone a VM, you must enable the Nutanix VAAI plug-in. For steps to enable and verify the Nutanix VAAI plug-in, see KB-1868.
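
As a quick, hedged way to confirm that the Nutanix VAAI plug-in is installed on a host, you can list the installed VIBs from the ESXi shell; the exact VIB name varies by release, so treat the filter string as an assumption and follow KB-1868 for the authoritative procedure.

root@esx# esxcli software vib list | grep -i vaai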

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the VM and select Clone .
  3. Follow the wizard to enter a name for the clone, select a cluster, and select a host.
  4. Select the datastore that contains the source VM and click Next .
    Note: If you choose a datastore other than the one that contains the source VM, the clone operation uses the VMware implementation and not the Nutanix VAAI plug-in.
  5. If desired, set the guest customization parameters. Otherwise, proceed to the next step.
  6. Click Finish .

vStorage APIs for Array Integration

To improve the vSphere cloning process, Nutanix provides a vStorage API for array integration (VAAI) plug-in. This plug-in is installed by default during the Nutanix factory process.

Without the Nutanix VAAI plug-in, the process of creating a full clone takes a significant amount of time because all the data that comprises a VM is duplicated. This duplication also results in an increase in storage consumption.

The Nutanix VAAI plug-in efficiently makes full clones without reserving space for the clone. Read requests for blocks shared between parent and clone are sent to the original vDisk that was created for the parent VM. As the clone VM writes new blocks, the Nutanix file system allocates storage for those blocks. This data management occurs completely at the storage layer, so the ESXi host sees a single file with the full capacity that was allocated when the clone was created.
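
If you want to confirm that the plug-in is present on a host, one quick check is to list the installed VIBs from the ESXi shell and look for the Nutanix VAAI entry. This is a sketch only; the exact VIB name can vary by release, so treat KB-1868 as the authoritative verification procedure.

root@esx# esxcli software vib list | grep -i vaai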

vSphere ESXi Hardening Settings

Configure the following settings in /etc/ssh/sshd_config to harden an ESXi hypervisor in a Nutanix cluster.
Caution: When hardening ESXi security, some settings may impact operations of a Nutanix cluster.
HostbasedAuthentication no
PermitTunnel no
AcceptEnv
GatewayPorts no
Compression no
StrictModes yes
KerberosAuthentication no
GSSAPIAuthentication no
PermitUserEnvironment no
PermitEmptyPasswords no
PermitRootLogin no

Match Address x.x.x.11,x.x.x.12,x.x.x.13,x.x.x.14,192.168.5.0/24
PermitRootLogin yes
PasswordAuthentication yes
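
After editing /etc/ssh/sshd_config on a host, restart the SSH service for the hardening settings to take effect. For example, from the ESXi shell (verify the service restart method against the documentation for your ESXi version):

root@esx# /etc/init.d/SSH restart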

ESXi Host 1-Click Upgrade

You can upgrade your host either automatically through Prism Element (1-click upgrade) or manually. For more information about automatic and manual upgrades, see ESXi Upgrade and ESXi Host Manual Upgrade respectively.

This section describes the Nutanix hypervisor support policy for vSphere and Hyper-V hypervisor releases. Nutanix provides hypervisor compatibility and support statements that you should review before planning an upgrade to a new release or applying a hypervisor update or patch:
  • Compatibility and Interoperability Matrix
  • Hypervisor Support Policy- See Support Policies and FAQs for the supported Acropolis hypervisors.

Review the Nutanix Field Advisory page also for critical issues that Nutanix may have uncovered with the hypervisor release being considered.

Note: You may need to log in to the Support Portal to view the links above.

The Acropolis Upgrade Guide provides steps that can be used to upgrade the hypervisor hosts. However, as noted in the documentation, the customer is responsible for reviewing the guidance from VMware or Microsoft, respectively, on other component compatibility and upgrade order (e.g. vCenter), which needs to be planned first.

ESXi Upgrade

This topic describes how to upgrade your ESXi hypervisor host through the Prism Element web console Upgrade Software feature (also known as 1-click upgrade). To install or upgrade VMware vCenter Server or other third-party software, see your vendor documentation.

AOS supports ESXi hypervisor upgrades that you can apply through the web console Upgrade Software feature (also known as 1-click upgrade).

You can view the available upgrade options, start an upgrade, and monitor upgrade progress through the web console. In the main menu, click the gear icon, and then select Upgrade Software in the Settings panel that appears, to see the current status of your software versions (and start an upgrade if warranted).

VMware ESXi Hypervisor Upgrade Recommendations and Limitations

  • To install or upgrade VMware vCenter Server or other third-party software, see your vendor documentation.
  • Always consult the VMware web site for any vCenter and hypervisor installation dependencies. For example, a hypervisor version might require that you upgrade vCenter first.
  • If you have not enabled DRS in your environment and want to upgrade the ESXi host, you need to upgrade the ESXi host manually. For more information about upgrading ESXi hosts manually, see ESXi Host Manual Upgrade.
  • Disable Admission Control before upgrading ESXi on AOS; if it is enabled, the upgrade process fails. You can keep Admission Control enabled during normal cluster operation; disable it only for the upgrade.
Nutanix Support for ESXi Upgrades
Nutanix qualifies specific VMware ESXi hypervisor updates and provides a related JSON metadata upgrade file on the Nutanix Support Portal for one-click upgrade through the Prism web console Software Upgrade feature.

Nutanix does not provide ESXi binary files, only related JSON metadata upgrade files. Obtain ESXi offline bundles (not ISOs) from the VMware web site.

Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases. See the Nutanix hypervisor support statement in our Support FAQ. For updates that are made available by VMware that do not have a Nutanix-provided JSON metadata upgrade file, obtain the offline bundle and md5sum checksum available from VMware, then use the web console Software Upgrade feature to upgrade ESXi.
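
Before uploading an offline bundle, it is good practice to confirm that the file was not corrupted in transit by comparing its MD5 hash against the checksum published by VMware. One way to do this from a PowerShell prompt (the bundle file name below is only an example placeholder):

> Get-FileHash -Algorithm MD5 -Path .\VMware-ESXi-offline-bundle.zip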

Mixing nodes with different processor (CPU) types in the same cluster
If you are mixing nodes with different processor (CPU) types in the same cluster, you must enable enhanced vMotion compatibility (EVC) to allow vMotion/live migration of VMs during the hypervisor upgrade. For example, if your cluster includes a node with a Haswell CPU and other nodes with Broadwell CPUs, open vCenter, enable the VMware enhanced vMotion compatibility (EVC) setting, and specifically enable EVC for Intel hosts.
Enhanced vMotion Compatibility (EVC)

AOS Controller VMs and Prism Central VMs require a minimum CPU micro-architecture version of Intel Sandy Bridge. For AOS clusters with ESXi hosts, or when deploying Prism Central VMs on any ESXi cluster: if you have set the vSphere cluster enhanced vMotion compatibility (EVC) level, the minimum level must be L4 - Sandy Bridge .

vCenter
Note: You might be unable to log in to vCenter Server as the /storage/seat partition for vCenter Server version 7.0 and later might become full due to a large number of SSH-related events. See KB 10830 at the Nutanix Support portal for symptoms and solutions to this issue.
  • If your cluster is running the ESXi hypervisor and is also managed by VMware vCenter, you must provide vCenter administrator credentials and vCenter IP address as an extra step before upgrading. Ensure that ports 80 / 443 are open between your cluster and your vCenter instance to successfully upgrade.
  • If You Have Just Registered Your Cluster in vCenter. Do not perform any cluster upgrades (AOS, Controller VM memory, hypervisor, and so on) if you have just registered your cluster in vCenter. Wait at least 1 hour before performing upgrades to allow cluster settings to become updated. Also do not register the cluster in vCenter and perform any upgrades at the same time.
  • Cluster Mapped to Two vCenters. Upgrading software through the web console (1-click upgrade) does not support configurations where a cluster is mapped to two vCenters or where it includes host-affinity must rules for VMs.

    Ensure that enough cluster resources are available for live migration to occur and to allow hosts to enter maintenance mode.

  • Do not deploy ESXi 6.5 on Nutanix clusters running AOS 5.x versions if you require or want to configure VMware fault tolerance (FT). Nutanix engineering has discovered and is aware of VMware FT compatibility issues in the ESXi 6.5 release, which have been reported to VMware.
Mixing Different Hypervisor Versions
For ESXi hosts, mixing different hypervisor versions in the same cluster is temporarily allowed for deferring a hypervisor upgrade as part of an add-node/expand cluster operation, reimaging a node as part of a break-fix procedure, planned migrations, and similar temporary operations.

Upgrading ESXi Hosts by Uploading Binary and Metadata Files

Before you begin

About this task

Do the following steps to download Nutanix-qualified ESXi metadata .JSON files and upgrade the ESXi hosts through Upgrade Software in the Prism Element web console. Nutanix does not provide ESXi binary files, only related JSON metadata upgrade files.

Procedure

  1. Before performing any upgrade procedure, make sure you are running the latest version of the Nutanix Cluster Check (NCC) health checks and upgrade NCC if necessary.
  2. Run NCC as described in Run NCC Checks .
  3. Log on to the Nutanix support portal and navigate to the Hypervisors Support page from the Downloads menu, then download the Nutanix-qualified ESXi metadata .JSON files to your local machine or media.
    1. The default view is All . From the drop-down menu, select Nutanix - VMware ESXi , which shows all available JSON versions.
    2. From the release drop-down menu, select the available ESXi version. For example, 7.0.0 u2a .
    3. Click Download to download the Nutanix-qualified ESXi metadata .JSON file.
    Figure. Downloads Page for ESXi Metadata JSON
  4. Log on to the Prism Element web console for any node in the cluster.
  5. Click the gear icon in the main menu, select Upgrade Software in the Settings page, and then click the Hypervisor tab.
  6. Click the upload the Hypervisor binary link.
  7. Click Choose File for the metadata JSON (obtained from Nutanix) and binary files (obtained from VMware), respectively, browse to the file locations, select the file, and click Upload Now .
  8. When the file upload is completed, click Upgrade > Upgrade Now , then click Yes to confirm.
    [optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on without upgrading, click Upgrade > Pre-upgrade . These checks also run as part of the upgrade procedure.
  9. Type your vCenter IP address and credentials, then click Upgrade .
    Ensure that you are using your Active Directory or LDAP credentials in the form of domain\username or username@domain .
    Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the Upgrade option is not displayed, because the software is already installed.
    The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation checks and uploads, through the Progress Monitor .

Upgrading ESXi by Uploading an Offline Bundle File and Checksum

About this task

  • Do the following steps to download a non-Nutanix-qualified (patch) ESXi upgrade offline bundle from VMware, then upgrade ESXi through Upgrade Software in the Prism Element web console.
  • Typically you perform this procedure to patch your version of ESXi and Nutanix has not yet officially qualified that new patch version. Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases.

Procedure

  1. From the VMware web site, download the offline bundle (for example, update-from-esxi6.0-6.0_update02.zip ) and copy the associated MD5 checksum. Ensure that this checksum is obtained from the VMware web site, not manually generated from the bundle by you.
  2. Save the files to your local machine or media, such as a USB drive or other portable media.
  3. Log on to the Prism Element web console for any node in the cluster.
  4. Click the gear icon in the main menu of the Prism Element web console, select Upgrade Software in the Settings page, and then click the Hypervisor tab.
  5. Click the upload the Hypervisor binary link.
  6. Click enter md5 checksum and copy the MD5 checksum into the Hypervisor MD5 Checksum field.
  7. Scroll down and click Choose File for the binary file, browse to the offline bundle file location, select the file, and click Upload Now .
    Figure. ESXi 1-Click Upgrade, Unqualified Bundle
  8. When the file upload is completed, click Upgrade > Upgrade Now , then click Yes to confirm.
    [optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on without upgrading, click Upgrade > Pre-upgrade . These checks also run as part of the upgrade procedure.
  9. Type your vCenter IP address and credentials, then click Upgrade .
    Ensure that you are using your Active Directory or LDAP credentials in the form of domain\username or username@domain .
    Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the Upgrade option is not displayed, because the software is already installed.
    The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation checks and uploads, through the Progress Monitor .
  10. After the upgrade is complete, click Inventory > Perform Inventory to enable LCM to check, update and display the inventory information.
    For more information, see Performing Inventory With LCM in the Acropolis Upgrade Guide .

ESXi Host Manual Upgrade

If you have not enabled DRS in your environment and want to upgrade the ESXi host, you must upgrade the ESXi host manually. This topic describes all the requirements that you must meet before manually upgrading the ESXi host.

Tip: If you have enabled DRS and want to upgrade the ESXi host, use the one-click upgrade procedure from the Prism web console. For more information on the one-click upgrade procedure, see the ESXi Upgrade.

Nutanix supports the ability to patch upgrade the ESXi hosts with the versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases. See the Nutanix hypervisor support statement in our Support FAQ.

Because ESXi hosts with different versions can co-exist in a single Nutanix cluster, upgrading ESXi does not require cluster downtime.

  • If you want to avoid cluster interruption, you must complete upgrading a host and ensure that the CVM is running before upgrading any other host. When two hosts in a cluster are down at the same time, all the data is unavailable.
  • If you want to minimize the duration of the upgrade activities and cluster downtime is acceptable, you can stop the cluster and upgrade all hosts at the same time.
Warning: By default, Nutanix clusters have redundancy factor 2, which means they can tolerate the failure of a single node or drive. Nutanix clusters with a configured option of redundancy factor 3 allow the Nutanix cluster to withstand the failure of two nodes or drives in different blocks.
  • Never shut down or restart multiple Controller VMs or hosts simultaneously.
  • Always run the cluster status command to verify that all Controller VMs are up before performing a Controller VM or host shutdown or restart.

ESXi Host Upgrade Process

Perform the following process to upgrade ESXi hosts in your environment.

Prerequisites and Requirements

Note: Use the following process only if you do not have DRS enabled in your Nutanix cluster.
  • If you are upgrading all nodes in the cluster at once, shut down all guest VMs and stop the cluster with the cluster stop command.
    Caution: There is downtime if you upgrade all the nodes in the Nutanix cluster at once. If you do not want downtime in your environment, you must ensure that only one CVM is shut down at a time in a redundancy factor 2 configuration.
  • If you are upgrading the nodes while keeping the cluster running, ensure that all nodes are up by logging on to a CVM and running the cluster status command. If any nodes are not running, start them before proceeding with the upgrade. Shut down all guest VMs on the node or migrate them to other nodes in the Nutanix cluster.
  • Disable email alerts in the web console under Email Alert Services or with the nCLI command.
    ncli> alerts update-alert-config enable=false
  • Run the complete NCC health check by using the health check command.
    nutanix@cvm$ ncc health_checks run_all
  • Run the cluster status command to verify that all Controller VMs are up and running, before performing a Controller VM or host shutdown or restart.
    nutanix@cvm$ cluster status
  • Place the host in the maintenance mode by using the web client.
  • Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
    Note: Do not reset or shutdown the CVM in any way other than the cvm_shutdown command to ensure that the cluster is aware that the CVM is unavailable.
  • Start the upgrade by following the vSphere Upgrade Guide or by using vCenter Update Manager (VUM).

Upgrading ESXi Host

  • See the VMware Documentation for information about the standard ESXi upgrade procedures. If any problem occurs with the upgrade process, an alert is raised in the Alert dashboard.
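
For reference, a manual upgrade with an offline bundle is typically driven from the ESXi shell with esxcli. The following is a hedged sketch only; the depot path and profile name are placeholders, the host must already be in maintenance mode with its CVM shut down, and the VMware documentation remains the authoritative procedure.

root@esx# # List the image profiles contained in the offline bundle
root@esx# esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-offline-bundle.zip
root@esx# # Apply the chosen profile, then reboot the host
root@esx# esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-offline-bundle.zip -p ESXi-x.x.x-standard
root@esx# reboot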

Post Upgrade

Run the complete NCC health check by using the following command.

nutanix@cvm$ ncc health_checks run_all

vSphere Cluster Settings Checklist

Review the following checklist of the settings that you have to configure to successfully deploy a vSphere virtual environment running on the Nutanix Enterprise Cloud.

vSphere Availability Settings

  • Enable host monitoring.
  • Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster.

    For more information about the percentage of cluster resources reserved as failover spare capacity, see vSphere HA Admission Control Settings for Nutanix Environment.

  • Set the VM Restart Priority of all CVMs to Disabled .
  • Set the Host Isolation Response of the cluster to Power Off & Restart VMs .
  • Set the VM Monitoring for all CVMs to Disabled .
  • Enable datastore heartbeats by clicking Use datastores only from the specified list and choosing the Nutanix NFS datastore.

    If the cluster has only one datastore, click Advanced Options tab and add das.ignoreInsufficientHbDatastore with Value of true .

vSphere DRS Settings

  • Set the Automation Level on all CVMs to Disabled .
  • Select Automation Level to accept level 3 recommendations.
  • Leave power management disabled.

Other Cluster Settings

  • Configure advertised capacity for the Nutanix storage container (total usable capacity minus the capacity of one node for replication factor 2, or two nodes for replication factor 3). For example, in a hypothetical four-node RF2 cluster with 20 TB of usable capacity per node (80 TB total), the advertised capacity would be 80 TB - 20 TB = 60 TB.
  • Store VM swapfiles in the same directory as the VM.
  • Enable enhanced vMotion compatibility (EVC) in the cluster. For more information, see vSphere EVC Settings.
  • Configure Nutanix CVMs with the appropriate VM overrides. For more information, see VM Override Settings.
  • Check Nonconfigurable ESXi Components. Modifying the nonconfigurable components may inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.

Hyper-V Administration for Acropolis

AOS 6.5

Product Release Date: 2022-07-25

Last updated: 2022-09-20

Node Management

Logging on to a Controller VM

If you need to access a Controller VM on a host that has not been added to SCVMM or Hyper-V Manager, use this method.

Procedure

  1. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
  2. Log on to the Controller VM.
    > ssh nutanix@192.168.5.254

    Accept the host authenticity warning if prompted.

Placing the Controller VM and Hyper-V Host in Maintenance Mode

It is recommended that you place the Controller VM and Hyper-V host into maintenance mode when performing any maintenance or patch installation for the cluster.

Before you begin

Migrate the VMs that are running on the node to other nodes in the cluster.

About this task

Caution: Verify the data resiliency status of your cluster. You can only place one node in maintenance mode for each cluster.

To place the Controller VM and Hyper-V host in maintenance mode, do the following.

Procedure

  1. Log on to the Controller VM with SSH and get the CVM host ID.
    nutanix@cvm$ ncli host ls
  2. Run the following command to place the CVM in maintenance mode.
    nutanix@cvm$ ncli host edit id=host_id enable-maintenance-mode=true
    Replace host_id with the CVM host ID
  3. Log on to the Hyper-V host with Remote Desktop Connection and pause the Hyper-V host in the failover cluster using PowerShell.
    > Suspend-ClusterNode

Shutting Down a Node in a Cluster (Hyper-V)

Shut down a node in a Hyper-V cluster.

Before you begin

Shut down guest VMs that are running on the node, or move them to other nodes in the cluster.

In a Hyper-V cluster, you do not need to put the node in maintenance mode before you shut down the node. The steps to shut down the guest VMs running on the node or moving them to another node, and shutting down the CVM are adequate.

About this task

Caution: Verify the data resiliency status of your cluster. If the cluster only has replication factor 2 (RF2), you can only shut down one node for each cluster. If an RF2 cluster would have more than one node shut down, shut down the entire cluster.

Perform the following procedure to shut down a node in a Hyper-V cluster.

Procedure

  1. Log on to the Controller VM with SSH and shut down the Controller VM.
    nutanix@cvm$ cvm_shutdown -P now
    Note:

    Always use the cvm_shutdown command to reset, or shutdown the Controller VM. The cvm_shutdown command notifies the cluster that the Controller VM is unavailable.

  2. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
  3. Do one of the following to shut down the node.
    • > shutdown /s /t 0
    • > Stop-Computer -ComputerName localhost

    See the Microsoft documentation for up-to-date and additional details about how to shut down a Hyper-V node.

Starting a Node in a Cluster (Hyper-V)

After you start or restart a node in a Hyper-V cluster, verify if the Controller VM (CVM) is powered on and if the CVM is added to the metadata.

About this task

Perform the following steps to start a node in a Hyper-V cluster.

Procedure

  1. Power on the node. Do one of the following:
    • Press the power button on the front of the physical hardware server.
    • Use a remote tool such as iDRAC, iLO, or IPMI depending on your hardware.
  2. Log on to Hyper-V Manager and start PowerShell.
  3. Determine if the Controller VM is running.
    > Get-VM | Where {$_.Name -match 'NTNX.*CVM'}
    • If the Controller VM is off, a line similar to the following should be returned:
      NTNX-13SM35230026-C-CVM Stopped -           -             - Opera...

      Make a note of the Controller VM name in the second column.

    • If the Controller VM is on, a line similar to the following should be returned:
      NTNX-13SM35230026-C-CVM Running 2           16384             05:10:51 Opera...
  4. If the CVM is not powered on, power on the CVM by using Hyper-V Manager.
  5. Log on to the CVM with SSH and verify if the CVM is added back to the metadata.
    nutanix@cvm$ nodetool -h 0 ring

    The state of the IP address of the CVM you started must be Normal as shown in the following output.

    nutanix@cvm$ nodetool -h 0 ring
    Address         Status State      Load            Owns    Token                                                          
                                                              kV0000000000000000000000000000000000000000000000000000000000   
    XX.XXX.XXX.XXX  Up     Normal     1.84 GB         25.00%  000000000000000000000000000000000000000000000000000000000000   
    XX.XXX.XXX.XXX  Up     Normal     1.79 GB         25.00%  FV0000000000000000000000000000000000000000000000000000000000   
    XX.XXX.XXX.XXX  Up     Normal     825.49 MB       25.00%  V00000000000000000000000000000000000000000000000000000000000   
    XX.XXX.XXX.XXX  Up     Normal     1.87 GB         25.00%  kV0000000000000000000000000000000000000000000000000000000000
  6. Power on or failback the guest VMs by using Hyper-V Manager or Failover Cluster Manager.

Enabling 1 GbE Interfaces (Hyper-V)

If 10 GbE networking is specified during cluster setup, 1 GbE interfaces are disabled on Hyper-V nodes. Follow these steps if you need to enable the 1 GbE interfaces later.

About this task

To enable the 1 GbE interfaces, do the following on each host:

Procedure

  1. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
  2. List the network adapters.
    > Get-NetAdapter | Format-List Name,InterfaceDescription,LinkSpeed

    Output similar to the following is displayed.

    Name                 : vEthernet (InternalSwitch)
    InterfaceDescription : Hyper-V Virtual Ethernet Adapter #3
    LinkSpeed            : 10 Gbps
    
    Name                 : vEthernet (ExternalSwitch)
    InterfaceDescription : Hyper-V Virtual Ethernet Adapter #2
    LinkSpeed            : 10 Gbps
    
    Name                 : Ethernet
    InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection
    LinkSpeed            : 10 Gbps
    
    Name                 : Ethernet 3
    InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection #2
    LinkSpeed            : 10 Gbps
    
    Name                 : NetAdapterTeam
    InterfaceDescription : Microsoft Network Adapter Multiplexor Driver
    LinkSpeed            : 20 Gbps
    
    Name                 : Ethernet 4
    InterfaceDescription : Intel(R) I350 Gigabit Network Connection #2
    LinkSpeed            : 0 bps
    
    Name                 : Ethernet 2
    InterfaceDescription : Intel(R) I350 Gigabit Network Connection
    LinkSpeed            : 1 Gbps

    Make a note of the Name of the 1 GbE interfaces you want to enable.

  3. Configure the interface.

    Replace interface_name with the name of the 1 GbE interface as reported by Get-NetAdapter .

    1. Enable the interface.
      > Enable-NetAdapter -Name "interface_name"
    2. Add the interface to the NIC team.
      > Add-NetLBFOTeamMember -Team NetAdapterTeam -Name "interface_name"

      If you want to configure the interface as a standby for the 10 GbE interfaces, include the parameter -AdministrativeMode Standby

    Perform these steps once for each 1 GbE interface you want to enable.
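
As an illustration, the following sketch enables the two 1 GbE interfaces shown in the sample output above ( Ethernet 2 and Ethernet 4 ) and adds them to the team as standby members; adjust the adapter names to match your own Get-NetAdapter output.

> Enable-NetAdapter -Name "Ethernet 2"
> Add-NetLbfoTeamMember -Team "NetAdapterTeam" -Name "Ethernet 2" -AdministrativeMode Standby
> Enable-NetAdapter -Name "Ethernet 4"
> Add-NetLbfoTeamMember -Team "NetAdapterTeam" -Name "Ethernet 4" -AdministrativeMode Standby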

Changing the Hyper-V Host Password

The cluster software needs to be able to log into each host as Admin to perform standard cluster operations, such as querying the status of VMs in the cluster. Therefore, after changing the Administrator password it is critical to update the cluster configuration with the new password.

About this task

Tip: Although it is not required for the Administrator user to have the same password on all hosts, doing so makes cluster management and support much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

Procedure

  1. Change the Admin password of all hosts.
    Perform these steps on every Hyper-V host in the cluster.
    1. Log on to the Hyper-V host with Remote Desktop Connection.
    2. Press Ctrl+Alt+End to display the management screen.
    3. Click Change a Password .
    4. Enter the old password and the new password in the specified fields and click the right arrow button.
    5. Click Ok to acknowledge the password change.
  2. Update the Administrator user password for all hosts in the cluster configuration.
    Warning: If you do not perform this step, the web console no longer shows correct statistics and alerts, and other cluster operations fail.
    1. Log on to any CVM in the cluster using SSH.
    2. Find the host IDs.

      On the clusters running the AOS release 4.5.x, type:

      nutanix@cvm$ ncli host list | grep -E 'ID|Hypervisor Address'

      On the clusters running the AOS release 4.6.x or later, type:

      nutanix@cvm$ ncli host list | grep -E 'Id|Hypervisor Address'

      Note the host ID for each hypervisor host.

    3. Update the hypervisor host password.
      nutanix@cvm$ ncli managementserver edit name=host_addr \
       password='host_password' 
      nutanix@cvm$ ncli host edit id=host_id \
       hypervisor-password='host_password'
      • Replace host_addr with the IP address of the hypervisor host.
      • Replace host_id with a host ID you determined in the preceding step.
      • Replace host_password with the Admin password on the corresponding hypervisor host.

      Perform this step for every hypervisor host in the cluster.

Changing a Host IP Address

Perform these steps once for every hypervisor host in the cluster. Complete the entire procedure on a host before proceeding to the next host.

Before you begin

Remove the host from the failover cluster and domain before changing the host IP address.

Procedure

  1. Configure networking on the node by following Configuring Host Networking for Hyper-V Manually.
  2. Log on to every Controller VM in the cluster and restart genesis.
    nutanix@cvm$ genesis restart

    If the restart is successful, output similar to the following is displayed.

    Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
    Genesis started on pids [30378, 30379, 30380, 30381, 30403]

Changing the VLAN ID for Controller VM

About this task

Perform the following procedure to change the VLAN ID of the Controller VM.

Procedure

  1. Log on to the Hyper-V host with the IPMI remote console and run the following PowerShell command to get the VLAN settings configured.
    > Get-VMNetworkAdapterVlan
  2. Change the VLAN ID.
    > Set-VMNetworkAdapterVlan -VMName cvm_name -VMNetworkAdapterName External -Access -VlanID vlan_ID
    Replace cvm_name with the name of the Nutanix Controller VM.

    Replace vlan_ID with the new VLAN ID.

    Note: The VM name of the Nutanix Controller VM must begin with NTNX-

Configuring VLAN for Hyper-V Host

About this task

Perform the following procedure to configure Hyper-V host VLANs.

Procedure

  1. Log on to the Hyper-V host with the IPMI remote console.
  2. Start a PowerShell prompt and run the following command to create a variable for the ExternalSwitch.
    > $netAdapter = Get-VMNetworkAdapter -Name "ExternalSwitch" -ManagementOS
  3. Set a new VLAN ID for the ExternalSwitch.
    > Set-VMNetworkAdapterVlan -VMNetworkAdapter $netAdapter -Access -VlanId vlan_ID
    Replace vlan_ID with the new VLAN ID.
    You can now communicate to the Hyper-V host on the new subnet.
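
To confirm the change, you can list the VLAN configuration of the management OS adapters again; the ExternalSwitch entry should show the new VLAN ID in Access mode.

> Get-VMNetworkAdapterVlan -ManagementOS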

Configuring Host Networking for Hyper-V Manually

Perform the following procedure to manually configure the Hyper-V host networking.

About this task

Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.

Procedure

  1. Log on to the Hyper-V host with the IPMI remote console and start a Powershell prompt.
  2. List the network adapters.
    > Get-NetAdapter | Format-List Name,InterfaceDescription,LinkSpeed

    Output similar to the following is displayed.

    Name                 : vEthernet (InternalSwitch)
    InterfaceDescription : Hyper-V Virtual Ethernet Adapter #3
    LinkSpeed            : 10 Gbps
    
    Name                 : vEthernet (ExternalSwitch)
    InterfaceDescription : Hyper-V Virtual Ethernet Adapter #2
    LinkSpeed            : 10 Gbps
    
    Name                 : Ethernet
    InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection
    LinkSpeed            : 10 Gbps
    
    Name                 : Ethernet 3
    InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection #2
    LinkSpeed            : 10 Gbps
    
    Name                 : NetAdapterTeam
    InterfaceDescription : Microsoft Network Adapter Multiplexor Driver
    LinkSpeed            : 20 Gbps
    
    Name                 : Ethernet 4
    InterfaceDescription : Intel(R) I350 Gigabit Network Connection #2
    LinkSpeed            : 0 bps
    
    Name                 : Ethernet 2
    InterfaceDescription : Intel(R) I350 Gigabit Network Connection
    LinkSpeed            : 0 bps

    Make a note of the InterfaceDescription for the vEthernet adapter that links to the physical interface you want to modify.

  3. Start the Server Configuration utility.
    > sconfig
  4. Select Networking Settings by typing 8 and pressing Enter .
  5. Change the IP settings.
    1. Select a network adapter by typing the Index number of the adapter you want to change (refer to the InterfaceDescription you found in step 2) and pressing Enter .
      Warning: Do not select the network adapter with the IP address 192.168.5.1 . This IP address is required for the Controller VM to communicate with the host.
    2. Select Set Network Adapter Address by typing 1 and pressing Enter .
    3. Select Static by typing S and pressing Enter .
    4. Enter the IP address for the host and press Enter .
    5. Enter the subnet mask and press Enter .
    6. Enter the IP address for the default gateway and press Enter .
      The host networking settings are changed.
  6. (Optional) Change the DNS servers.
    DNS servers must be configured for a host to be part of a domain. You can either change the DNS servers in the sconfig utility or with setup_hyperv.py .
    1. Select Set DNS Servers by typing 2 .
    2. Enter the primary and secondary DNS servers and press Enter .
      The DNS servers are updated.
  7. Exit the Server Configuration utility by typing 4 and pressing Enter then 15 and pressing Enter .
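
If you prefer to set the same values from PowerShell instead of the sconfig menus, the following sketch shows the equivalent cmdlets. The interface alias, addresses, and DNS servers are placeholder examples; apply them only to the external management adapter, never to the internal adapter that uses 192.168.5.1.

> # Assign a static IP address, subnet mask (prefix length), and default gateway
> New-NetIPAddress -InterfaceAlias "vEthernet (ExternalSwitch)" -IPAddress 10.10.10.21 -PrefixLength 24 -DefaultGateway 10.10.10.1
> # Configure the DNS servers (required for the host to join the domain)
> Set-DnsClientServerAddress -InterfaceAlias "vEthernet (ExternalSwitch)" -ServerAddresses 10.10.10.5,10.10.10.6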

Joining a Host to a Domain Manually

About this task

For information about how to join a host to a domain by using utilities provided by Nutanix, see Joining the Cluster and Hosts to a Domain . Perform these steps for each Hyper-V host in the cluster to manually join a host to a domain.

Procedure

  1. Log on to the Hyper-V host with the IPMI remote console and start a Powershell prompt.
  2. Join the host to the domain and rename it.
    > Add-Computer -DomainName domain_name -NewName node_name `
     -Credential domain_name\domain_admin_user -Restart -Force
    • Replace domain_name with the name of the domain for the host to join.
    • Replace node_name with a new name for the host.
    • Replace domain_admin_user with the domain administrator username.
    The host restarts and joins the domain.

Changing CVM Memory Configuration (Hyper-V)

About this task

You can increase the memory reserved for each Controller VM in your cluster by using the 1-click Controller VM Memory Upgrade available from the Prism Element web console. Increase memory size depending on the workload type or to enable certain AOS features. For more information about CVM memory sizing recommendations and instructions about how to increase the CVM memory, see Increasing the Controller VM Memory Size in the Prism Web Console Guide .

Hyper-V Configuration

Before configuring Nutanix storage on Hyper-V, you need to ensure that you meet the requirements of Hyper-V installation. For more information, see Hyper-V Installation Requirements. After you configure all the pre-requisites for installing and setting up Hyper-V, you need to join the Hyper-V cluster and its constituent hosts to the domain and then create a failover cluster.

Hyper-V Installation Requirements

Ensure that the following requirements are met before installing Hyper-V.

Windows Active Directory Domain Controller

Requirements:

  • For a fresh installation, you need a version of Nutanix Foundation that is compatible with the version of Windows Server you want to install.
    Note: To install Windows Server 2016, you need Foundation 3.11.2 or later. For more information, see the Field Installation Guide.
  • The primary domain controller version must at least be 2008 R2.
    Note: If you have Volume Shadow Copy Service (VSS) based backup tool (for example Veeam), functional level of Active Directory must be 2008 or higher.
  • Active Directory Web Services (ADWS) must be installed and running. By default, connections are made over TCP port 9389, and firewall policies must enable an exception on this port for ADWS.

    To test that ADWS is installed and running on a domain controller, log on to a Windows host (other than the domain controller) that is joined to the same domain and has the RSAT-AD-PowerShell feature installed, using a domain administrator account, and run the following PowerShell command. If the command prints the name of the primary domain controller, then ADWS is installed and the port is open.

> (Get-ADDomainController).Name
  • The domain controller must run a DNS server.
    Note: If any of the above requirements are not met, you need to manually create an Active Directory computer object for the Nutanix storage in the Active Directory, and add a DNS entry for the name.
  • Ensure that the Active Directory domain is configured correctly for consistent time synchronization.
  • Place the AD server in a separate virtual or physical host residing in storage that is not dependent on the domains that the AD server manages.
    Note: Do not run a virtual Active Directory domain controller (DC) on a Nutanix Hyper-V cluster and join the cluster to the same domain.

Accounts and Privileges:

  • An Active Directory account with permission to create new Active Directory computer objects for either a storage container or Organizational Unit (OU) where Nutanix nodes are placed. The credentials of this account are not stored anywhere.
  • An account that has sufficient privileges to join a Windows host to a domain. The credentials of this account are not stored anywhere. These credentials are only used to join the hosts to the domain.

Additional Information Required:

  • The IP address of the primary domain controller.
    Note: The primary domain controller IP address is set as the primary DNS server on all the Nutanix hosts. It is also set as the NTP server in the Nutanix storage cluster to keep the Controller VM, host, and Active Directory time synchronized.
  • The fully qualified domain name to which the Nutanix hosts and the storage cluster is going to be joined.

SCVMM

Note: Relevant only if you have SCVMM in your environment.

Requirements:

  • The SCVMM version must be at least 2016 and it must be installed on Windows Server 2016. If you have SCVMM on an earlier release, upgrade it to 2016 before you register a Nutanix cluster running Hyper-V.
  • Kerberos authentication for storage is optional for Windows Server 2012 R2 (see Enabling Kerberos for Hyper-V), but it is required for Windows Server 2016. However, for Kerberos authentication to work with Windows Server 2016, the active directory server must reside outside the Nutanix cluster.
  • The SCVMM server must allow PowerShell remoting.

    To test this scenario, log on to a Windows host other than the SCVMM host (for example, the domain controller) by using the SCVMM administrator account, and run the following PowerShell command. If the command prints the name of the SCVMM server, then PowerShell remoting to the SCVMM server is not blocked.

    > Invoke-Command -ComputerName scvmm_server -ScriptBlock {hostname} -Credential MYDOMAIN\username

    Replace scvmm_server with the SCVMM host name and MYDOMAIN with Active Directory domain name.

    Note: If the SCVMM server does not allow PowerShell remoting, you can perform the SCVMM setup manually by using the SCVMM user interface.
  • The ipconfig command must run in a PowerShell window on the SCVMM server. To verify, run the following command.

    > Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential MYDOMAIN\username

    Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with Active Directory domain name.

  • The SMB client configuration in the SCVMM server should have RequireSecuritySignature set to False. To verify, run the following command.

    > Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration | FL RequireSecuritySignature}

    Replace scvmm_server_name with the SCVMM host name.

    This value can be set to True by a domain policy; in that case, modify the domain policy to set it to False. You can also change the value from True back to False locally, but the change might not persist if a policy reverts it to True. To change it, run the following command in PowerShell on the SCVMM host, logged on as a domain administrator.

    Set-SMBClientConfiguration -RequireSecuritySignature $False -Force

    If you are changing it from True to False, confirm that the policies applied to the SCVMM host have the correct value. On the SCVMM host, run rsop.msc to review the resultant set of policy details, and verify the value by navigating to Servername > Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options: Policy Microsoft network client: Digitally sign communications (always). The value displayed in RSOP must be Disabled or Not Defined for the change to persist. If RSOP shows the value as Enabled, update the group policies configured in the domain that apply to the SCVMM server to change it to Disabled; otherwise, RequireSecuritySignature changes back to True at a later time. After setting the policy in Active Directory and propagating it to the domain controllers, refresh the SCVMM server policy by running the command gpupdate /force . Confirm in RSOP that the value is Disabled .
    Note: If security signing is mandatory, then you need to enable Kerberos in the Nutanix cluster. In this case, it is important to ensure that the time remains synchronized between the Active Directory server, the Nutanix hosts, and the Nutanix Controller VMs. The Nutanix hosts and the Controller VMs set their NTP server as the Active Directory server, so it should be sufficient to ensure that Active Directory domain is configured correctly for consistent time synchronization.

Accounts and Privileges:

  • When adding a host or a cluster to the SCVMM, the run-as account you are specifying for managing the host or cluster must be different from the service account that was used to install SCVMM.
  • The run-as account must be a domain account and must have local administrator privileges on the Nutanix hosts. This can be a domain administrator account. When the Nutanix hosts are joined to the domain, domain administrator accounts automatically take administrator privileges on the host. If the domain account used as the run-as account in SCVMM is not a domain administrator account, you need to manually add it to the list of local administrators on each host by running sconfig .
    • SCVMM domain account with administrator privileges on SCVMM and PowerShell remote execution privileges.
  • If you want to install SCVMM server, a service account with local administrator privileges on the SCVMM server.

IP Addresses

  • One IP address for each Nutanix host.
  • One IP address for each Nutanix Controller VM.
  • One IP address for each Nutanix host IPMI interface.
  • One IP address for the Nutanix storage cluster.
  • One IP address for the Hyper-V failover cluster.
Note: For N nodes, (3*N + 2) IP addresses are required; for example, a four-node cluster requires 14 IP addresses. All IP addresses must be in the same subnet.

DNS Requirements

  • Each Nutanix host must be assigned a name of 15 characters or less, which gets automatically added to the DNS server during domain joining.
  • The Nutanix storage cluster needs to be assigned a name of 15 characters or less, which must be added to the DNS server when the storage cluster is joined to the domain.
  • The Hyper-V failover cluster must be assigned a name of 15 characters or less, which gets automatically added to the DNS server when the failover cluster is created.
  • After the Hyper-V configuration, all names must be resolvable to an IP address from the Nutanix hosts, the SCVMM server (if applicable), and any other host that needs access to the Nutanix storage, for example, a host running Hyper-V Manager.

Storage Access Requirements

  • Virtual machine and virtual disk paths must always refer to the Nutanix storage cluster by name, not the external IP address. If you use the IP address, it directs all the I/O to a single node in the cluster and thereby compromises performance and scalability.
    Note: For external non-Nutanix host that needs to access Nutanix SMB shares, see Nutanix SMB Shares Connection Requirements from Outside the Cluster topic.

Host Maintenance Requirements

  • When applying Windows updates to the Nutanix hosts, restart the hosts one at a time, ensuring that Nutanix services come up fully in the Controller VM of the restarted host before updating the next host. This can be accomplished by using Cluster Aware Updating and a Nutanix-provided script, which can be plugged into the Cluster Aware Update Manager as a pre-update script. This pre-update script ensures that the Nutanix services go down on only one host at a time, ensuring availability of storage throughout the update procedure. For more information about cluster-aware updating, see Installing Windows Updates with Cluster-Aware Updating.
    Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the domain policies.

General Host Requirements

  • Hyper-V hosts must have the remote script execution policy set at least to RemoteSigned . A Restricted setting might cause issues when you reboot the CVM.
Note: Nutanix supports the installation of language packs for Hyper-V hosts.
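
You can check the current policy and, if needed, set it to RemoteSigned from an elevated PowerShell prompt on the host. A minimal example:

> Get-ExecutionPolicy
> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine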

Limitations and Guidelines

Nutanix clusters running Hyper-V have the following limitations. Certain limitations might be attributable to other software or hardware vendors:

Guidelines

Hyper-V 2016 Clusters and Support for Windows Server 2016
  • VHD Set files (.vhds) are a new shared virtual disk model for guest clusters in Windows Server 2016 and are not supported. You can import existing shared .vhdx disks to Windows Server 2016 clusters. New VHDX format sharing is supported. Only fixed-size VHDX sharing is supported.

    Use the PowerShell Add-VMHardDiskDrive command to attach any existing or new VHDX file in shared mode to VMs. For example: Add-VMHardDiskDrive -VMName Node1 -Path \\gogo\smbcontainer\TestDisk\Shared.vhdx -SupportPersistentReservations .

Upgrading Hyper-V Hypervisor Hosts
  • When upgrading hosts to Hyper-V 2016, 2019, and later versions, the local administrator user name and password are reset to the default administrator name Administrator and password of nutanix/4u. Any previous changes to the administrator name or password are overwritten.
General Guidelines
  • Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.
  • If you are destroying a cluster and creating a new one and want to reuse the hostnames, failover cluster name, and storage object name of the previous cluster, remove their computer accounts and objects from AD and DNS first.

Limitations

  • Intel Advanced Network Services (ANS) is not compatible with Load Balancing and Failover (LBFO), the built-in NIC teaming feature in Hyper-V. For more information, see the Intel support article, Teaming with Intel® Advanced Network Services .
  • Nutanix does not support the online resizing of the shared virtual hard disks (VHDX files).

Configuration Scenarios

After using Foundation to create a cluster, you can use the Nutanix web console to join the Hyper-V cluster and its constituent hosts to the domain, create the Hyper-V failover cluster, and enable Kerberos.

Note: If you are installing Windows Server 2016, you do not have to enable Kerberos. Kerberos is enabled during cluster creation.

You can then use the setup_hyperv.py script to add host and storage to SCVMM, configure a Nutanix library share in SCVMM, and register Nutanix storage containers as file shares in SCVMM.

Note: You can use the setup_hyperv.py script only with a standalone SCVMM instance. The script does not work with an SCVMM cluster.

The usage of the setup_hyperv.py script is as follows.

nutanix@cvm$ setup_hyperv.py flags command
commands:
register_shares
setup_scvmm

Nonconfigurable Components

The components listed here are configured by the Nutanix manufacturing and installation processes. Do not modify any of these components except under the direction of Nutanix Support.

Nutanix Software

Modifying any of the following Nutanix software settings may inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.

  • Local datastore name.
  • Configuration and contents of any CVM (except memory configuration to enable certain features).
Important: Note the following important considerations about CVMs.
  • Do not delete the Nutanix CVM.
  • Do not take a snapshot of the CVM for backup.
  • Do not rename, modify, or delete the admin and nutanix user accounts of the CVM.
  • Do not create additional CVM user accounts.

    Use the default accounts ( admin or nutanix ), or use sudo to elevate to the root account.

  • Do not decrease CVM memory below recommended minimum amounts required for cluster and add-in features.

    Nutanix Cluster Checks (NCC), preupgrade cluster checks, and the AOS upgrade process detect and monitor CVM memory.

  • Nutanix does not support the usage of third-party storage on the host part of Nutanix clusters.

    Normal cluster operations might be affected if there are connectivity issues with the third-party storage you attach to the hosts in a Nutanix cluster.

  • Do not run any commands on a CVM that are not in the Nutanix documentation.

Hyper-V Settings

  • Cluster name (using the web console)
  • Controller VM name
  • Controller VM virtual hardware configuration file (.xml file in Hyper-V version 2012 R2 and earlier and .vmcx file in Hyper-V version 2016 and later). Each AOS version and upgrade includes a specific Controller VM virtual hardware configuration. Therefore, do not edit or otherwise modify the Controller VM virtual hardware configuration file.
  • Host name (you can configure the host name only at the time of creating and expanding the cluster)
  • Internal switch settings (internal virtual switch and internal virtual network adapter) and external network adapter name

    Two virtual switches are created on the Nutanix host, ExternalSwitch and InternalSwitch. Two virtual network adapters are created on the host corresponding to these virtual switches, vEthernet (ExternalSwitch) and vEthernet (InternalSwitch).

    Note: Do not delete these switches and adapters. Do not change the names of the internal virtual switch, internal virtual network adapter, and external virtual network adapter. You can change the name of the external virtual switch. For more information about changing the name of the external virtual switch, see Updating the Cluster After Renaming the Hyper-V External Virtual Switch.
  • Windows roles and features

    No new Windows roles or features must be installed on the Nutanix hosts. This especially includes the Multipath IO feature, which can cause the Nutanix storage to become unavailable.

    Do not apply GPOs to the Nutanix nodes that impact the Log on as a service policy. It is recommended that you do not remove the following default entries.

    NT Service\All Services

    NT Virtual Machine\Virtual Machines

  • Note: This best practice helps keep the host operating system free of roles, features, and applications that aren't required to run Hyper-V. For more information, see the Hyper-V should be the only enabled role document in the Microsoft documentation portal.
  • Controller VM pre-configured VM setting of Automatic Start Action
  • Controller VM high-availability setting
  • Controller VM operations: migrating, saving state, or taking checkpoints of the Controller VM

Adding the Cluster and Hosts to a Domain

After completing foundation of the cluster, you need to add the cluster and its constituent hosts to the Active Directory (AD) domain. Adding the cluster and hosts to the domain facilitates centralized administration and security through Microsoft services such as Group Policy, and enables administrators to manage the distribution of updates and hotfixes.

Before you begin

  • If you have a VLAN segmented network, verify that you have assigned the VLAN tags to the Hyper-V hosts and Controller VMs. For information about how to configure VLANs for the Controller VM, see the Advanced Setup Guide.
  • Ensure that you have valid credentials of the domain account that has the privileges to create a new computer account or modify an existing computer account in the Active Directory domain. An Active Directory domain created by using non-ASCII text may not be supported. For more information about usage of ASCII or non-ASCII text in Active Directory configuration, see Internationalization (i18n) .

Procedure

  1. Log on to the web console by using one of the Controller VM IP addresses or by using the cluster virtual IP address.
  2. Click the gear icon in the main menu and select Join Cluster and Hosts to the Domain on the Settings page.
    Figure. Join Cluster and Hosts to the Domain

  3. Enter the fully qualified name of the domain that you want to join the cluster and its constituent hosts to in the Full Domain Name box.
  4. Enter the IP address of the name server in the Name Server IP Address box that can resolve the domain name that you have entered in the Full Domain Name box.
  5. In the Base OU Path box, type the OU (organizational unit) path where the computer accounts must be stored after the host joins a domain. For example, if the organization is nutanix.com and the OU is Documentation, the Base OU Path can be specified as OU=Documentation,DC=nutanix,DC=com
    Specifying the Base OU Path is optional. When you specify the Base OU Path, the computer accounts are stored in the Base OU Path within the Active Directory after the hosts join a domain. If the Base OU Path is not specified, the computer accounts are stored in the default Computers OU.
  6. Enter a name for the cluster in the Nutanix Cluster Name box.
    The maximum length of the cluster name should not be more than 15 characters and it should be a valid NetBIOS name.
  7. Enter the virtual IP address of the cluster in the Nutanix Cluster Virtual IP Address box.
    If you have not already configured the virtual IP address of the cluster, you can configure it by using this box.
  8. Enter the prefix that should be used to name the hosts (according to your convention) in the Prefix box.
    • The prefix name should not end with a period.
    • The maximum length of the prefix name should not be more than 11 characters.
    • The prefix must be a valid NetBIOS name.

      For example, if you enter prefix name as Tulip, the hosts are named as Tulip-1, Tulip-2, and so on, in the increasing order of the external IP address of the hosts.

    If you do not provide any prefix, the default name of NTNX- block-number is used. Click Advanced View to see the expanded view of all the hosts in all the blocks of the cluster and to rename them individually.
  9. In the Credentials field, enter the logon name and password of the domain account that has the privileges to create a new or modify an existing computer accounts in the Active Directory domain.
    Ensure that the logon name is in the DOMAIN\USERNAME format. The cluster and its constituent hosts require these credentials to join the AD domain. Nutanix does not store the credentials.
  10. When all the information is correct, click Join .
    The cluster is added to the domain. Also, all the hosts are renamed, added to the domain, and restarted. Allow the hosts and Controller VMs a few minutes to start up. After the cluster is ready, the logon page is displayed.

What to do next

Create a Microsoft failover cluster. For more information, see Creating a Failover Cluster for Hyper-V.

Creating a Failover Cluster for Hyper-V

Before you begin

Perform the following tasks before you create a failover cluster:

Perform the following procedure to create a failover cluster that includes all the hosts in the cluster.

Procedure

  1. Log on to the Prism Element web console by using one of the Controller VM IP addresses or by using the cluster virtual IP address.
  2. Click the gear icon in the main menu and select Configure Failover Cluster from the Settings page.
    Figure. Configure Failover Cluster

  3. Type the failover cluster name in the Failover Cluster Name text box.
    The maximum length of the failover cluster name must not be more than 15 characters and must be a valid NetBIOS name.
  4. Type an IP address for the Hyper-V failover cluster in the Failover Cluster IP Address text box.
    This address is for the cluster of Hyper-V hosts that are currently being configured. It must be unique, different from the cluster virtual IP address and from all other IP addresses assigned to the hosts and Controller VMs. It must be in the same network range as the Hyper-V hosts.
  5. In the Credentials field, type the logon name and password of the domain account that has the privileges to create a new account or modify existing accounts in the Active Directory domain.
    The logon name must be in the format DOMAIN\USERNAME . The credentials are required to create a failover cluster. Nutanix does not store the credentials.
  6. Click Create Cluster .
    A failover cluster is created by the name that has been provided and it includes all the hosts in the cluster.
    For information on manually creating a failover cluster, see Manually Creating a Failover Cluster (SCVMM User Interface).

Manually Creating a Failover Cluster (SCVMM User Interface)

Join the hosts to the domain as described in Adding the Cluster and Hosts to a Domain in the Hyper-V Administration for Acropolis guide.

About this task

Perform the following procedure to manually create a failover cluster for Hyper-V by using System Center VM Manager (SCVMM).

If you are not using SCVMM or are using Hyper-V Manager, see Creating a Failover Cluster for Hyper-V.

Procedure

  1. Start the Failover Cluster Manager utility.
  2. Right-click and select Create Cluster , and click Next .
  3. Enter all the hosts that you want to add to the Failover cluster, and click Next .
  4. Select the No. I do not require support from Microsoft for this cluster, and therefore do not want to run the validation tests. When I click Next continue creating the cluster option, and click Next .
    Note:

    If you select Yes , two tests fail when you run the cluster validation tests. The tests fail because the internal network adapter on each host is configured with the same IP address (192.168.5.1). The network validation tests fail with the following error message:

    Duplicate IP address

    The duplicate IP address failure is a false positive: the internal network is reachable only within a host, so the internal adapter can have the same IP address on different hosts. The second test, Validate Network Communication, fails due to the presence of the internal network adapter. Both failures are benign and can be ignored.

  5. Enter a name for the cluster, specify a static IP address, and click Next .
  6. Clear the Add all eligible storage to the cluster check box, and click Next .
  7. Wait until the cluster is created. After you receive the message that the cluster is successfully created, click Finish to exit the Cluster Creation wizard.
  8. Go to Networks in the cluster tree, select Cluster Network 1 , and ensure that it is the internal network by verifying the IP address in the summary pane. The subnet must be 192.168.5.0/24 as shown in the following screenshot.
    Figure. Failover Cluster Manager Click to enlarge

  9. Click the Action tab on the toolbar and select Live Migration Settings .
  10. Remove Cluster Network 1 from Networks for Live Migration and click OK .
    Note: If you do not perform this step, live migrations fail because the internal network is added to the live migration network lists. Log on to SCVMM, add the cluster to SCVMM, check the host migration setting, and ensure that the internal network is not listed.
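
The preceding steps use the Failover Cluster Manager wizard. As a minimal PowerShell sketch of the same operation, assuming hypothetical host names (HOST-1, HOST-2, and HOST-3) and an example static IP address, the -NoStorage switch corresponds to clearing the eligible storage check box in the wizard.

    > New-Cluster -Name cluster_name -Node HOST-1,HOST-2,HOST-3 -StaticAddress 10.4.36.200 -NoStorage

Replace cluster_name with the failover cluster name, and replace the host names and IP address with your own values. After the cluster is created, remove the internal network from the live migration networks as described in steps 9 and 10.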

Changing the Failover Cluster IP Address

About this task

Perform the following procedure to change your Hyper-V failover cluster IP address.

Procedure

  1. Open Failover Cluster Manager and connect to your cluster.
  2. Enter the name of any one of the Hyper-V hosts and click OK .
  3. In the Failover Cluster Manager pane, select your cluster and expand Cluster Core Resources .
  4. Right-click the cluster, and select Properties > IP address .
  5. Change the IP address of your failover cluster using the Edit option and click OK .
  6. Click Apply .
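
Alternatively, you can change the address from PowerShell on any cluster node. The following is a minimal sketch that assumes the cluster IP address resource has the default name Cluster IP Address, the new address is in the same subnet, and 10.4.36.200 is an example value. Take the IP address resource offline and bring the cluster name resource back online for the change to take effect.

  > Get-ClusterResource -Name "Cluster IP Address" | Set-ClusterParameter -Name Address -Value 10.4.36.200
  > Stop-ClusterResource "Cluster IP Address"
  > Start-ClusterResource "Cluster Name"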

Enabling Kerberos for Hyper-V

If you are running Windows Server 2012 R2, perform the following procedure to configure Kerberos to secure the storage. You do not have to perform this procedure for Windows Server 2016 because Kerberos is enabled automatically during failover cluster creation.

Before you begin

  • Join the hosts to the domain as described in Adding the Cluster and Hosts to a Domain.
  • Verify that you have configured a service account for delegation. For more information on enabling delegation, see the Microsoft documentation .

Procedure

  1. Log on to the web console by using one of the Controller VM IP addresses or by using the cluster virtual IP address.
  2. Click the gear icon in the main menu and select Kerberos Management from the Settings page.
    Figure. Configure Failover Cluster Click to enlarge Enabling Kerberos

  3. Set the Kerberos Required option to enabled.
  4. In the Credentials field, type the logon name and password of the domain account that has the privileges to create and modify the virtual computer object representing the cluster in Active Directory. The credentials are required for enabling Kerberos.
    The logon name must be in the format DOMAIN\USERNAME . Nutanix does not store the credentials.
  5. Click Save .

Configuring the Hyper-V Computer Object by Using Kerberos

About this task

Perform the following procedure to complete the configuration of the Hyper-V Computer Object by using Kerberos and SMB signing (for enhanced security).
Note: Nutanix recommends you to configure Kerberos during a maintenance window to ensure cluster stability and prevent loss of storage access for user VMs.

Procedure

  1. Log on to Domain Controller and perform the following for each Hyper-V host computer object.
    1. Right-click the host object, and go to Properties . In the Delegation tab, select the Trust this computer for delegation to specified services only option, and select Use any authentication protocol .
    2. Click Add to add the cifs service of the Nutanix storage cluster object.
    Figure. Adding the cifs of the Nutanix storage cluster object Click to enlarge

  2. Check the Service Principal Name (SPN) of the Nutanix storage cluster object.
    > Setspn -l name_of_cluster_object

    Replace name_of_cluster_object with the name of the Nutanix storage cluster object.

    Output similar to the following is displayed.

    Figure. SPN Registration Click to enlarge

    If the SPN is not registered for the Nutanix storage cluster object, create the SPN by running the following commands.

    > Setspn -S cifs/name_of_cluster_object name_of_cluster_object
    > Setspn -S cifs/FQDN_of_the_cluster_object name_of_cluster_object

    Replace name_of_cluster_object with the name of the Nutanix storage cluster object and FQDN_of_the_cluster_object with the domain name of the Nutanix storage cluster object.

    Example

    > Setspn -S cifs/virat virat
    > Setspn -S cifs/virat.sre.local virat
    
  3. [Optional] To enable SMB signing feature, log on to each Hyper-V host by using RDP and run the following PowerShell command to change the Require Security Signature setting to True .
    > Set-SMBClientConfiguration -RequireSecuritySignature $True -Force
    Caution: The SMB server communicates only with SMB clients that can perform SMB packet signing. Therefore, if you enable the SMB signing feature, you must enable it on all the Hyper-V hosts in the cluster.
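
    Because the setting must match on all hosts, the following is a minimal sketch that applies it to every host at once by using PowerShell remoting; HOST-1, HOST-2, and HOST-3 are placeholder host names, and PowerShell remoting is assumed to be enabled.

    > Invoke-Command -ComputerName HOST-1,HOST-2,HOST-3 -ScriptBlock `
      { Set-SmbClientConfiguration -RequireSecuritySignature $true -Force }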

Disabling Kerberos for Hyper-V

Perform the following procedure to disable Kerberos.

Procedure

  1. Disable SMB signing.
    Log on to each Hyper-V host by using RDP and run the following PowerShell command to change the Require Security Signature setting to False .
    > Set-SMBClientConfiguration -RequireSecuritySignature $False -Force
  2. Disable Kerberos from the Prism web console.
    1. Log on to the web console by using one of the Controller VM IP addresses or by using the cluster virtual IP address.
    2. From the gear icon, click Kerberos Management .
    3. Set the Kerberos Required option to disabled.
    4. In the Credentials field, type the logon name and password of the domain account that has the privileges to create or modify the virtual computer object representing the cluster in Active Directory. The credentials are required for disabling Kerberos.
      This logon name must be in the format DOMAIN\USERNAME . Nutanix does not store the credentials.
    5. Click Save .

Setting Up Hyper-V Manager

Perform the following steps to set up Hyper-V Manager.

Before you begin

  • Add the server running Hyper-V Manager to the allowlist by using the Prism user interface. For more information, see Configuring a Filesystem Whitelist in the Prism Web Console Guide .
  • If Kerberos is enabled for accessing storage (by default it is disabled), enable SMB delegation.

Procedure

  1. Log into the Hyper-V Manager.
  2. Right-click the Hyper-V Manager and select Connect to Server .
  3. Type the name of the host that you want to add and click OK .
  4. Right-click the host and select Hyper-V Settings .
  5. Click Virtual Hard Disks and verify that the location to store virtual hard disk files is the same as the location that you specified during storage container creation.
    For more information, see Creating a Storage Container in the Prism Web Console Guide .
  6. Click Virtual Machines and verify that the location to store virtual machine configuration files is the same as the location that you specified during storage container creation.
    For more information, see Creating a Storage Container in the Prism Web Console Guide .
    After performing these steps, you are ready to create and manage virtual machines by using Hyper-V Manager.
    Warning: Virtual machines created by using Hyper-V must never be defined on storage by using an IP-based SMB share path.
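
    To confirm both locations from PowerShell instead of the Hyper-V Settings dialog box, you can query the host defaults; this is a read-only check that assumes you run it locally on the Hyper-V host with the Hyper-V PowerShell module installed.

    > Get-VMHost | Select-Object VirtualHardDiskPath, VirtualMachinePath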

Cluster Management

Installing Windows Updates with Cluster-Aware Updating

With storage containers that are configured with a replication factor of 2, Nutanix clusters can tolerate only a single node being down at a time. For such clusters, you need a way to update nodes one node at a time.

If your Nutanix cluster runs Microsoft Hyper-V, you can use the Cluster-Aware Updating (CAU) utility, which ensures that only one node is down at a time when Windows updates are applied.

Note: Nutanix does not recommend performing a manual patch installation for a Hyper-V cluster running on the Nutanix platform.

The procedure for configuring CAU for a Hyper-V cluster running on the Nutanix platform is the same as that for a Hyper-V cluster running on any other platform. However, for a Hyper-V cluster running on Nutanix, you need to use a Nutanix pre-update script created specifically for Nutanix clusters. The pre-update script ensures that the CAU utility does not proceed to the next node until the Controller VM on the node that was updated is fully back up, preventing a condition in which multiple Controller VMs are down at the same time.

The CAU utility might not install all the recommended updates, and you might have to install some updates manually. For a complete list of recommended updates, see the following articles in the Microsoft documentation portal.

  • Recommended hotfixes, updates, and known solutions for Windows Server 2012 R2 Hyper-V environments
  • Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters

Revisit these articles periodically and install any updates that are added to the list.

Note: Ensure that the Nutanix Controller VM and the Hyper-V host are placed in maintenance mode before any maintenance or patch installation. For more information, see Placing the Controller VM and Hyper-V Host in Maintenance Mode.

Preparing to Configure Cluster-Aware Updating

Configure your environment to run the Nutanix pre-update script for Cluster-Aware Updating. The Nutanix pre-update script is named cau_preupdate.ps1 and is, by default, located on each Hyper-V host in C:\Program Files\Nutanix\Utils\ . To ensure smooth configuration, make sure you have everything you need before you begin to configure CAU.

Before you begin

  • Review the required and recommended Windows updates for your cluster.
  • See the Microsoft documentation for information about the Cluster-Aware Updating feature. In particular, see the requirements and best practices for Cluster-Aware Updating in the Microsoft documentation portal.
  • To enable the migration of virtual machines from one node to another, configure the virtual machines for high availability.

About this task

To configure your environment to run the Nutanix pre-update script, do the following:

Procedure

  1. If you plan to use self-updating mode, do the following:
    1. On each Hyper-V host and on the management workstation that you are using to configure CAU, create a directory such that the path to the directory and the directory name do not contain spaces (for example, C:\cau ).
      Note: The location of the directory must be the same on the hosts and the management workstation.
    2. From C:\Program Files\Nutanix\Utils\ on each host, copy the Nutanix pre-update file cau_preupdate.ps1 to the directory you created on the hosts and on the management workstation.

    A directory whose path does not contain spaces is necessary because Microsoft does not support the use of spaces in the PreUpdateScript field. The space in the default path ( C:\Program Files\Nutanix\Utils\ ) prevents the cluster from updating itself in the self-updating mode. However, that space does not cause issues if you update the cluster by using the remote-updating mode. If you plan to use only the remote-updating mode, you can use the pre-update script from its default location. If you plan to use the self-updating mode or both self-updating and remote-updating modes, use a directory whose path does not contain spaces.

  2. On each host, do the following.
    1. Unblock the script file.
      > powershell.exe Unblock-File -Path 'path-to-pre-update-script'

      Replace path-to-pre-update-script with the full path to the pre-update script (for example, C:\cau\cau_preupdate.ps1 ).

    2. Allow Windows PowerShell to run unsigned code.
      > powershell.exe Set-ExecutionPolicy RemoteSigned
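
As a minimal sketch that combines the preparation steps above, the following stages cau_preupdate.ps1 in a C:\cau directory on each host over the administrative C$ share and then unblocks it remotely. HOST-1, HOST-2, and HOST-3 are placeholder host names, and the account running the commands is assumed to have administrative access to the hosts with PowerShell remoting enabled.

  > $cauHosts = @('HOST-1','HOST-2','HOST-3')
  > foreach ($h in $cauHosts) {
      New-Item -ItemType Directory -Path "\\$h\c$\cau" -Force | Out-Null
      Copy-Item 'C:\Program Files\Nutanix\Utils\cau_preupdate.ps1' -Destination "\\$h\c$\cau\"
      Invoke-Command -ComputerName $h -ScriptBlock { Unblock-File -Path 'C:\cau\cau_preupdate.ps1' }
    }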

Accessing the Cluster-Aware Updating Dialog Box

You configure CAU by using the Cluster-Aware Updating dialog box.

About this task

To access the Cluster-Aware Updating dialog box, do the following:

Procedure

  1. Open Failover Cluster Manager and connect to your cluster.
  2. In the Configure section, click Cluster-Aware Updating .
    Figure. Cluster-Aware Updating Dialog Box Click to enlarge "The Cluster-Aware Updating dialog box connects to a failover cluster. The dialog box displays the nodes in the cluster, a last update summary, logs of updates in progress, and links to CAU configuration options and wizards."

    The Cluster-Aware Updating dialog box appears. If the dialog box indicates that you are not connected to the cluster, in the Connect to a failover cluster field, enter the name of the cluster, and then click Connect .

Specifying the Nutanix Pre-Update Script in an Updating Run Profile

Specify the Nutanix pre-update script in an Updating Run and save the configuration to an Updating Run Profile in the XML format. This is a one-time task. The XML file contains the configuration for the cluster-update operation. You can reuse this file to drive cluster updates through both self-updating and remote-updating modes.

About this task

To specify the Nutanix pre-update script in an Updating Run Profile, do the following:

Procedure

  1. In the Cluster-Aware Updating dialog box, click Create or modify Updating Run Profile .
    You can see the current location of the XML file under the Updating Run profile to start from: field.
    Note: You cannot overwrite the default CAU configuration file, because non-local administrative users, including the AD administrative users, do not have permissions to modify files in the C:\Windows\System32\ directory.
  2. Click Save As .
  3. Select a new location for the file and rename the file. For example, you can rename the file to msfc_updating_run_profile.xml and save it to the following location: C:\Users\administrator\Documents .
  4. Click Save .
  5. In the Cluster-Aware Updating dialog box, under Cluster Actions , click Configure cluster self-updating options .
  6. Go to Input Settings > Advanced Options and, in the Updating Run options based on: field, click Browse to select the location to which you saved the XML file in an earlier step.
  7. In the Updating Run Profile Editor dialog box, in the PreUpdateScript field, specify the full path to the cau_preupdate.ps1 script. The default full path is C:\Program Files\Nutanix\Utils\cau_preupdate.ps1 . The default path is acceptable if you plan to use only the remote-updating mode. If you plan to use the self-updating mode, place cau_preupdate.ps1 in a directory such that the path does not include spaces. For more information, see Preparing to Configure Cluster-Aware Updating.
    Note: You can also place the script on the SMB file share if you can access the SMB file share from all your hosts and the workstation that you are configuring the CAU from.
  8. Click Save .
    Caution: Do not change the auto-populated ConfigurationName field value. Otherwise, the script fails.
    The CAU configuration is saved to an XML file in the following folder: C:\Windows\System32

What to do next

Save the Updating Run Profile to another location and use it for any other cluster updates.

Updating a Cluster by Using the Remote-Updating Mode

You can update the cluster by using the remote-updating mode to verify that CAU is configured and working correctly. You might need to use the remote-updating mode even when you have configured the self-updating mode, but mostly for updates that cannot wait until the next self-updating run.

About this task

Note: Do not turn off your workstation until all updates have been installed.
To update a cluster by using the remote-updating mode, do the following:

Procedure

  1. In the Cluster-Aware Updating dialog box, click Apply updates to this cluster .
    The Cluster-Aware Updating Wizard appears.
  2. Read the information on the Getting Started page, and then click Next .
  3. On the Advanced Options page, do the following.
    1. In the Updating Run options based on field, enter the full path to the CAU configuration file that you created in Specifying the Nutanix Pre-Update Script in an Updating Run Profile .
    2. Ensure that the full path to the downloaded script is shown in the PreUpdateScript field and that the value in the CauPluginName field is Microsoft.WindowsUpdatePlugin .
  4. On the Additional Update Options page, do the following.
    1. If you want to include recommended updates, select the Give me recommended updates the same way that I receive important updates check box.
    2. Click Next .
  5. On the Completion page, click Close .
    The update process begins.
  6. In the Cluster-Aware Updating dialog box, click the Log of Updates in Progress tab and monitor the update process.
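
If you prefer to start a remote Updating Run from PowerShell instead of the wizard, the following is a minimal sketch that assumes the CAU PowerShell module is installed on the workstation; msfo-tulip is the example failover cluster name used elsewhere in this guide, and C:\cau\cau_preupdate.ps1 is an example script location.

  > Invoke-CauRun -ClusterName msfo-tulip -CauPluginName Microsoft.WindowsUpdatePlugin `
    -PreUpdateScript 'C:\cau\cau_preupdate.ps1' -RequireAllNodesOnline -Force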

Updating a Cluster by Using the Self-Updating Mode

The self-updating mode ensures that the cluster is up-to-date at all times.

About this task

To configure the self-updating mode, do the following:

Procedure

  1. In the Cluster-Aware Updating dialog box, click Configure cluster self-updating options .
    The Configure Self-Updating Options Wizard appears.
  2. Read the information on the Getting Started page, and then click Next .
  3. On the Add Clustered Role page, do the following.
    1. Select the Add the CAU clustered role, with self-updating mode enabled, to this cluster check box.
    2. If you have a prestaged computer account, select the I have a prestaged computer object for the CAU clustered role check box. Otherwise, leave the check box clear.
  4. On the Self-updating schedule page, specify details such as the self-updating frequency and start date.
  5. On the Advanced Options page, do the following.
    1. In the Updating Run options based on field, enter the full path to the CAU configuration file that you created in Specifying the Nutanix Pre-Update Script in an Updating Run Profile .
    2. Ensure that the full path to the Nutanix pre-update script is shown in the PreUpdateScript field and that the value in the CauPluginName field is Microsoft.WindowsUpdatePlugin .
  6. On the Additional Update Options page, do the following.
    1. If you want to include recommended updates, select the Give me recommended updates the same way that I receive important updates check box.
    2. Click Next .
  7. Click Close .
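
The wizard adds the CAU clustered role with the options that you specified. As a minimal PowerShell sketch of the same configuration, assuming the CAU module is available, msfo-tulip as the example cluster name, an example schedule, and C:\cau\cau_preupdate.ps1 as the example script location:

  > Add-CauClusterRole -ClusterName msfo-tulip -DaysOfWeek Sunday -WeeksOfMonth 2 `
    -CauPluginName Microsoft.WindowsUpdatePlugin -PreUpdateScript 'C:\cau\cau_preupdate.ps1' `
    -EnableFirewallRules -Force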

Moving a Hyper-V Cluster to a Different Domain

This topic describes the supported procedure to move all the hosts on a Nutanix cluster running Hyper-V from one domain to another domain. For example, you might need to do this when you are ready to transition a test cluster to your production environment. Ensure that you merge all VM checkpoints before moving the cluster to another domain. The VMs fail to start in the new domain if they have multiple checkpoints.
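
As a minimal sketch of removing (and thereby merging) all checkpoints, assuming you run it on each Hyper-V host and wait for the disk merge operations to finish before proceeding:

  > Get-VM | Get-VMSnapshot | Remove-VMSnapshot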

Before you begin

This method involves cluster downtime. Therefore, schedule a maintenance window to perform the following operations.

Procedure

  1. Note: If you are using System Center Virtual Machine Manager (SCVMM) to manage the cluster, remove the cluster from the SCVMM console. Right-click the cluster in the SCVMM console, and select Remove .
    Destroy the Hyper-V failover cluster using the Failover Cluster Manager or PowerShell commands.
    Note:

    • Remove all the roles from the cluster before destroying the cluster by doing either of the following:
      • Open Failover Cluster Manager, and select Roles from the left navigation pane. Select all the VMs, and select Remove .
      • Log on to any Hyper-V host with domain administrator user credentials and remove the roles with the PowerShell command Get-ClusterGroup | Remove-ClusterGroup -RemoveResources -Force .
    • Destroying the cluster permanently removes any non-VM roles. The VMs are not affected; after the cluster is destroyed, they are visible only in Hyper-V Manager.
    Destroy the cluster by doing either of the following:
    • Open Failover Cluster Manager, right-click the cluster, and select More Actions > Destroy Cluster .
    • Log on to any Hyper-V host with domain administrator user credentials and remove the cluster with the PowerShell command Remove-Cluster -Force -CleanupAD , which ensures that all Active Directory objects (all hosts in the Nutanix cluster, the Hyper-V failover cluster object, and the Nutanix storage cluster object) and any corresponding entries are deleted.
  2. Log on to any controller VM in the cluster and remove the Nutanix cluster from the domain by using nCLI; ensure that you also specify the Active Directory administrator user name.
    nutanix@cvm$ ncli cluster unjoin-domain logon-name=domain\username
  3. Log on to each host as the domain administrator user and remove the domain security identifiers from the virtual machines.
    > $d = (Get-WMIObject Win32_ComputerSystem).Domain.Split(".")[0]
    > Get-VMConnectAccess | Where {$_.username.StartsWith("$d\")} | `
      Foreach {Revoke-VMConnectAccess -VMName * -UserName $_.UserName} 
  4. Caution:

    Ensure all the user VM's are powered off before performing this step.
    Log on to any controller VM in the cluster and remove all hosts in the Nutanix cluster from the domain.
    nutanix@cvm$ allssh 'source /etc/profile > /dev/null 2>&1; winsh "\$x=hostname; netdom \
      remove \$x /domain /force"'
  5. Restart all hosts.
  6. If a controller VM fails to restart, use the Repair-CVM Nutanix PowerShell cmdlet to help you recover from this issue. Otherwise, skip this step and perform the next step.
    1. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
    2. Start the controller VM repair process.
      > Repair-CVM
      The CVM will be shutdown. Proceed (Y/N)? Y

      Progress is displayed in the PowerShell command-line shell. When the process is complete, the controller VM configuration information is displayed:

      Using the following configuration:
      
      Name                           Value
      ----                           -----
      internal_adapter_name          Internal
      name                           cvm-host-name
      external_adapter_name          External
      processor_count                8
      memory_weight                  100
      svmboot_iso_path               C:\Program Files\Nutanix\Cvm\cvm_name\svmboot.iso
      nutanix_path                   C:\Program Files\Nutanix
      vm_repository                  C:\Users\Administrator\Virtual Machines
      internal_vswitch_name          InternalSwitch
      processor_weight               200
      external_vswitch_name          ExternalSwitch
      memory_size_bytes              12884901888
      pipe_name                      \\.\pipe\SVMPipe

What to do next

Add the hosts to the new domain as described in Adding the Cluster and Hosts to a Domain.

Recover a Controller VM by Using Repair-CVM

The Repair-CVM PowerShell cmdlet can repair an unusable or deleted Controller VM by removing the existing Controller VM (if present) and creating a new one. In the Nutanix enterprise cloud platform design, no data associated with the unusable or deleted Controller VM is lost.

About this task

If a Controller VM already exists and is running, Repair-CVM prompts you to shut down the Controller VM so it can be deleted and re-created. If the Controller VM has been deleted, the cmdlet creates a new one. In all cases, the new CVM automatically powers on and joins the cluster.

A Controller VM is considered unusable when:

  • The Controller VM is accidentally deleted.
  • The Controller VM configuration is accidentally or unintentionally changed and the original configuration parameters are unavailable.
  • The Controller VM fails to restart after unjoining the cluster from a Hyper-V domain as part of a domain move procedure.

To use the cmdlet, log on to the Hyper-V host, type Repair-CVM, and follow any prompts. The repair process creates a new Controller VM based on any available existing configuration information. If the process cannot find the information or the information does not exist, the cmdlet prompts you for:

  • Controller VM name
  • Controller VM memory size in GB
  • Number of processors to assign to the Controller VM
Note: After running this command, you must manually reapply any custom configuration that you performed earlier, for example, an increased RAM size.

Procedure

  1. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
  2. Start the controller VM repair process.
    > Repair-CVM
    The CVM will be shutdown. Proceed (Y/N)? Y

    Progress is displayed in the PowerShell command-line shell. When the process is complete, the controller VM configuration information is displayed:

    Using the following configuration:

    Name                 Value
    ----                           -----
    internal_adapter_name          Internal
    name                           cvm-host-name
    external_adapter_name          External
    processor_count                8
    memory_weight                  100
    svmboot_iso_path               C:\Program Files\Nutanix\Cvm\cvm_name\svmboot.iso
    nutanix_path                   C:\Program Files\Nutanix
    vm_repository                  C:\Users\Administrator\Virtual Machines
    internal_vswitch_name          InternalSwitch
    processor_weight               200
    external_vswitch_name          ExternalSwitch
    memory_size_bytes              12884901888
    pipe_name                      \\.\pipe\SVMPipe

Connect to a Controller VM by Using Connect-CVM

Nutanix installs Hyper-V utilities on each Hyper-V host for troubleshooting and Controller VM access. This procedure describes how to use Connect-CVM to launch the FreeRDP utility to access a Controller VM console when a secure shell (SSH) is not available or cannot be used.

About this task

FreeRDP launches when you run the Connect-CVM cmdlet.

Procedure

  1. Log on to a Hyper-V host in your environment and open a PowerShell command window.
  2. Start Connect-CVM.
    > Connect-CVM
  3. In the authentication dialog box, type the local administrator credentials and click OK .
  4. Log on to the Controller VM at the FreeRDP console window.
  5. Log in to the Controller VM by using the Controller VM credentials.

Changing the Name of the Nutanix Storage Cluster

The name of the Nutanix storage cluster cannot be changed by using the web console.

About this task

To change the name of the Nutanix storage cluster, do the following:

Procedure

  1. Log on to the CVM with SSH.
  2. Unjoin the existing Nutanix storage cluster object from the domain.
    ncli> cluster unjoin-domain logon-name=domain\username
  3. Change the cluster name.
    ncli> cluster edit-params new-name=cluster_name

    Replace cluster_name with the new cluster name.

  4. Create a new AD object corresponding to the new storage cluster name.
    nutanix@cvm$ ncli cluster join-domain cluster-name=new_name domain=domain_name \
    external-ip-address=external_ip_address name-server-ip=dns_ip logon-name=domain\username
  5. Restart genesis on each Controller VM in the cluster.
    nutanix@cvm$ allssh 'genesis restart'
    A new entry for the cluster is created in \Windows\System32\drivers\etc\hosts on the Hyper-V hosts.

Changing the Nutanix Cluster External IP Address

About this task

To change the external IP address of the Nutanix cluster, do the following.

Procedure

  1. Log on to the Controller VM with SSH.
  2. Run the following command to change the cluster external IP address.
    nutanix@cvm$ ncli cluster edit-params external-ip-address=external_ip_address
    Replace external_ip_address with the new Nutanix cluster external IP address.

Fast Clone a VM Based on Nutanix SMB Shares by using New-VMClone

This cmdlet fast-clones virtual machines that are based on Nutanix SMB shares. It provides options for creating one or more clones from a given virtual machine.

About this task

Run Get-Help New-VMClone -Full to get detailed help on using the cmdlet with all the options that are available.

Note: This cmdlet does not support creating clones of VMs that have Hyper-V checkpoints.

Procedure

Log on to the Hyper-V host with a Remote Desktop Connection and open a PowerShell command window.
  • The syntax to create single clone is as follows.
    > New-VMClone -VM vm_name -CloneName clone_name -ComputerName computer_name`
     -DestinationUncPath destination_unc_path -PowerOn`
    -Credential prism_credential common_parameters
  • The syntax to create multiple clones is as follows.
    > New-VMClone -VM vm_name -CloneNamePrefix  clone_name_prefix`
    -CloneNameSuffixBegin clone_name_suffix_begin -NCopies n_copies`
    -ComputerName computer_name -DestinationUncPath destination_unc_path -PowerOn`
    -Credential prism_credential -MaxConcurrency max_concurrency common_parameters
  • Replace vm_name with the name of the VM that you are cloning.
  • Replace clone_name with the name of the VM that you are creating.
  • Replace clone_name_prefix with the prefix that should be used for naming the clones.
  • Replace clone_name_suffix_begin with the starting number of the suffix.
  • Replace n_copies with the number of clones that you need to create.
  • Replace computer_name with the name of the computer on which you are creating the clone.
  • Replace destination_unc_path with the path on the Nutanix SMB share where the clone is stored.
  • Replace prism_credential with the credential to access Prism (the Nutanix management service).
  • Replace max_concurrency with the number of clones that you need to create in parallel.
  • Replace common_parameters with any additional parameters that you want to define. For example, -Verbose flag.
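  • An example invocation with hypothetical values (a gold-image VM named GoldImage-W2016, a host named HOST-1, and an SMB share path \\ntnx-smb\ctr1) is as follows; Get-Credential prompts interactively for the Prism credential.
    > New-VMClone -VM GoldImage-W2016 -CloneName App-VM-01 -ComputerName HOST-1`
     -DestinationUncPath \\ntnx-smb\ctr1 -PowerOn -Credential (Get-Credential) -Verbose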

Change the Path of a VM Based on Nutanix SMB shares by using Set-VMPath

This cmdlet repairs the UNC paths in the metadata of VMs that are based on Nutanix SMB shares and has the following two forms.

About this task

  • Replaces the specified IP address with the supplied DNS name for every occurrence of the IP address in the UNC paths in the VM metadata or configuration file.
  • Replaces the specified SMB server name with the supplied alternative in the UNC paths in the VM metadata without taking the case into consideration.
Note: You cannot use the Set-VMPath cmdlet in the 4.5 release. You can use this cmdlet in release 4.5.1 or later.

Procedure

Log on to the Hyper-V host with a Remote Desktop Connection and open a PowerShell command window.
  • The syntax to change the IP address to DNS name is as follows.
    > Set-VMPath -VMId vm_id -IPAddress ip_address -DNSName dns_name common_parameters
  • The syntax to change the SMB server name is as follows.
    > Set-VMPath -VMId vm_id -SmbServerName smb_server_name`
    -ReplacementSmbServerName replacement_smb_server_name common_parameters
  • Replace vm_id with the ID of the VM.
  • Replace ip_address with the IP address that you want to replace in the VM metadata or configuration file.
  • Replace dns_name with the DNS name that you want to replace the IP address with.
  • Replace smb_server_name with the SMB server name that you want to replace.
  • Replace replacement_smb_server_name with the SMB server name that you want as a replacement.
  • Replace common_parameters with any additional parameters that you want to define. For example, -Verbose flag.
Note: The target VM must be powered off for the operation to complete.
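
For example, assuming a VM named App-VM-01 and example IP address and DNS name values (adjust these to match your environment), you can pass the VM ID from Get-VM as follows.

  > $vm = Get-VM -Name App-VM-01
  > Set-VMPath -VMId $vm.VMId -IPAddress 10.4.36.191 -DNSName ntnx-smb.example.com -Verbose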

Nutanix SMB Shares Connection Requirements from Outside the Cluster

Any external non-Nutanix host that needs to access Nutanix SMB shares must conform to the following requirements.

  • Any external non-Nutanix host that needs to access Nutanix SMB shares must run Windows 8 or later if it is a desktop client, or Windows Server 2012 or later if it is a server. This requirement exists because SMB 3.0 support is required for accessing Nutanix SMB shares.
  • The IP address of the host must be allowed in the Nutanix storage cluster.
    Note: The SCVMM host IP address is automatically included in the allowlist during the setup. For other IP addresses, you can add those source addresses to the allowlist after the setup configuration is completed by using the Web Console or the nCLI cluster add-to-nfs-whitelist command.
  • For accessing a Nutanix SMB share from Windows 10 or Windows Server 2016, you must enable Kerberos on the Nutanix cluster.
  • If Kerberos is not enabled in the Nutanix storage cluster (the default configuration), then the SMB client in the host must not have RequireSecuritySignature set to True. For more information about checking the policy, see System Center Virtual Machine Manager Configuration . You can verify this by running Get-SmbClientConfiguration in the host. If the SMB client is running in a Windows desktop instead of Windows Server, the account used to log on into the desktop should not be linked to an external Microsoft account.
  • If Kerberos is enabled in the Nutanix storage cluster, you can access the storage only by using the DNS name of the Nutanix storage cluster, and not by using the external IP address of the cluster.
Warning: Nutanix does not support using Hyper-V SMB shares for storing anything other than virtual machine disks (for example, VHD and VHDX files) and their associated configuration files. This includes, but is not limited to, using Nutanix SMB shares of Hyper-V for general file sharing, virtual machine and configuration files for VMs running outside of the Nutanix nodes, or any other type of hosted repository not based on virtual machine disks.

Updating the Cluster After Renaming the Hyper-V External Virtual Switch

About this task

You can rename the external virtual switch on your Hyper-V cluster to a name of your choice. After you rename the external virtual switch, you must update the new name in AOS so that AOS upgrades and VM migrations do not fail.

Note: In releases earlier than AOS 5.11, the name of the external virtual switch in your Hyper-V cluster must be ExternalSwitch .

See the Microsoft documentation for instructions about how to rename the external virtual switch.
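
For example, a minimal PowerShell sketch that renames the switch, assuming the current name is ExternalSwitch and NewSwitchName is a placeholder for the new name; run the command on each Hyper-V host in the cluster.

  > Rename-VMSwitch -Name ExternalSwitch -NewName NewSwitchName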

Perform the following steps after you rename the external virtual switch.

Procedure

  1. Log on to a CVM with SSH.
  2. Restart Genesis on all the CVMs in the cluster.
    nutanix@cvm$ genesis restart
  3. Refresh all the guest VMs.
    1. Log on to a Hyper-V host.
    2. Go to Hyper-V Manager, select the VM and, in Settings , click the Refresh icon.
    See the Microsoft documentation for the updated instructions about how to refresh the guest VMs.

Upgrade to Windows Server Version 2016, 2019, and 2022

The following procedures describe how to upgrade earlier releases of Windows Server to Windows Server 2016, 2019, and 2022. For information about fresh installation of Windows Server, see Hyper-V Configuration.
Note: If you are upgrading from Windows Server 2012 R2 and if the AOS version is less than 5.11, then upgrade to Windows Server 2016 first and then upgrade to AOS 5.17. Proceed with upgrading to Windows Server 2019 if necessary.

Hyper-V Hypervisor Upgrade Recommendations, Requirements, and Limitations

This section provides the requirements, recommendations, and limitations to upgrade Hyper-V.

Recommendations

Nutanix recommends that you schedule a sufficiently long maintenance window to upgrade your Hyper-V clusters.

Budget sufficient time to upgrade: Depending on the number of VMs running on a node before the upgrade, a node could take more than 1.5 hours to upgrade. For example, the total time to upgrade a Hyper-V cluster from Hyper-V 2016 to Hyper-V 2019 is approximately the time per node multiplied by the number of nodes. Upgrading can take longer if you also need to upgrade your AOS version.

Requirements

Note:
  • You can upgrade to Windows Server 2022 Hyper-V only from a Hyper-V 2019 cluster.
  • Upgrade to Windows Server 2022 Hyper-V from an LACP enabled Hyper-V 2019 cluster is not supported.
  • Direct upgrade to Windows Server 2022 Hyper-V from Hyper-V 2016 or Windows Server 2012 R2 is not supported.
  • For Windows Server 2022 Hyper-V, only NX Series G6 and later models are supported.
  • For Windows Server 2022 Hyper-V, SET is the default teaming mode. LBFO teaming is not supported on Windows Server 2022 Hyper-V.
  • For Hyper-V 2019, if you do not choose LACP/LAG, SET is the default teaming mode. NX Series G5 and later models support Hyper-V 2019.
  • For Hyper-V 2016, if you do not choose LACP/LAG, the teaming mode is Switch Independent LBFO teaming.
  • For Hyper-V (2016 and 2019), if you choose LACP/LAG, the teaming mode is Switch Dependent LBFO teaming.
  • The platform must not be a light-compute platform.
  • Before upgrading, disable or uninstall third-party antivirus or security filter drivers that modify Windows firewall rules. Windows firewalls must accept inbound and outbound SSH traffic outside of the domain rules.
  • Enable Kerberos when upgrading from Windows Server 2012 R2 to Windows Server 2016. For more information, see Enabling Kerberos for Hyper-V .
    Note: Kerberos is enabled by default when upgrading from Windows Server 2016 to Windows Server 2019.
  • Enable virtual machine migration on the host. Upgrading reimages the hypervisor. Any custom or non-standard hypervisor configurations could be lost after the upgrade is completed.
  • If you are using System Center for Virtual Machine Management (SCVMM) 2012, upgrade to SCVMM 2016 first before upgrading to Hyper-V 2016. Similarly, upgrade to SCVMM 2019 before upgrading to Hyper-V 2019 and upgrade to SCVMM 2022 before upgrading to Windows Server 2022 Hyper-V.
  • Upgrade using ISOs and Nutanix JSON File
    • Upgrade using ISOs. The Prism Element web console supports 1-click upgrade ( Upgrade Software dialog box) of Hyper-V 2016, 2019, or 2022 by using a metadata upgrade JSON file, which is available on the Hypervisor Details page of the Nutanix Support portal, together with the Microsoft Hyper-V ISO file.
    • The Hyper-V upgrade JSON file, when used on clusters where Foundation 4.0 or later is installed, is available for Nutanix NX series G4 and later, Dell EMC XC series, or Lenovo Converged HX series platforms. You can upgrade hosts to Hyper-V 2016, 2019 (except for NX series G4) on these platforms by using this JSON file.

Limitations

  • When upgrading hosts to Hyper-V 2016, 2019, and later versions, the local administrator user name and password are reset to the default administrator name Administrator and the password nutanix/4u. Any previous changes to the administrator name or password are overwritten.
  • VMs with any associated files on local storage are lost.
    • Logical networks are not restored immediately after upgrade. If you configure logical switches, the configuration is not retained and VMs could become unavailable.
    • Any VMs created during the hypervisor upgrade (including as part of disaster recovery operations) and not marked as HA (High Availability) experience unavailability.
    • Disaster recovery: VMs with the Automatic Stop Action property set to Save are marked as CBR Not Capable if they are upgraded to version 8.0 after upgrading the hypervisor. Change the value of Automatic Stop Action to ShutDown or TurnOff when the VM is upgraded so that it is not marked as CBR Not Capable.
  • Enabling Link Aggregation Control Protocol (LACP) for your cluster deployment is supported when upgrading hypervisor hosts from Windows Server 2016 to 2019.

Upgrading to Windows Server Version 2016, 2019, and 2022

About this task

Note:
  • It is possible that clusters running Windows Server 2012 R2 and AOS have time synchronization issues. Therefore, before you upgrade to Windows Server 2016 or Windows Server 2019 and AOS, make sure that the cluster is free from time synchronization issues.
  • Windows Server 2016 also implements Discrete Device Assignment (DDA) for passing through PCI Express devices to guest VMs. This feature is available in Windows Server 2019 too. Therefore, DiskMonitorService, which was used in earlier AOS releases for passing disks through to the CVM, no longer exists. For more information about DDA, see the Microsoft documentation.

Procedure

  1. Make sure that AOS, host, and hypervisor upgrade prerequisites are met.
    For more information, see Hyper-V Hypervisor Upgrade Recommendations, Requirements, and Limitations and the Acropolis Upgrade Guide.
  2. Upgrade AOS by either using the one-click upgrade procedure or uploading the installation files manually. The Prism web console performs both procedures.
    • After upgrading AOS and before upgrading your cluster hypervisor, perform a Life Cycle Manager (LCM) inventory, update LCM, and upgrade any recommended firmware. For more information, see the Life Cycle Manager documentation .
    • For more information, including recommended installation or upgrade order, see the Acropolis Upgrade Guide.
  3. Do one of the following if you want to manage your VMs with SCVMM:
    1. If you register the Hyper-V cluster with an SCVMM installation with a version earlier than 2016, do the following in any order:
      • Unregister the cluster from SCVMM.
      • Upgrade SCVMM to version 2016. See Microsoft documentation for this upgrade procedure.
        Note: Do the same when upgrading from Hyper-V 2016 to 2019. Upgrade SCVMM to version 2019 and register the cluster to SCVMM 2019. Similarly, when upgrading to any higher version, upgrade SCVMM to that version and register the cluster to the upgraded SCVMM.
    2. If you do not have SCVMM, deploy SCVMM 2016, 2019, or 2022. See the Microsoft documentation for this installation procedure.
    Regardless of whether you deploy a new instance of SCVMM 2016 or you upgrade an existing SCVMM installation, do not register the Hyper-V cluster with SCVMM now. To minimize the steps in the overall upgrade workflow, register the cluster with SCVMM 2016 after you upgrade the Hyper-V hosts.
  4. If you are upgrading from Windows Server 2012 R2 to Windows Server 2016, then enable Kerberos. For more information, see Enabling Kerberos for Hyper-V.
  5. Upgrade the Hyper-V hosts.
  6. After the cluster is up, add the cluster to SCVMM 2016. The procedure for adding the cluster to SCVMM 2016 is the procedure used for earlier versions of SCVMM. For more information, see Registering a Cluster with SCVMM.
  7. Any log redirection (for example, SCOM log redirection) configurations are lost during the hypervisor upgrade process. Reconfigure log redirection.

System Center Virtual Machine Manager Configuration

System Center Virtual Machine Manager (SCVMM) is a management platform for Hyper-V clusters. Nutanix provides a utility for joining Hyper-V hosts to a domain and adding Hyper-V hosts and storage to SCVMM. If you cannot or do not want to use this utility, you must join the hosts to the domain and add the hosts and storage to SCVMM manually.

Note: The Validate Cluster feature of the Microsoft System Center VM Manager (SCVMM) is not supported for Nutanix clusters managed by SCVMM.

SCVMM Configuration

After joining cluster and its constituent hosts to the domain and creating a failover cluster, you can configure SCVMM.

Registering a Cluster with SCVMM

Perform the following procedure to register a cluster with SCVMM.

Before you begin

  • Join the hosts in the Nutanix cluster to a domain manually or by following Adding the Cluster and Hosts to a Domain.
  • Make sure that the hosts are not registered with SCVMM.

Procedure

  1. Log on to any CVM in the cluster using SSH.
  2. Verify that the status of all services on all the CVMs is Up.
    nutanix@cvm$ cluster status
    If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix cluster.
    CVM:host IP-Address Up
                                    Zeus   UP       [9935, 9980, 9981, 9994, 10015, 10037]
                               Scavenger   UP       [25880, 26061, 26062]
                                  Xmount   UP       [21170, 21208]
                        SysStatCollector   UP       [22272, 22330, 22331]
                               IkatProxy   UP       [23213, 23262]
                        IkatControlPlane   UP       [23487, 23565]
                           SSLTerminator   UP       [23490, 23620]
                          SecureFileSync   UP       [23496, 23645, 23646]
                                  Medusa   UP       [23912, 23944, 23945, 23946, 24176]
                      DynamicRingChanger   UP       [24314, 24404, 24405, 24558]
                                  Pithos   UP       [24317, 24555, 24556, 24593]
                              InsightsDB   UP       [24322, 24472, 24473, 24583]
                                  Athena   UP       [24329, 24504, 24505]
                                 Mercury   UP       [24338, 24515, 24516, 24614]
                                  Mantle   UP       [24344, 24572, 24573, 24634]
                              VipMonitor   UP       [18387, 18464, 18465, 18466, 18474]
                                Stargate   UP       [24993, 25032]
                    InsightsDataTransfer   UP       [25258, 25348, 25349, 25388, 25391, 25393, 25396]
                                   Ergon   UP       [25263, 25414, 25415]
                                 Cerebro   UP       [25272, 25462, 25464, 25581]
                                 Chronos   UP       [25281, 25488, 25489, 25547]
                                 Curator   UP       [25294, 25528, 25529, 25585]
                                   Prism   UP       [25718, 25801, 25802, 25899, 25901, 25906, 25941, 25942]
                                     CIM   UP       [25721, 25829, 25830, 25856]
                            AlertManager   UP       [25727, 25862, 25863, 25990]
                                Arithmos   UP       [25737, 25896, 25897, 26040]
                                 Catalog   UP       [25749, 25989, 25991]
                               Acropolis   UP       [26011, 26118, 26119]
                                   Uhura   UP       [26037, 26165, 26166]
                                    Snmp   UP       [26057, 26214, 26215]
                       NutanixGuestTools   UP       [26105, 26282, 26283, 26299]
                              MinervaCVM   UP       [27343, 27465, 27466, 27730]
                           ClusterConfig   UP       [27358, 27509, 27510]
                                Aequitas   UP       [27368, 27567, 27568, 27600]
                             APLOSEngine   UP       [27399, 27580, 27581]
                                   APLOS   UP       [27853, 27946, 27947]
                                   Lazan   UP       [27865, 27997, 27999]
                                  Delphi   UP       [27880, 28058, 28060]
                                    Flow   UP       [27896, 28121, 28124]
                                 Anduril   UP       [27913, 28143, 28145]
                                   XTrim   UP       [27956, 28171, 28172]
                           ClusterHealth   UP       [7102, 7103, 27995, 28209,28495, 28496, 28503, 28510,	
    28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646, 28648, 28792,	
    28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135, 29142, 29146, 29150,	
    29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]
  3. Add the Nutanix hosts and storage to SCVMM.
    nutanix@cvm$ setup_hyperv.py setup_scvmm

    This script performs the following functions.

    • Adds the cluster to SCVMM.
    • Sets up the library share in SCVMM.
    • Unregisters the deleted storage containers from SCVMM.
    • Registers the new storage containers in SCVMM.

    Alternatively, you can specify all the parameters as given in the following steps as command-line arguments. If you do so, enclose the values in single quotation marks since the Controller VM shell does not otherwise correctly interpret the backslash (\).

    The utility prompts for the necessary parameters, for example:

    Getting the cluster configuration ... Done
    Getting information about each host ... Done
    The hosts are joined to domain hyperv.nutanix.com
    
    Please enter the domain account username that has local administrator rights on
    the hosts: hyperv.nutanix.com\Administrator
    Please enter the password for hyperv.nutanix.com\Administrator:
    Verifying credentials for accessing localhost ... Done
    
    Please enter the name of the SCVMM server: scvmmhyperv
    Getting the SCVMM server IP address ... 10.4.34.44
    Adding 10.4.34.44 to the IP address whitelist ... Done
    
    Please enter the domain account username (e.g. username@corp.contoso.com or
     CORP.CONTOSO.COM\username) that has administrator rights on the SCVMM server
    and is a member of the domain administrators group (press ENTER for hyperv.nutanix.com\Administrator):
    Verifying credentials for accessing scvmmhyperv ... Done
    
    Verifying SCVMM service account ... HYPERV\scvmm
    
    All nodes are already part of the Hyper-V failover cluster msfo-tulip.
    Preparing to join the Nutanix storage cluster to domain ... Already joined
    Creating an SCVMM run-as account ... hyperv-Administrator
    Verifying the DNS entry tulip.hyperv.nutanix.com -> 10.4.36.191 ... Done
    Verifying that the Hyper-V failover cluster IP address has been added to DNS ... 10.4.36.192
    Verifying SCVMM security settings ... Done
    Initiating adding the failover cluster to SCVMM ... Done
    Step 2 of adding the failover cluster to SCVMM ... Done
    Final step of adding the failover cluster to SCVMM ... Done
    Querying registered Nutanix library shares ... None
    Add a Nutanix share to the SCVMM library for storing VM templates, useful for deploying VMs using Fast File Copy ([Y]/N)? Y
    Querying the registered library servers ... Done
    Using library server scvmmhyperv.hyperv.nutanix.com.
    Please enter the name of the Nutanix library share to be created (press ENTER for "msfo-tulip-library"): 
    Creating container msfo-tulip-library ... Done
    Registering msfo-tulip-library as a library share with server scvmmhyperv.hyperv.nutanix.com in SCVMM ... Done
    Please enter the Prism password: 
    Registering the SMI-S provider with SCVMM ... Done
    Configuring storage in SCVMM ... Done
    Registered default-container-11962
    
    1. Type the domain account username and password.
      This username must include the fully-qualified domain name, for example hyperv.nutanix.com\Administrator .
    2. Type the SCVMM server name.
      The name must resolve to an IP address.
    3. Type the SCVMM username and password if they are different from the domain account; otherwise press Enter to use the domain account.
    4. Choose whether to create a library share.
      Add a Nutanix share to the SCVMM library for storing VM templates, useful for
       deploying VMs using Fast File Copy ([Y]/N)?

      If you choose to create a library share, output similar to the following is displayed.

      Querying the registered library servers ... Done
      Add a Nutanix share to the SCVMM library for storing VM templates, useful for deploying VMs using Fast File Copy ([Y]/N)? Y
      Querying the registered library servers ... Done
      Using library server scvmmhyperv.hyperv.nutanix.com.
      Please enter the name of the Nutanix library share to be created (press ENTER
       for "NTNX-HV-library"):
      Creating container NTNX-HV-library ... Done
      Registering NTNX-HV-library as a library share with server scvmmhyperv.hyperv.nutanix.com ... Done
      
      Finally the following output is displayed.
      Registering the SMI-S provider with SCVMM ... Done
      Configuring storage in SCVMM ... Done
      Registered share ctr1
      
      Setup complete.
    Note: You can also register Nutanix Cluster by using SCVMM. For more information, see Adding Hosts and Storage to SCVMM Manually (SCVMM User Interface).
    Warning: If you change the Prism password, you must change the Prism run as account in SCVMM.

Adding Hosts and Storage to SCVMM Manually (SCVMM User Interface)

If you are unable to add hosts and storage to SCVMM by using the utility provided by Nutanix, you can add the hosts and storage to SCVMM by using the SCVMM user interface.

Before you begin

  • Verify that the SCVMM server IP address is on the cluster allowlist.
  • Verify that the SCVMM library server has a run-as account specified. Right-click the library server, click Properties , and ensure that Library management credential is populated.

Procedure

  1. Log into the SCVMM user interface and click VMs and Services .
  2. Right-click All Hosts and select Add Hyper-V Hosts and Clusters , and click Next .
    The Specify the Credentials to use for discovery screen appears.
  3. Click Browse and select an existing Run As Account or create a new Run As Account by clicking Create Run As Account . Click OK and then click Next .
    The Specify the search scope for virtual machine host candidates screen appears.
  4. Type the failover cluster name in the Computer names text box, and click Next .
  5. Select the failover cluster that you want to add, and click Next .
  6. Select Reassociate this host with this VMM environment check box, and click Next .
    The Confirm the settings screen appears.
  7. Click Finish .
    Warning: If you are adding the cluster for the first time, the addition action fails with the following error message.
    Error (10400)
    Before Virtual Machine Manager can perform the current operation, the virtualization server must be restarted.

    Remove the cluster that you were adding and perform the same procedure again.

  8. Register a Nutanix SMB share as a library share in SCVMM by clicking Library and then adding the Nutanix SMB share.
    1. Right-click the Library Servers and click Add Library Shares .
    2. Click Add Unmanaged Share and type the SMB file share path, click OK , and click Next .
    3. Click Add Library Shares .
      If all the parameters are correct, the library share is added.
  9. Register the Nutanix SMI-S provider.
    1. Go to Settings > Security > Run As Accounts and click Create Run As Account .
    2. Enter the Prism user name and password, de-select Validate domain credentials , and click Finish .
      Note:

      Only local Prism accounts are supported. Even if AD authentication is configured in Prism, the SMI-S provider cannot use it for authentication.

    3. Go to Fabric > Storage > Providers .
    4. Right-click Providers and select Add Storage Devices .
    5. Select the SAN and NAS devices discovered and managed by a SMI-S provider check box, and click Next .
    6. Specify the protocol and address of the storage SMI-S provider.
      • In the Protocol drop-down menu, select SMI-S CIMXML .
      • In the Provider IP Address or FQDN text box, provide the Nutanix storage cluster name. For example, clus-smb .
        Note: The Nutanix storage cluster name is not the same as the Hyper-V cluster name. You should get the storage cluster name from the cluster details in the web console.
      • Select the Use Secure sockets layer SSL connection check box.
      • In the Run As Account field, click Browse and select the Prism Run As Account that you have created earlier, and click Next .
      Note: If you encounter the following error when attempting to add an SMI-S provider, see KB 5070:
      Could not retrieve a certificate from the <clustername> server because of the error:
      The request was aborted: Could not create SSL/TLS secure channel.
    7. Click Import to verify the identity of the storage provider.
      The discovery process starts and at the completion of the process, the storage is displayed.
    8. Click Next and select all the SMB shares exported by the Nutanix cluster except the library share and click Next .
    9. Click Finish .
      The newly added provider is displayed under Providers. Go to Storage > File Clusters to verify that the Managed column is Yes .
  10. Add the file shares to the Nutanix cluster by navigating to VMs and Services .
    1. Right-click the cluster name and select Properties .
    2. Go to File Share Storage , and click Add to add file shares to the cluster.
    3. From the File share path drop-down menu, select all the shares that you want to add, and click OK .
    4. Right-click the cluster and click Refresh . Wait for the refresh job to finish.
    5. Right-click the cluster name and select Properties > File Share Storage . You should see the access status with a green check mark, which means that the shares are successfully added.
    6. Select all the virtual machines in the cluster, right-click, and select Refresh .
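
Optionally, you can verify the result from the VMM PowerShell command shell. The following read-only cmdlets are a sketch; they list the registered storage providers and library shares so you can confirm that the Nutanix SMI-S provider and the Nutanix library share appear as expected.

> Get-SCStorageProvider
> Get-SCLibraryShare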

SCVMM Operations

You can perform operational procedures on a Hyper-V node by using SCVMM, such as placing a host in maintenance mode.

Placing a Host in Maintenance Mode

If you place a host that is managed by SCVMM in maintenance mode, SCVMM by default places the Controller VM running on the host in a saved state, which might cause issues. Perform the following procedure to properly place a host in maintenance mode.

Procedure

  1. Log into the Controller VM of the host that you are planning to place in maintenance mode by using SSH and shut down the Controller VM.
    nutanix@cvm$ cvm_shutdown -P now

    Wait for the Controller VM to completely shut down.

  2. Select the host and place it in the maintenance mode by navigating to the Host tab in the Host group and clicking Start Maintenance Mode .
    Wait for the operation to complete before performing any maintenance activity on the host.
  3. After the maintenance activity is completed, bring out the host from the maintenance mode by navigating to the Host tab in the Host group and clicking Stop Maintenance Mode .
  4. Start the Controller VM manually.
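
For example, you can power on the Controller VM from the Hyper-V host with PowerShell. This sketch assumes the typical Nutanix naming convention for the Controller VM; verify the exact name with Get-VM before starting it.

> Get-VM | Where-Object { $_.Name -like "NTNX-*-CVM" } | Start-VM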

Migration Guide

AOS 6.5

Product Release Date: 2022-07-25

Last updated: 2022-07-25

This Document Has Been Removed

Nutanix Move is the Nutanix-recommended tool for migrating a VM. Please see the Move documentation at the Nutanix Support portal.


vSphere Administration Guide for Acropolis

AOS 6.5

Product Release Date: 2022-07-25

Last updated: 2022-12-08

Overview

Nutanix Enterprise Cloud delivers a resilient, web-scale hyperconverged infrastructure (HCI) solution built for supporting your virtual and hybrid cloud environments. The Nutanix architecture runs a storage controller called the Nutanix Controller VM (CVM) on every Nutanix node in a cluster to form a highly distributed, shared-nothing infrastructure.

All CVMs work together to aggregate storage resources into a single global pool that guest VMs running on the Nutanix nodes can consume. The Nutanix Distributed Storage Fabric manages storage resources to preserve data and system integrity if there is node, disk, application, or hypervisor software failure in a cluster. Nutanix storage also enables data protection and High Availability that keep critical data and guest VMs protected.

This guide describes the procedures and settings required to deploy a Nutanix cluster running in the VMware vSphere environment. To know more about the VMware terms referred to in this document, see the VMware Documentation.

Hardware Configuration

See the Field Installation Guide for information about how to deploy and create a Nutanix cluster running ESXi for your hardware. After you create the Nutanix cluster by using Foundation, use this guide to perform the management tasks.

Limitations

For information about ESXi configuration limitations, see Nutanix Configuration Maximums webpage.

Nutanix Software Configuration

The Nutanix Distributed Storage Fabric aggregates local SSD and HDD storage resources into a single global unit called a storage pool. In this storage pool, you can create several storage containers, which the system presents to the hypervisor and uses to host VMs. You can apply a different set of compression, deduplication, and replication factor policies to each storage container.

Storage Pools

A storage pool on Nutanix is a group of physical disks from one or more tiers. Nutanix recommends configuring only one storage pool for each Nutanix cluster.
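
To confirm that the cluster has a single storage pool, you can list the storage pools from any Controller VM. The following read-only command is a sketch using ncli.

nutanix@cvm$ ncli storagepool ls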

Replication factor
Nutanix supports a replication factor of 2 or 3. Setting the replication factor to 3 instead of 2 adds an extra data protection layer at the cost of more storage space for the copy. For use cases where applications provide their own data protection or high availability, you can set a replication factor of 1 on a storage container.
Containers
The Nutanix storage fabric presents usable storage to the vSphere environment as an NFS datastore. The replication factor of a storage container determines its usable capacity. For example, replication factor 2 tolerates one component failure and replication factor 3 tolerates two component failures. When you create a Nutanix cluster, three storage containers are created by default. Nutanix recommends that you do not delete these storage containers. You can rename the storage container named default - xxx and use it as the main storage container for hosting VM data.
Note: The available capacity and the vSphere maximum of 2,048 VMs limit the number of VMs a datastore can host.

Capacity Optimization

  • Nutanix recommends enabling inline compression unless otherwise advised.
  • Nutanix recommends disabling deduplication for all workloads except VDI.

    For mixed-workload Nutanix clusters, create a separate storage container for VDI workloads and enable deduplication on that storage container.

Nutanix CVM Settings

CPU
Keep the default settings as configured by the Foundation during the hardware configuration.

Change the CPU settings only if Nutanix Support recommends it.

Memory
Most workloads use less than 32 GB RAM memory per CVM. However, for mission-critical workloads with large working sets, Nutanix recommends more than 32 GB CVM RAM memory.
Tip: You can increase CVM RAM memory up to 64 GB using the Prism one-click memory upgrade procedure. For more information, see Increasing the Controller VM Memory Size in the Prism Web Console Guide .
Networking
The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1,500 bytes for all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance and stability. Nutanix does not support configuring the MTU on a network interface of CVMs to higher values.
Caution: Do not use jumbo frames for the Nutanix CVM.
Caution: Do not change the vSwitchNutanix or the internal vmk (VMkernel) interface.
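
To confirm that a CVM network interface uses the default 1,500-byte MTU, you can check the interface from the CVM. The following read-only command is a sketch; the output includes the configured MTU value.

nutanix@cvm$ ip link show eth0 | grep mtu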

Nutanix Cluster Settings

Nutanix recommends that you do the following.

  • Map a Nutanix cluster to only one vCenter Server.

    Due to the way the Nutanix architecture distributes data, there is limited support for mapping a Nutanix cluster to multiple vCenter Servers. Some Nutanix products (Move, Era, Calm, Files, Prism Central) and features (such as the disaster recovery solution) are unstable when a Nutanix cluster maps to multiple vCenter Servers.

  • Configure a Nutanix cluster with replication factor 2 or replication factor 3.
    Tip: Nutanix recommends using replication factor 3 for clusters with more than 16 nodes. Replication factor 3 requires at least five nodes so that the data remains online even if two nodes fail concurrently.
  • Use the advertised capacity feature to ensure that the resiliency capacity is equivalent to one node of usable storage for replication factor 2 or two nodes for replication factor 3.

    The advertised capacity of a storage container must equal the total usable cluster space minus the capacity of either one or two nodes. For example, in a 4-node cluster with 20 TB usable space per node with replication factor 2, the advertised capacity of the storage container must be 60 TB. That spares 20 TB capacity to sustain and rebuild one node for self-healing. Similarly, in a 5-node cluster with 20 TB usable space per node with replication factor 3, advertised capacity of the storage container must be 60 TB. That spares 40 TB capacity to sustain and rebuild two nodes for self-healing.

  • Use the default storage container and mount it on all the ESXi hosts in the Nutanix cluster.

    You can also create a single storage container. If you are creating multiple storage containers, ensure that all the storage containers follow the advertised capacity recommendation.

  • Configure the vSphere cluster according to settings listed in vSphere Cluster Settings Checklist.

Software Acceptance Level

The Foundation sets the software acceptance level of an ESXi image to CommunitySupported by default. If you need to raise the software acceptance level, run the following command to set it to the maximum acceptance level of PartnerSupported .

root@esxi# esxcli software acceptance set --level=PartnerSupported
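
To confirm the acceptance level before or after the change, you can query it on the same host.

root@esxi# esxcli software acceptance get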

Scratch Partition Settings

ESXi uses the scratch partition (/scratch) to dump the logs when it encounters a purple screen of death (PSOD) or a kernel dump. The Foundation install automatically creates this partition on the SATA DOM or M.2 device with the ESXi installation. Moving the scratch partition to any location other than the SATA DOM or M.2 device can cause issues with LCM, 1-click hypervisor updates, and the general stability of the Nutanix node.

vSphere Networking

vSphere on the Nutanix platform enables you to dynamically configure, balance, or share logical networking components across various traffic types. To ensure availability, scalability, performance, management, and security of your infrastructure, configure virtual networking when designing a network solution for Nutanix clusters.

You can configure networks according to your requirements. For detailed information about vSphere virtual networking and different networking strategies, refer to the Nutanix vSphere Storage Solution Document and the VMware Documentation . This chapter describes the configuration elements required to run VMware vSphere on the Nutanix Enterprise infrastructure.

Virtual Networking Configuration Options

vSphere on Nutanix supports the following types of virtual switches.

vSphere Standard Switch (vSwitch)
vSphere Standard Switch (vSS) with Nutanix vSwitch is the default configuration for Nutanix deployments and suits most use cases. A vSwitch detects which VMs are connected to each virtual port and uses that information to forward traffic to the correct VMs. You can connect a vSwitch to physical switches by using physical Ethernet adapters (also referred to as uplink adapters) to join virtual networks with physical networks. This type of connection is similar to connecting physical switches together to create a larger network.
Tip: A vSwitch works like a physical Ethernet switch.
vSphere Distributed Switch (vDS)

Nutanix recommends vSphere Distributed Switch (vDS) coupled with network I/O control (NIOC version 2) and load-based teaming. This combination provides simplicity, ensures traffic prioritization if there is contention, and reduces operational management overhead. A vDS acts as a single virtual switch across all associated hosts on a datacenter. It enables VMs to maintain consistent network configuration as they migrate across multiple hosts. For more information about vDS, see NSX-T Support on Nutanix Platform.

Nutanix recommends setting all vNICs as active on the port group and dvPortGroup unless otherwise specified. The following table lists the naming convention, port groups, and the corresponding VLAN Nutanix uses for various traffic types.

Table 1. Port Groups and Corresponding VLAN
Port group VLAN Description
MGMT_10 10 VM kernel interface for host management traffic
VMOT_20 20 VM kernel interface for vMotion traffic
FT_30 30 Fault tolerance traffic
VM_40 40 VM traffic
VM_50 50 VM traffic
NTNX_10 10 Nutanix CVM to CVM cluster communication traffic (public interface)
Svm-iscsi-pg N/A Nutanix CVM to internal host traffic
VMK-svm-iscsi-pg N/A VM kernel port for CVM to hypervisor communication (internal)

All Nutanix configurations use an internal-only vSwitch for the NFS communication between the ESXi host and the Nutanix CVM. This vSwitch remains unmodified regardless of the virtual networking configuration for ESXi management, VM traffic, vMotion, and so on.

Caution: Do not modify the internal-only vSwitch (vSwitch-Nutanix). vSwitch-Nutanix facilitates communication between the CVM and the internal hypervisor.
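
To review the standard vSwitches that are present on a host, including vSwitch-Nutanix and its port groups, you can run the following read-only command from an SSH session to the ESXi host.

root@esxi# esxcli network vswitch standard list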

VMware NSX Support

Running VMware NSX on Nutanix infrastructure ensures that VMs always have access to fast local storage and compute, consistent network addressing and security without the burden of physical infrastructure constraints. The supported scenario connects the Nutanix CVM to a traditional VLAN network, with guest VMs inside NSX virtual networks. For more information, see the Nutanix vSphere Storage Solution Document .

NSX-T Support on Nutanix Platform

The Nutanix platform relies on communication with vCenter to work with networks backed by vSphere Standard Switch (vSS) or vSphere Distributed Switch (vDS). NSX-T introduces a new management plane that makes network management independent of the compute manager (vCenter), so network configuration information is available through the NSX-T Manager. To collect the network configuration information from the NSX-T Manager, you must modify the Nutanix infrastructure workflows (AOS upgrades, LCM upgrades, and so on).

Figure. Nutanix and the NSX-T Workflow Overview Click to enlarge Nutanix and NSX-T Workflow Overview

The Nutanix platform supports the following in the NSX-T configuration.

  • ESXi hypervisor only.
  • vSS and vDS virtual switch configurations.
  • Nutanix CVM connection to VLAN backed NSX-T segments only.
  • The NSX-T Manager credentials registration using the CLI.

The Nutanix platform does not support the following in the NSX-T configuration.

  • Network segmentation with N-VDS.
  • Nutanix CVM connection to overlay NSX-T segments.
  • Link Aggregation/LACP for the uplinks backing the NVDS host switch connecting Nutanix CVMs.
  • The NSX-T Manager credentials registration through Prism.

NSX-T Segments

Nutanix supports NSX-T logical segments co-existing on Nutanix clusters running ESXi hypervisors. All infrastructure workflows, including Foundation, 1-click upgrades, and AOS upgrades, are validated to work in NSX-T configurations where the CVM is backed by an NSX-T VLAN logical segment.

NSX-T has the following types of segments.

VLAN backed
VLAN backed segments operate similarly to a standard port group in a vSphere switch. A port group is created on the NVDS, and VMs that are connected to the port group have their network packets tagged with the configured VLAN ID.
Overlay backed
Overlay backed segments use the Geneve overlay to create a logical L2 network over an L3 network. Encapsulation occurs at the transport layer (which is the NVDS module on the host).

Multicast Filtering

Enabling multicast snooping on a vDS to which a Nutanix CVM is attached affects the ability of the CVM to discover and add new nodes to the Nutanix cluster (the cluster expand option in Prism and the Nutanix CLI).

Creating Segment for NVDS

This procedure provides details about creating a segment for nVDS.

About this task

To check the vSwitch configuration of the host and verify that the NSX-T network supports the CVM port group, perform the following steps.

Procedure

  1. Log on to vCenter Server and go to the NSX-T Manager.
  2. Click Networking , and go to Connectivity > Segments in the left pane.
  3. Click ADD SEGMENT under the SEGMENTS tab on the right pane and specify the following information.
    Figure. Create New Segment Click to enlarge Create New Segment

    1. Segment Name : Enter a name for the segment.
    2. Transport Zone : Select the VLAN-based transport zone.
      This transport zone is associated with the switch when you configure NSX on the host.
    3. VLAN : Enter the number 0 for native VLAN.
  4. Click Save to create a segment for NVDS.
  5. Click Yes when the system prompts to continue with configuring the segment.
    The newly created segment appears below the prompt.
    Figure. New Segment Created Click to enlarge New Segment Created

Creating NVDS Switch on the Host by Using NSX-T Manager

This procedure provides instructions to create an NVDS switch on the ESXi host. The management interface of the host and the external interface of the CVM are migrated to the NVDS switch.

About this task

To create an NVDS switch and configure the NSX-T Manager, do the following.

Procedure

  1. Log on to NSX-T Manager.
  2. Click System , and go to Configuration > Fabric > Nodes in the left pane.
    Figure. Add New Node Click to enlarge Add New Node

  3. Click ADD HOST NODE under the HOST TRANSPORT NODES in the right pane.
    1. Specify the following information in the Host Details dialog box.
      Figure. Add Host Details Click to enlarge Add Host Details

        1. Name : Enter an identifiable ESXi host name.
        2. Host IP : Enter the IP address of the ESXi host.
        3. Username : Enter the username used to log on to the ESXi host.
        4. Password : Enter the password used to log on to the ESXi host.
        5. Click Next to move to the NSX configuration.
    2. Specify the following information in the Configure NSX dialog box.
      Figure. Configure NSX Click to enlarge Configure NSX

        1. Mode : Select the Standard option.

          Nutanix recommends the Standard mode only.

        2. Name : Displays the default name of the virtual switch that appears on the host. You can edit the default name and provide an identifiable name as per your configuration requirements.
        3. Transport Zone : Select the transport zone that you selected in Creating Segment for NVDS.

          These segments operate in a way similar to the standard port group in a vSphere switch. A port group is created on the NVDS, and VMs that are connected to the port group have their network packets tagged with the configured VLAN ID.

        4. Uplink Profile : Select an uplink profile for the new nVDS switch.

          This selected uplink profile represents the NICs connected to the host. For more information about uplink profiles, see the VMware Documentation .

        5. LLDP Profile : Select the LLDP profile for the new nVDS switch.

          For more information about LLDP profiles, see the VMware Documentation .

        6. Teaming Policy Uplink Mapping : Map the uplinks with the physical NICs of the ESXi host.
          Note: To verify the active physical NICs on the host, select ESXi host > Configure > Networking > Physical Adapters .

          Click Edit icon and enter the name of the active physical NIC in the ESXi host selected for migration to the NVDS.

          Note: Always migrate one physical NIC at a time to avoid connectivity failure with the ESXi host.
        7. PNIC only Migration : Turn on the switch to Yes if there are no VMkernel adapters (vmks) associated with the PNIC selected for migration from the vSS switch to the nVDS switch.
        8. Network Mapping for Install : Click Add Mapping to migrate the VMkernels (vmks) to the NVDS switch.
        9. Network Mapping for Uninstall : Use this option to revert the migration of the VMkernels.
  4. Click Finish to add the ESXi host to the NVDS switch.
    The newly created nVDS switch appears on the ESXi host.
    Figure. NVDS Switch Created Click to enlarge NVDS Switch Created

Registering NSX-T Manager with Nutanix

After migrating the external interface of the host and the CVM to the NVDS switch, it is mandatory to inform Genesis about the registration of the cluster with the NSX-T Manager. This registration helps Genesis communicate with the NSX-T Manager and avoid failures during LCM, 1-click, and AOS upgrades.

About this task

This procedure demonstrates an AOS upgrade error that occurs when the NSX-T Manager is not registered with Nutanix, and shows how to register the NSX-T Manager with Nutanix to resolve the issue.

To register the NSX-T Manager with Nutanix, do the following.

Procedure

  1. Log on to the Prism Element web console.
  2. Select VM > Settings > Upgrade Software > Upgrade > Pre-upgrade to upgrade AOS on the host.
    Figure. Upgrade AOS Click to enlarge

  3. The upgrade process throws an error if the NSX-T Manager is not registered with Nutanix.
    Figure. AOS Upgrade Error for Unregistered NSX-T Click to enlarge

    The AOS upgrade checks whether NSX-T networks support the CVM and its VLAN, and then attempts to get the VLAN information for those networks. To get the VLAN information for the CVM, the NSX-T Manager information must be configured in the Nutanix cluster.

  4. To fix this upgrade issue, log on to a Controller VM in the cluster using SSH.
  5. Access the cluster directory.
    nutanix@cvm$ cd ~/cluster/bin
  6. Verify if the NSX-T Manager was registered with the CVM earlier.
    nutanix@cvm$ ~/cluster/bin$ ./nsx_t_manager -l

    If the NSX-T Manager was not registered earlier, you get the following message.

    No NSX-T manager configured in the cluster
  7. Register the NSX-T Manager with the CVM if it was not registered earlier. Specify the credentials of the NSX-T Manager to the CVM.
    nutanix@cvm$ ~/cluster/bin$ ./nsx_t_manager -a
    IP address: 10.10.10.10
    Username: admin
    Password: 
    /usr/local/nutanix/cluster/lib/py/requests-2.12.0-py2.7.egg/requests/packages/urllib3/connectionpool.py:843:
     InsecureRequestWarning: Unverified HTTPS request is made. Adding certificate verification is strongly advised. 
    See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
    Successfully persisted NSX-T manager information
  8. Verify the registration of NSX-T Manager with the CVM.
    nutanix@cvm$ ~/cluster/bin$ ./nsx_t_manager -l

    If there are no errors, the system displays a similar output.

    IP address: 10.10.10.10
    Username: admin
  9. In the Prism Element web console, click Pre-upgrade to continue the AOS upgrade procedure.

    The AOS upgrade is completed successfully.

Networking Components

IP Addresses

All CVMs and ESXi hosts have two network interfaces.
Note: An empty interface eth2 is created on CVM during deployment by Foundation. The eth2 interface is used for backplane when backplane traffic isolation (Network Segmentation) is enabled in the cluster. For more information about backplane interface and traffic segmentation, see Security Guide.
Interface IP address vSwitch
ESXi host vmk0 User-defined vSwitch0
CVM eth0 User-defined vSwitch0
ESXi host vmk1 192.168.5.1 vSwitchNutanix
CVM eth1 192.168.5.2 vSwitchNutanix
CVM eth1:1 192.168.5.254 vSwitchNutanix
CVM eth2 User-defined vSwitch0
Note: The ESXi and CVM interfaces on vSwitch0 cannot use IP addresses in any subnets that overlap with subnet 192.168.5.0/24.
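
To view the VMkernel interfaces and their IPv4 addresses on an ESXi host (for example, to confirm that vmk1 uses 192.168.5.1), you can run the following read-only commands.

root@esxi# esxcli network ip interface list
root@esxi# esxcli network ip interface ipv4 get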

vSwitches

A Nutanix node is configured with the following two vSwitches.

  • vSwitchNutanix

    Local communications between the CVM and the ESXi host use vSwitchNutanix. vSwitchNutanix has no uplinks.

    Caution: To manage network traffic between VMs with greater control, create more port groups on vSwitch0. Do not modify vSwitchNutanix.
    Figure. vSwitchNutanix Configuration Click to enlarge vSwitchNutanix Configuration

  • vSwitch0

    All other external communications, such as CVM traffic to a different host (in case of HA redirection), use vSwitch0, which has uplinks to the physical network interfaces. Since network segmentation is disabled by default, the backplane traffic uses vSwitch0.

    vSwitch0 has the following two networks.

    • Management Network

      HA, vMotion, and vCenter communications use the Management Network.

    • VM Network

      All VMs use the VM Network.

    Caution:
    • The Nutanix CVM uses the standard Ethernet maximum transmission unit (MTU) of 1,500 bytes for all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance and stability. Nutanix does not support configuring the MTU on a network interface of CVMs to higher values.
    • You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of ESXi hosts and guest VMs if the applications on your guest VMs require them. If you choose to use jumbo frames on hypervisor hosts, ensure that you enable them end-to-end in the desired network and consider both the physical and virtual network infrastructure impacted by the change.
    Figure. vSwitch0 Configuration Click to enlarge vSwitch0 Configuration

Configuring Host Networking (Management Network)

After you create the Nutanix cluster by using Foundation, configure networking for your ESXi hosts.

About this task

Figure. Configure Management Network Click to enlarge Ip Configuration image

Procedure

  1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.
  2. Press the down arrow key until Configure Management Network highlights and then press Enter .
  3. Select Network Adapters and then press Enter .
  4. Ensure that the connected network adapters are selected.
    If they are not selected, press Space key to select them and press Enter key to return to the previous screen.
    Figure. Network Adapters Click to enlarge Select a Network Adapters
  5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press Enter . In the dialog box, provide the VLAN ID and press Enter .
    Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.
  6. Select IP Configuration and press Enter .
    Figure. Configure Management Network Click to enlarge IP Address Configuration
  7. If necessary, highlight the Set static IP address and network configuration option and press Space to update the setting.
  8. Provide values for the following: IP Address , Subnet Mask , and Default Gateway fields based on your environment and then press Enter .
  9. Select DNS Configuration and press Enter .
  10. If necessary, highlight the Use the following DNS server addresses and hostname option and press Space to update the setting.
  11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your environment and then press Enter .
  12. Press Esc and then Y to apply all changes and restart the management network.
  13. Select Test Management Network and press Enter .
  14. Press Enter to start the network ping test.
  15. Verify that the default gateway and DNS servers reported by the ping test match those that you specified earlier in the procedure and then press Enter .

    Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP addresses are configured.

    Figure. Test Management Network Click to enlarge Test Management Network

    Press Enter to close the test window.

  16. Press Esc to log off.

Changing a Host IP Address

About this task

To change a host IP address, perform the following steps. Perform the following steps once for each hypervisor host in the Nutanix cluster. Complete the entire procedure on a host before proceeding to the next host.
Caution: The cluster cannot tolerate duplicate host IP addresses. For example, when swapping IP addresses between two hosts, temporarily change one host IP address to an interim unused IP address. Changing this IP address avoids having two hosts with identical IP addresses on the cluster. Then complete the address change or swap on each host using the following steps.
Note: All CVMs and hypervisor hosts must be on the same subnet. The hypervisor can be multihomed provided that one interface is on the same subnet as the CVM.

Procedure

  1. Configure networking on the Nutanix node. For more information, see Configuring Host Networking (Management Network).
  2. Update the host IP addresses in vCenter. For more information, see Reconnecting a Host to vCenter.
  3. Log on to every CVM in the Nutanix cluster and restart Genesis service.
    nutanix@cvm$ genesis restart

    If the restart is successful, output similar to the following is displayed.

    Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
    Genesis started on pids [30378, 30379, 30380, 30381, 30403]
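
Optionally, after Genesis restarts on every Controller VM, you can confirm that the cluster services are up from any CVM.

nutanix@cvm$ cluster status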

Reconnecting a Host to vCenter

About this task

If you modify the IP address of a host, you must reconnect the host with the vCenter. To reconnect the host to the vCenter, perform the following procedure.

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the host with the changed IP address and select Disconnect .
  3. Right-click the host again and select Remove from Inventory .
  4. Right-click the Nutanix cluster and then click Add Hosts... .
    1. Enter the IP address or fully qualified domain name (FQDN) of the host you want to reconnect in the IP address or FQDN under New hosts .
    2. Enter the host logon credentials in the User name and Password fields, and click Next .
      If a security or duplicate management alert appears, click Yes .
    3. Review the Host Summary and click Next .
    4. Click Finish .
    You can see the host with the updated IP address in the left pane of vCenter.

Selecting a Management Interface

Nutanix tracks the management IP address for each host and uses that IP address to open an SSH session into the host to perform management activities. If the selected vmk interface is not accessible through SSH from the CVMs, activities that require interaction with the hypervisor fail.

If multiple vmk interfaces are present on a host, Nutanix uses the following rules to select a management interface.

  1. Assigns weight to each vmk interface.
    • If the vmk interface is configured for management traffic under the network settings of ESXi, the weight assigned is 4. Otherwise, the weight assigned is 0.
    • If the IP address of the vmk interface belongs to the same IP subnet as the CVM eth0 interface, 2 is added to its weight.
    • If the IP address of the vmk interface belongs to the same IP subnet as the CVM eth2 interface, 1 is added to its weight.
  2. The vmk interface that has the highest weight is selected as the management interface.

Example of Selection of Management Network

Consider an ESXi host with following configuration.

  • vmk0 IP address and mask: 2.3.62.204, 255.255.255.0
  • vmk1 IP address and mask: 192.168.5.1, 255.255.255.0
  • vmk2 IP address and mask: 2.3.63.24, 255.255.255.0

Consider a CVM with following configuration.

  • eth0 inet address and mask: 2.3.63.31, 255.255.255.0
  • eth2 inet address and mask: 2.3.62.12, 255.255.255.0

According to the rules, the following weights are assigned to the vmk interfaces.

  • vmk0 = 4 + 0 + 1 = 5
  • vmk1 = 0 + 0 + 0 = 0
  • vmk2 = 0 + 2 + 0 = 2

Since vmk0 has the highest weight assigned, vmk0 interface is used as a management IP address for the ESXi host.

To verify that vmk0 interface is selected for management IP address, use the following command.

root@esx# esxcli network ip interface tag get -i vmk0

You see the following output.

Tags: Management, VMotion

For the other two interfaces, no tags are displayed.

If you want any other interface to act as the management IP address, enable management traffic on that interface by following the procedure described in Selecting a New Management Interface.

Selecting a New Management Interface

You can mark a vmk interface as the management interface on an ESXi host by using the following method.

Procedure

  1. Log on to vCenter with the web client.
  2. Do the following on the ESXi host.
    1. Go to Configure > Networking > VMkernel adapters .
    2. Select the interface on which you want to enable the management traffic.
    3. Click Edit settings of the port group to which the vmk belongs.
    4. Select Management check box from the Enabled services option to enable management traffic on the vmk interface.
  3. Open an SSH session to the ESXi host and enable the management traffic on the vmk interface.
    root@esx# esxcli network ip interface tag add -i vmkN --tagname=Management

    Replace vmkN with the vmk interface where you want to enable the management traffic.
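
If you later want to remove the Management tag from the previously selected interface, a corresponding tag remove operation is available. The following is a sketch; replace vmkN with the interface that you want to untag.

root@esx# esxcli network ip interface tag remove -i vmkN --tagname=Management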

Updating Network Settings

After you configure networking of your vSphere deployments on Nutanix Enterprise Cloud, you may want to update the network settings.

  • To know about the best practice of ESXi network teaming policy, see Network Teaming Policy.

  • To migrate an ESXi host networking from a vSphere Standard Switch (vSwitch) to a vSphere Distributed Switch (vDS) with LACP/LAG configuration, see Migrating to a New Distributed Switch with LACP/LAG.

  • To migrate an ESXi host networking from a vSphere standard switch (vSwitch) to a vSphere Distributed Switch (vDS) without LACP, see Migrating to a New Distributed Switch without LACP/LAG.


Network Teaming Policy

On an ESXi host, a NIC teaming policy allows you to bundle two or more physical NICs into a single logical link to provide increased network bandwidth aggregation and link redundancy to a vSwitch. The NIC teaming policies in the ESXi networking configuration for a vSwitch consist of the following.

  • Route based on originating virtual port.
  • Route based on IP hash.
  • Route based on source MAC hash.
  • Explicit failover order.

In addition to the NIC teaming policies mentioned earlier, vDS offers an additional teaming policy: Route based on physical NIC load.

When Foundation or Phoenix imaging is performed on a Nutanix cluster, the following two standard virtual switches are created on ESXi hosts:

  • vSwitch0
  • vSwitchNutanix

On vSwitch0, the Nutanix best practice guide (see Nutanix vSphere Networking Solution Document) provides the following recommendations for NIC teaming:

  • vSwitch. Route based on originating virtual port
  • vDS. Route based on physical NIC load

On vSwitchNutanix, there are no uplinks to the virtual switch, so there is no NIC teaming configuration required.

Migrate from a Standard Switch to a Distributed Switch

This topic provides detailed information about how to migrate from a vSphere Standard Switch (vSS) to a vSphere Distributed Switch (vDS).

The following are the two types of virtual switches (vSwitch) in vSphere.

  • vSphere standard switch (vSwitch) (see vSphere Standard Switch (vSwitch) in vSphere Networking).
  • vSphere Distributed Switch (vDS) (see vSphere Distributed Switch (vDS) in vSphere Networking).
Tip: For more information about vSwitches and the associated network concepts, see the VMware Documentation .

For migrating from a vSS to a vDS with LACP/LAG configuration, see Migrating to a New Distributed Switch with LACP/LAG.

For migrating from a vSS to a vDS without LACP/LAG configuration, see Migrating to a New Distributed Switch without LACP/LAG.

Standard Switch Configuration

The standard switch configuration consists of the following.

vSwitchNutanix
vSwitchNutanix handles internal communication between the CVM and the ESXi host. There are no uplink adapters associated with this vSwitch. Administrators must not modify the settings of this virtual switch or its port groups, and the only members of this port group must be the CVM and its host. Modifying this virtual switch configuration can disrupt communication between the host and the CVM.
vSwitch0
vSwitch0 consists of the vmk (VMkernel) management interface, vMotion interface, and VM port groups. This virtual switch connects to uplink network adapters that are plugged into a physical switch.

Planning the Migration

It is important to plan and understand the migration process. An incorrect configuration can disrupt communication, which can require downtime to resolve.

Consider the following while or before planning the migration.

  • Read the Nutanix Best Practice Guide for VMware vSphere Networking.

  • Understand the various teaming and load-balancing algorithms on vSphere.

    For more information, see the VMware Documentation .

  • Confirm communication on the network through all the connected uplinks.
  • Confirm access to the host using IPMI when there are network connectivity issues during migration.

    Access the host to troubleshoot the network issue or move the management network back to the standard switch depending on the issue.

  • Confirm that the hypervisor external management IP address and the CVM IP address are in the same public subnet for the data path redundancy functionality to work.
  • When performing migration to the distributed switch, migrate one host at a time and verify that networking is working as desired.
  • Do not migrate the port groups and vmk (VMkernel) interfaces that are on vSwitchNutanix to the distributed switch (dvSwitch).

Unassigning Physical Uplink of the Host for Distributed Switch

All the physical adapters connect to vSwitch0 of the host. A live distributed switch must have a physical uplink connected to it to work. To assign a physical adapter of the host to the distributed switch, first unassign the physical adapter from the host (vSwitch0) and then assign it to the new distributed switch.

About this task

To unassign the physical uplink of the host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Networking > Virtual Switches .
  4. Click MANAGE PHYSICAL ADAPTERS tab and select the active adapters from the Assigned adapters that you want to unassign from the list of physical adapters of the host.
    Figure. Managing Physical Adapters Click to enlarge Managing Physical Adapters

  5. Click X on the top.
    The selected adapter is unassigned from the list of physical adapters of the host.
    Tip: Ping the host to check and confirm if you are able to communicate with the active physical adapter of the host. If you lose network connectivity to the ESXi host during this test, review your network configuration.
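
For example, from a Controller VM or a workstation on the same network, a simple ping against the host management IP address confirms that the remaining active adapter is passing traffic. The IP address shown is a placeholder.

nutanix@cvm$ ping -c 4 10.10.10.11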

Migrating to a New Distributed Switch without LACP/LAG

Migrating to a new distributed switch without LACP/LAG consists of the following workflow.

  1. Creating a Distributed Switch
  2. Creating Port Groups on the Distributed Switch
  3. Configuring Port Group Policies

Creating a Distributed Switch

Connect to vCenter and create a distributed switch.

About this task

To create a distributed switch, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Distributed Switch Creation Click to enlarge Distributed Switch Creation

  3. Right-click the host, select Distributed Switch > New Distributed Switch , and specify the following information in the New Distributed Switch dialog box.
    1. Name and Location : Enter name for the distributed switch.
    2. Select Version : Select a distributed switch version that is compatible with all your hosts in that datacenter.
    3. Configure Settings : Select the number of uplinks you want to connect to the distributed switch.
      Select Create a default port group checkbox to create a port group. To configure a port group later, see Creating Port Groups on the Distributed Switch.
    4. Ready to complete : Review the configuration and click Finish .
    A new distributed switch is created with the default uplink port group. The uplink port group is the port group to which the uplinks connect. This uplink is different from the vmk (VMkernel) or the VM port groups.
    Figure. New Distributed Switch Created in the Host Click to enlarge New Distributed Switch Created in the Host

Creating Port Groups on the Distributed Switch

Create one or more vmk (VMkernel) port groups and VM port groups depending on the vSphere features you plan to use and the physical network layout. The best practice is to have the vmk Management interface, vmk vMotion interface, and vmk iSCSI interface on separate port groups.

About this task

To create port groups on the distributed switch, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Creating Distributed Port Groups Click to enlarge Creating Distributed Port Groups

  3. Right-click the host, select Distributed Switch > Distributed Port Group > New Distributed Port Group , and follow the wizard to create the remaining distributed port group (vMotion interface and VM port groups).
    You need the following port groups because you are migrating from the standard switch to the distributed switch.
    • VMkernel Management interface . Use this port group to connect to the host for all management operations.
    • VMNetwork . Use this port group to connect to the new VMs.
    • vMotion . This port group is an internal interface and the host will use this port during failover for vMotion traffic.
    Note: Nutanix recommends using static port binding instead of ephemeral port binding when you create a port group.
    Figure. Distributed Port Groups Created Click to enlarge Distributed Port Groups Created

    Note: The port group for vmk management interface is created during the distributed switch creation. See Creating a Distributed Switch for more information.

Configuring Port Group Policies

To configure port groups, you must configure VLANs, Teaming and failover, and other distributed port group policies at the port group layer or at the distributed switch layer. Refer to the following topics to configure the port group policies.

  1. Configuring Policies on the Port Group Layer
  2. Configuring Policies on the Distributed Switch Layer
  3. Adding ESXi Host to the Distributed Switch

Configuring Policies on the Port Group Layer

Ensure that the distributed switch port groups have VLANs tagged if the physical adapters of the host have a VLAN tagged to them. Update the policies for the port groups, VLANs, and teaming algorithms to align with the physical network switch configuration. Configure the load balancing policy according to the network configuration requirements on the physical switch.

About this task

To configure the port group policies, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Configure Port Group Policies on the Distributed Switch Click to enlarge Configure Port Group Policies on the Distributed Switch

  3. Right-click the host, select Distributed Switch > Distributed Port Group > Edit Settings , and follow the wizard to configure the VLAN, Teaming and failover, and other options.
    Note: For more information about configuring port group policies, see the VMware Documentation .
  4. Click OK to complete the configuration.
  5. Repeat steps 2–4 to configure the other port groups.
Configuring Policies on the Distributed Switch Layer

You can configure the same policy for all the port groups simultaneously.

About this task

To configure the same policy for all the port groups, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Manage Distributed Port Groups Click to enlarge Manage Distributed Port Groups

  3. Right-click the host, select Distributed Switch > Distributed Port Group > Manage Distributed Port Groups , and specify the following information in Manage Distributed Port Group dialog box.
    1. In the Select port group policies tab, select the port group policies that you want to configure and click Next .
      Note: For more information about configuring port group policies, see the VMware Documentation .
    2. In the Select port groups tab, select the distributed port groups on which you want to configure the policy and click Next .
    3. In the Teaming and failover tab, configure the Load balancing policy, Active uplinks , and click Next .
    4. In the Ready to complete window, review the configuration and click Finish .
Adding ESXi Host to the Distributed Switch

Migrate the management interface and CVM of the host to the distributed switch.

About this task

To migrate the Management interface and CVM of the ESXi host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Add ESXi Host to Distributed Switch Click to enlarge Add ESXi Host to Distributed Switch

  3. Right-click the host, select Distributed Switch > Add and Manage Hosts , and specify the following information in Add and Manage Hosts dialog box.
    1. In the Select task tab, select Add hosts to add new host to the distributed switch and click Next .
    2. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.
      Note: Add one host at a time to the distributed switch and then migrate all the CVMs from the host to the distributed switch.
    3. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.
      Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same uplink on the distributed switch.
        1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink .
          Figure. Select Physical Adapter for Uplinking Click to enlarge Select Physical Adapter for Uplinking

          Important: If you select physical NICs connected to other switches, those physical NICs migrate to the current distributed switch.
        2. Select the Uplink in the distributed switch to which you want to assign the PNIC of the host and click OK .
        3. Click Next .
    4. In the Manage VMkernel adapters tab, configure the vmk adapters.
        1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port group .
        2. Select the port group in the distributed switch to which you want to assign the VMkernel of the host and click OK .
          Figure. Select a Port Group Click to enlarge Select a Port Group

        3. Click Next .
    5. (optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect all the network adapters of a VM to a distributed port group.
        1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an individual network adapter to connect with the distributed port group.
        2. Click Assign port group and select the distributed port group to which you want to migrate the VM or network adapter and click OK .
        3. Click Next .
    6. In the Ready to complete tab, review the configuration and click Finish .
  4. Go to the Hosts and Clusters view in the vCenter web client and Hosts > Configure to review the network configuration for the host.
    Note: Run a ping test to confirm that the networking on the host works as expected.
  5. Follow the steps 2–4 to add the remaining hosts to the distributed switch and migrate the adapters.

Migrating to a New Distributed Switch with LACP/LAG

Migrating to a new distributed switch with LACP/LAG consists of the following workflow.

  1. Creating a Distributed Switch
  2. Creating Port Groups on the Distributed Switch
  3. Creating Link Aggregation Group on Distributed Switch
  4. Creating Port Groups to use the LAG
  5. Adding ESXi Host to the Distributed Switch

Creating Link Aggregation Group on Distributed Switch

Using a Link Aggregation Group (LAG) on a distributed switch, you can connect the ESXi host to physical switches by using dynamic link aggregation. You can create multiple LAGs on a distributed switch to aggregate the bandwidth of physical NICs on ESXi hosts that are connected to LACP port channels.

About this task

To create a LAG, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
  3. Right-click the host, select Distributed Switch > Configure > LACP .
    Figure. Create LAG on Distributed Switch Click to enlarge Create LAG on Distributed Switch

  4. Click New and enter the following details in the New Link Aggregation Group dialog box.
    1. Name : Enter a name for the LAG.
    2. Number of Ports : Enter the number of ports.
      The number of ports must match the physical ports per host in the LACP LAG. For example, if the Number of Ports is two, you can attach two physical ports per ESXi host to the LAG.
    3. Mode : Specify the state of the physical switch.
      Based on the configuration requirements, you can set the mode to Active or Passive .
    4. Load balancing mode : Specify the load balancing mode for the physical switch.
      For more information about the various load balancing options, see the VMware Documentation .
    5. VLAN trunk range : Specify the VLANs if you have VLANs configured in your environment.
  5. Click OK .
    LAG is created on the distributed switch.

Creating Port Groups to Use LAG

To use the LAG as the uplink, you must edit the settings of the port groups created on the distributed switch.

About this task

To edit the settings on the port group to use LAG, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
  3. Right-click the host, select Management port group > Edit Settings .
  4. Go to the Teaming and failover tab in the Edit Settings dialog box and specify the following information.
    Figure. Configure the Management Port Group Click to enlarge Configure the Management Port Group

    1. Load Balancing : Select Route based on IP hash .
    2. Active uplinks : Move the LAG from the Unused uplinks section to the Active uplinks section.
    3. Unused uplinks : Move the standalone physical uplinks ( Uplink 1 and Uplink 2 ) to the Unused uplinks section.
  5. Repeat steps 2–4 to configure the other port groups.

Adding ESXi Host to the Distributed Switch

Add the ESXi host to the distributed switch and migrate the network from the standard switch to the distributed switch. Migrate the management interface and CVM of the ESXi host to the distributed switch.

About this task

To migrate the Management interface and CVM of ESXi host, do the following.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Networking view and select the host from the left pane.
    Figure. Add ESXi Host to Distributed Switch Click to enlarge Add ESXi Host to Distributed Switch

  3. Right-click the host, select Distributed Switch > Add and Manage Hosts , and specify the following information in Add and Manage Hosts dialog box.
    1. In the Select task tab, select Add hosts to add new host to the distributed switch and click Next .
    2. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.
      Note: Add one host at a time to the distributed switch and then migrate all the CVMs from the host to the distributed switch.
    3. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.
      Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same uplink on the distributed switch.
        1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink .
          Important: If you select physical NICs connected to other switches, those physical NICs migrate to the current distributed switch.
        2. Select the LAG Uplink in the distributed switch to which you want to assign the PNIC of the host and click OK .
        3. Click Next .
    4. In the Manage VMkernel adapters tab, configure the vmk adapters.
      Select the VMkernel adapter that is associated with vSwitch0 as your management VMkernel adapter. Migrate this adapter to the corresponding port group on the distributed switch.
      Note: Do not migrate the VMkernel adapter associated with vSwitchNutanix.
      Note: If there are any VLANs associated with the port group on the standard switch, ensure that the corresponding distributed port group also has the correct VLAN. Verify the physical network configuration to ensure that it is configured as required.
        1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port group .
        2. Select the port group in the distributed switch to which you want to assign the VMkernel of the host and click OK .
        3. Click Next .
    5. (optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect all the network adapters of a VM to a distributed port group.
        1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an individual network adapter to connect with the distributed port group.
        2. Click Assign port group and select the distributed port group to which you want to migrate the VM or network adapter and click OK .
        3. Click Next .
    6. In the Ready to complete tab, review the configuration and click Finish .

vCenter Configuration

VMware vCenter enables the centralized management of multiple ESXi hosts. You can either create a vCenter Server or use an existing vCenter Server. To create a vCenter Server, refer to the VMware Documentation .

This section assumes that you already have a vCenter Server and therefore describes the operations you can perform on an existing vCenter Server. To deploy vSphere clusters running Nutanix Enterprise Cloud, perform the following steps in vCenter.

Tip: For a single-window management of all your ESXi nodes, you can also integrate the vCenter Server to Prism Central. For more information, see Registering a Cluster to vCenter Server

1. Create a cluster entity within the existing vCenter inventory and configure its settings according to Nutanix best practices. For more information, see Creating a Nutanix Cluster in vCenter.

2. Configure HA. For more information, see vSphere HA Settings.

3. Configure DRS. For more information, see vSphere DRS Settings.

4. Configure EVC. For more information, see vSphere EVC Settings.

5. Configure override. For more information, see VM Override Settings.

6. Add the Nutanix hosts to the new cluster. For more information, see Adding a Nutanix Node to vCenter.

Registering a Cluster to vCenter Server

To perform core VM management operations directly from Prism without switching to vCenter Server, you need to register your cluster with the vCenter Server.

Before you begin

Ensure that you have vCenter Server Extension privileges as these privileges provide permissions to perform vCenter registration for the Nutanix cluster.

About this task

Following are some of the important points about registering vCenter Server.

  • Nutanix does not store vCenter Server credentials.
  • Whenever a new node is added to the Nutanix cluster, vCenter Server registration for the new node is performed automatically.
  • Nutanix supports vCenter Enhanced Linked Mode.

    When registering a Nutanix cluster to a vCenter Enhanced Linked Mode (ELM) enabled ESXi environment, ensure that Prism is registered to the vCenter containing the vSphere cluster and Nutanix nodes (often the local vCenter). For more information about vCenter Enhanced Linked Mode, see vCenter Enhanced Linked Mode in the vCenter Server Installation and Setup documentation.

Procedure

  1. Log into the Prism web console.
  2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
    The vCenter Server that is managing the hosts in the cluster is auto-discovered and displayed.
  3. Click the Register link.
    The IP address is auto-populated in the Address field. The port number field is also auto-populated with 443. Do not change the port number. For the complete list of required ports, see Port Reference.
  4. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin Password fields.
    Figure. vCenter Registration Figure 1 Click to enlarge vcenter registration

  5. Click Register .
    During the registration process, a certificate is generated to communicate with the vCenter Server. If the registration is successful, a relevant message is displayed in the Tasks dashboard. The Host Connection field displays Connected, which indicates that all the hosts are being managed by the registered vCenter Server.
    Figure. vCenter Registration Figure 2 Click to enlarge vcenter registration

Unregistering a Cluster from the vCenter Server

To unregister the vCenter Server from your cluster, perform the following procedure.

About this task

  • Ensure that you unregister the vCenter Server from the cluster before changing the IP address of the vCenter Server. After you change the IP address of the vCenter Server, register the vCenter Server with the cluster again by using the new IP address.
  • The vCenter Server Registration page displays the registered vCenter Server. If for some reason the Host Connection field changes to Not Connected , it implies that the hosts are being managed by a different vCenter Server. In this case, a new vCenter Server entry appears with the host connection status Connected , and you need to register to this vCenter Server.

Procedure

  1. Log into the Prism web console.
  2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
    A message that the cluster is already registered to the vCenter Server is displayed.
  3. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin Password fields.
  4. Click Unregister .
    If the credentials are correct, the vCenter Server is unregistered from the cluster and a relevant message is displayed in the Tasks dashboard.

Creating a Nutanix Cluster in vCenter

Before you begin

Nutanix recommends creating a storage container in Prism Element running on the host, or using the default container, and mounting it as an NFS datastore on all ESXi hosts.

About this task

To enable the vCenter to discover the Nutanix clusters, perform the following steps in the vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Do one of the following.
    • If you want the Nutanix cluster to be in an existing datacenter, proceed to step 3.
    • If you want the Nutanix cluster to be in a new datacenter or if there is no datacenter, perform the following steps to create a datacenter.
      Note: Nutanix clusters must be in a datacenter.
    1. Go to the Hosts and Clusters view and right-click the IP address of the vCenter Server in the left pane.
    2. Click New Datacenter .
    3. Enter a meaningful name for the datacenter (for example, NTNX-DC ) and click OK .
  3. Right-click the datacenter node and click New Cluster .
    1. Enter a meaningful name for the cluster in the Name field (for example, NTNX-Cluster ).
    2. Turn on the vSphere DRS switch.
    3. Turn on the vSphere HA switch.
    4. Uncheck Manage all hosts in the cluster with a single image .
    The Nutanix cluster ( NTNX-Cluster ) is created with the default settings for vSphere HA and vSphere DRS. A PowerCLI alternative is sketched below.
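If you prefer to script this step, the datacenter and cluster can also be created with PowerCLI. The following is a minimal sketch rather than the authoritative procedure; vcenter.example.com, NTNX-DC, and NTNX-Cluster are placeholder names, and the HA and DRS settings are refined later as described in vSphere HA Settings and vSphere DRS Settings.

  # Create the datacenter in the vCenter root folder, then create the cluster with HA and DRS enabled.
  > Connect-VIServer -Server vcenter.example.com
  > New-Datacenter -Name NTNX-DC -Location (Get-Folder -NoRecursion)
  > New-Cluster -Name NTNX-Cluster -Location (Get-Datacenter NTNX-DC) -HAEnabled -DrsEnabled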

What to do next

Add all the Nutanix nodes to the Nutanix cluster inventory in vCenter. For more information, see Adding a Nutanix Node to vCenter.

Adding a Nutanix Node to vCenter

Before you begin

Configure the Nutanix cluster according to Nutanix specifications given in Creating a Nutanix Cluster in vCenter and vSphere Cluster Settings Checklist.

About this task

Note: Lockdown mode forces all operations through the vCenter Server so that vCenter managed ESXi hosts are accessible through vCenter only. However, Nutanix does not support lockdown mode on ESXi hosts in a Nutanix cluster; keep lockdown mode disabled as described in step 3.
Tip: Refer to KB-1661 for the default credentials of all cluster components.

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the Nutanix cluster and then click Add Hosts... .
    1. Enter the IP address or fully qualified domain name (FQDN) of the host you want to add in the IP address or FQDN field under New hosts .
    2. Enter the host logon credentials in the User name and Password fields, and click Next .
      If a security or duplicate management alert appears, click Yes .
    3. Review the Host Summary and click Next .
    4. Click Finish .
  3. Select the host under the Nutanix cluster from the left pane and go to Configure > System > Security Profile .
    Ensure that Lockdown Mode is Disabled because Nutanix does not support lockdown mode.
  4. Configure DNS servers.
    1. Go to Configure > Networking > TCP/IP configuration .
    2. Click Default under TCP/IP stack and go to TCP/IP .
    3. Click the pencil icon to configure DNS servers and perform the following.
        1. Select Enter settings manually .
        2. Type the domain name in the Domain field.
        3. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and click OK .
  5. Configure NTP servers.
    1. Go to Configure > System > Time Configuration .
    2. Click Edit .
    3. Select Use Network Time Protocol (Enable NTP client) .
    4. Type the NTP server address in the NTP Servers text box.
    5. In the NTP Service Startup Policy, select Start and stop with host from the drop-down list.
      Add multiple NTP servers if necessary.
    6. Click OK .
  6. Click Configure > Storage and ensure that NFS datastores are mounted.
    Note: Nutanix recommends creating a storage container in Prism Element running on the host.
  7. If HA is not enabled, set the CVM to start automatically when the ESXi host starts.
    Note: Automatic VM start and stop is disabled in clusters where HA is enabled.
    1. Go to Configure > Virtual Machines > VM Startup/Shutdown .
    2. Click Edit .
    3. Ensure that Automatically start and stop the virtual machines with the system is checked.
    4. If the CVM is listed in Manual Startup , click the up arrow to move the CVM into the Automatic Startup section.
    5. Click OK .
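The host addition and the DNS and NTP configuration can also be scripted with PowerCLI, which is convenient when adding several nodes. The following is a minimal sketch; the host address, credentials, domain, and server names shown are placeholders for your own values.

  # Add the host to the Nutanix cluster object in vCenter.
  > Add-VMHost -Name 10.0.0.11 -Location (Get-Cluster NTNX-Cluster) -User root -Password 'host-password' -Force
  > $esx = Get-VMHost -Name 10.0.0.11
  # Configure DNS (domain name, preferred and alternate servers).
  > Get-VMHostNetwork -VMHost $esx | Set-VMHostNetwork -DomainName example.com -DnsAddress 10.0.0.2,10.0.0.3
  # Configure NTP and set the ntpd service to start and stop with the host.
  > Add-VMHostNtpServer -VMHost $esx -NtpServer ntp1.example.com
  > Get-VMHostService -VMHost $esx | Where-Object {$_.Key -eq 'ntpd'} | Set-VMHostService -Policy On
  > Get-VMHostService -VMHost $esx | Where-Object {$_.Key -eq 'ntpd'} | Start-VMHostService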

What to do next

Configure HA and DRS settings. For more information, see vSphere HA Settings and vSphere DRS Settings.

Nutanix Cluster Settings

To ensure the optimal performance of your vSphere deployment running on Nutanix cluster, configure the following settings from the vCenter.

vSphere General Settings

About this task

Configure the following general settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Configuration > General .
    1. Under General , set the Swap file location to Virtual machine directory .
      Setting the swap file location to the VM directory stores the VM swap files in the same directory as the VM.
    2. Under Default VM Compatibility , set the compatibility to Use datacenter setting and host version .
      Do not change the compatibility unless the cluster must support VMs created on earlier ESXi versions.
      Figure. General Cluster Settings Click to enlarge General Cluster Settings
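If you prefer to set the swap file location from the command line, Set-Cluster exposes an equivalent option. A minimal sketch, assuming the cluster is named NTNX-Cluster (the Default VM Compatibility setting still has to be changed in the web client):

  # Store VM swap files in the same directory as the VM.
  > Set-Cluster -Cluster NTNX-Cluster -VMSwapfilePolicy WithVM -Confirm:$false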

vSphere HA Settings

If there is a node failure, vSphere HA (High Availability) settings ensure that there are sufficient compute resources available to restart all VMs that were running on the failed node.

About this task

Configure the following HA settings from vCenter.
Note: Nutanix recommends that you configure vSphere HA and DRS even if you do not use the features. The vSphere cluster configuration preserves the settings, so if you later decide to enable the features, the settings are in place and conform to Nutanix best practices.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Services > vSphere Availability .
  4. Click Edit next to the text showing vSphere HA status.
    Figure. vSphere Availability Settings: Failures and Responses Click to enlarge vSphere Availability Settings: Failures and Responses

    1. Turn on the vSphere HA and Enable Host Monitoring switches.
    2. Specify the following information under the Failures and Responses tab.
        1. Host Failure Response : Select Restart VMs from the drop-down list.

          This option determines the cluster-wide response when a host fails.

        2. Response for Host Isolation : Select Power off and restart VMs from the drop-down list.
        3. Datastore with PDL : Select Disabled from the drop-down list.
        4. Datastore with APD : Select Disabled from the drop-down list.
          Note: To enable the VM component protection in vCenter, refer to the VMware Documentation.
        5. VM Monitoring : Select Disabled from the drop-down list.
    3. Specify the following information under the Admission Control tab.
      Note: If you are using replication factor 2 with cluster sizes up to 16 nodes, configure HA admission control settings to tolerate one node failure. For cluster sizes larger than 16 nodes, configure HA admission control to sustain two node failures and use replication factor 3. vSphere 6.7 and newer versions automatically calculate the percentage of resources required for admission control.
      Figure. vSphere Availability Settings: Admission Control Click to enlarge vSphere Availability Settings: Admission Control

        1. Host failures cluster tolerates : Enter 1 or 2 based on the number of nodes in the Nutanix cluster and the replication factor.
        2. Define host failover capacity by : Select Cluster resource Percentage from the drop-down list.
        3. Performance degradation VMs tolerate : Set the percentage to 100.

          For more information about settings of percentage of cluster resources reserved as failover spare capacity, see vSphere HA Admission Control Settings for Nutanix Environment.

    4. Specify the following information under the Heartbeat Datastores tab.
      Note: vSphere HA uses datastore heartbeating to distinguish between hosts that have failed and hosts that reside on a network partition. With datastore heartbeating, vSphere HA can monitor hosts when a management network partition occurs while continuing to respond to failures.
      Figure. vSphere Availability Settings: Heartbeat Datastores Click to enlarge vSphere Availability Settings: Heartbeat Datastores

        1. Select Use datastores only from the specified list .
        2. Select the named storage container mounted as the NFS datastore (Nutanix datastore).

          If you have more than one named storage container, select all that are applicable.

        3. If the cluster has only one datastore, click Advanced Options tab and add das.ignoreInsufficientHbDatastore with Value of true .
    5. Click OK .
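Several of these HA settings can also be applied from PowerCLI. The following is a minimal sketch, assuming a cluster named NTNX-Cluster; the PDL/APD datastore responses and VM monitoring settings are not covered by the sketch and still need to be set in the web client.

  # Enable HA with admission control and power off VMs on host isolation.
  > Set-Cluster -Cluster NTNX-Cluster -HAEnabled:$true -HAAdmissionControlEnabled:$true -HAIsolationResponse PowerOff -Confirm:$false
  # Advanced option for clusters with only one datastore (see the Heartbeat Datastores step above).
  > New-AdvancedSetting -Entity (Get-Cluster NTNX-Cluster) -Type ClusterHA -Name 'das.ignoreInsufficientHbDatastore' -Value $true -Confirm:$false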

vSphere HA Admission Control Settings for Nutanix Environment

Overview

If you are using redundancy factor 2 with cluster sizes of up to 16 nodes, you must configure HA admission control settings with the appropriate percentage of CPU/RAM to achieve at least N+1 availability. For cluster sizes larger than 16 nodes, you must configure HA admission control with the appropriate percentage of CPU/RAM to achieve at least N+2 availability.
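As a rough worked example: in a 16-node cluster using redundancy factor 2, reserving about one node's worth of resources (100 ÷ 16 ≈ 6 percent of cluster CPU/RAM) provides N+1 availability, and in a 32-node cluster reserving about two nodes' worth (2 × 100 ÷ 32 ≈ 6 percent) provides N+2 availability. Table 1 lists the recommended minimum percentage for each cluster size.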

N+2 Availability Configuration

The N+2 availability configuration can be achieved in the following two ways.

  • Redundancy factor 2 and N+2 vSphere HA admission control setting configured.

    Because the Nutanix distributed file system recovers in the event of a node failure, it is possible to have a second node failure without data becoming unavailable, provided the Nutanix cluster has fully recovered before the subsequent failure. In this case, an N+2 vSphere HA admission control setting is required to ensure sufficient compute resources are available to restart all the VMs.

  • Redundancy factor 3 and N+2 vSphere HA admission control setting configured.
    If you want two concurrent node failures to be tolerated and the cluster has insufficient blocks to use block awareness, redundancy factor 3 is required in a cluster of five or more nodes. In this case, redundancy factor 3 must be configured at the storage container layer, and an N+2 vSphere HA admission control setting is also required to ensure sufficient compute resources are available to restart all the VMs. In either of these two options, the Nutanix storage pool must have sufficient free capacity to restore the configured redundancy factor (2 or 3); the percentage of free space required is the same as the required HA admission control percentage setting.
    Note: For redundancy factor 3, a minimum of five nodes is required, which provides the ability that two concurrent nodes can fail while ensuring data remains online. In this case, the same N+2 level of availability is required for the vSphere cluster to enable the VMs to restart following a failure.
Table 1. Minimum Reservation Percentage for vSphere HA Admission Control Setting. For redundancy factor 2 deployments, the recommended minimum HA admission control percentage is marked with a single asterisk (*) in the following table. For redundancy factor 2 or redundancy factor 3 deployments configured to tolerate multiple non-concurrent node failures, the minimum required HA admission control percentage is marked with two asterisks (**).
Nodes Availability Level
N+1 N+2 N+3 N+4
1 N/A N/A N/A N/A
2 N/A N/A N/A N/A
3 33* N/A N/A N/A
4 25* 50 75 N/A
5 20* 40** 60 80
6 18* 33** 50 66
7 15* 29** 43 56
8 13* 25** 38 50
9 11* 23** 33 46
10 10* 20** 30 40
11 9* 18** 27 36
12 8* 17** 25 34
13 8* 15** 23 30
14 7* 14** 21 28
15 7* 13** 20 26
16 6* 13** 19 25
17 6 12* 18** 24
18 6 11* 17** 22
19 5 11* 16** 22
20 5 10* 15** 20
21 5 10* 14** 20
22 4 9* 14** 18
23 4 9* 13** 18
24 4 8* 13** 16
25 4 8* 12** 16
26 4 8* 12** 16
27 4 7* 11** 14
28 4 7* 11** 14
29 3 7* 10** 14
30 3 7* 10** 14
31 3 6* 10** 12
32 3 6* 9** 12

The table also represents the percentage of the Nutanix storage pool that should remain free to ensure that the cluster can fully restore the redundancy factor after the failure of one or more nodes, or even of a block (where three or more blocks exist within a cluster).

Block Awareness

For deployments of at least three blocks, block awareness automatically ensures that data remains available when an entire block of up to four nodes, configured with redundancy factor 2, becomes unavailable.

If block awareness levels of availability are required, the vSphere HA admission control setting must ensure sufficient compute resources are available to restart all virtual machines. In addition, the Nutanix storage pool must have sufficient space to restore redundancy factor 2 to all data.

The vSphere HA minimum availability level must be equal to the number of nodes per block.

Note: For block awareness, each block must be populated with a uniform number of nodes. In the event of a failure, a non-uniform node count might compromise block awareness or the ability to restore the redundancy factor, or both.

Rack Awareness

Rack fault tolerance is the ability to provide a rack-level availability domain. With rack fault tolerance, data is replicated to nodes that are not in the same rack. Rack failure can occur in the following situations.

  • All power supplies in a rack fail.
  • Top-of-rack (TOR) switch fails.
  • Network partition occurs: one of the racks becomes inaccessible from the other racks.

With rack fault tolerance enabled, the cluster has rack awareness and guest VMs can continue to run even during the failure of one rack (with replication factor 2) or two racks (with replication factor 3). The redundant copies of guest VM data and metadata persist on other racks when one rack fails.

Table 2. Minimum Requirements for Rack Awareness
Replication factor Minimum number of nodes Minimum number of Blocks Minimum number of racks Data resiliency
2 3 3 3 Failure of 1 node, block, or rack
3 5 5 5 Failure of 2 nodes, blocks, or racks

vSphere DRS Settings

About this task

Configure the following DRS settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Services > vSphere DRS .
  4. Click Edit next to the text showing vSphere DRS status.
    Figure. vSphere DRS Settings: Automation Click to enlarge vSphere DRS Settings: Automation

    1. Turn on the vSphere DRS switch.
    2. Specify the following information under the Automation tab.
        1. Automation Level : Select Fully Automated from the drop-down list.
        2. Migration Threshold : Set the bar between conservative and aggressive (value=3).

          The migration threshold of 3 provides optimal resource utilization while minimizing DRS migrations that bring little benefit. It also preserves data locality: whenever a VM moves, one replica of each write is still placed locally, which maximizes subsequent read performance.

          Nutanix recommends the migration threshold at 3 in a fully automated configuration.

        3. Predictive DRS : Leave the option disabled.

          The value of Predictive DRS depends on whether you use other VMware products such as vRealize Operations. Unless you use vRealize Operations, Nutanix recommends disabling Predictive DRS.

        4. Virtual Machine Automation : Enable VM automation.
    3. Specifying anything under the Additional Options tab is optional.
    4. Specify the following information under the Power Management tab.
      Figure. vSphere DRS Settings: Power Management Click to enlarge vSphere DRS Settings: Power Management

        1. DPM : Leave the option disabled.

          Enabling DPM causes nodes in the Nutanix cluster to go offline, affecting cluster resources.

    5. Click OK .
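The basic DRS configuration can also be applied from the command line. A minimal PowerCLI sketch follows, assuming a cluster named NTNX-Cluster; the migration threshold, Predictive DRS, and DPM options are not simple cmdlet parameters, so configure those in the web client as described above.

  # Enable DRS in fully automated mode.
  > Set-Cluster -Cluster NTNX-Cluster -DrsEnabled:$true -DrsAutomationLevel FullyAutomated -Confirm:$false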

vSphere EVC Settings

vSphere Enhanced vMotion Compatibility (EVC) ensures that workloads can be live migrated with vMotion between ESXi hosts in a Nutanix cluster that run different CPU generations. The general recommendation is to enable EVC because it lets you later scale the Nutanix cluster with new hosts that might contain newer CPU models.

About this task

Enabling EVC in a brownfield scenario can be challenging. Configure the following EVC settings from vCenter.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Shut down all the VMs on the hosts with feature sets greater than the EVC mode.
    Ensure that the Nutanix cluster contains hosts with CPUs from only one vendor, either Intel or AMD.
  4. Click Configure , and go to Configuration > VMware EVC .
  5. Click Edit next to the text showing VMware EVC.
  6. Enable EVC for the CPU vendor and feature set appropriate for the hosts in the Nutanix cluster, and click OK .
    If the Nutanix cluster contains nodes with different processor classes, enable EVC with the lower feature set as the baseline.
    Tip: To know the processor class of a node, perform the following steps.
      1. Log on to Prism Element running on the Nutanix cluster.
      2. Click Hardware from the menu and go to Diagram or Table view.
      3. Click the node and look for the Block Serial field in Host Details .
    Figure. VMware EVC Click to enlarge VMware EVC

  7. Start the VMs in the Nutanix cluster to apply the EVC.
    If you try to enable EVC on a Nutanix cluster with mismatching host feature sets (mixed processor clusters), the lowest common feature set (lowest processor class) is selected. Hence, if VMs are already running on the new host and if you want to enable EVC on the host, you must first shut down the VMs and then enable EVC.
    Note: Do not shut down more than one CVM at the same time.
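Before you pick a baseline, you can list the processor type and the highest EVC mode each host supports, and then enable EVC on the cluster, from PowerCLI. A minimal sketch, assuming the cluster is named NTNX-Cluster and an intel-broadwell baseline; substitute the lowest common feature set of your hosts.

  # List each host's processor and maximum supported EVC mode.
  > Get-VMHost -Location NTNX-Cluster | Select-Object Name, ProcessorType, MaxEVCMode
  # Enable EVC on the cluster with the chosen baseline.
  > Set-Cluster -Cluster NTNX-Cluster -EVCMode intel-broadwell -Confirm:$false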

VM Override Settings

You must exclude Nutanix CVMs from vSphere availability and resource scheduling, and therefore configure the following VM override settings.

Procedure

  1. Log on to vCenter with the web client.
  2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
  3. Click Configure , and go to Configuration > VM Overrides .
  4. Select all the CVMs and click Next .
    If you do not have the CVMs listed, click Add to ensure that the CVMs are added to the VM Overrides dialog box.
    Figure. VM Override Click to enlarge VM Override

  5. In the VM override section, configure override for the following parameters.
    • DRS Automation Level: Disabled
    • VM HA Restart Priority: Disabled
    • VM Monitoring: Disabled
  6. Click Finish .
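When a cluster has many CVMs, the DRS and HA overrides can also be applied per VM with PowerCLI. A minimal sketch, assuming the CVM names follow the default NTNX-*-CVM pattern; the VM Monitoring override still needs to be set in the web client.

  # Disable DRS automation and HA restart priority for all CVMs.
  > Get-VM -Name NTNX-*-CVM | Set-VM -DrsAutomationLevel Disabled -HARestartPriority Disabled -Confirm:$false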

Migrating a Nutanix Cluster from One vCenter Server to Another

About this task

Perform the following steps to migrate a Nutanix cluster from one vCenter Server to another vCenter Server.
Note: The following steps are for migrating a Nutanix cluster with a vSphere Standard Switch (vSwitch). To migrate a Nutanix cluster with a vSphere Distributed Switch (vDS), see the VMware Documentation.

Procedure

  1. Create a vSphere cluster in the vCenter Server where you want to migrate the Nutanix cluster. See Creating a Nutanix Cluster in vCenter.
  2. Configure HA, DRS, and EVC on the created vSphere cluster. See Nutanix Cluster Settings.
  3. Unregister the Nutanix cluster from the source vCenter Server. See Unregistering a Cluster from the vCenter Server.
  4. Move the nodes from the source vCenter Server to the new vCenter Server.
    See the VMware Documentation to know the process.
  5. Register the Nutanix cluster to the new vCenter Server. See Registering a Cluster to vCenter Server.

Storage I/O Control (SIOC)

SIOC controls the I/O usage of a virtual machine and gradually enforces the predefined I/O share levels. Nutanix converged storage architecture does not require SIOC. Therefore, while mounting a storage container on an ESXi host, the system disables SIOC in the statistics mode automatically.

Caution: While mounting a storage container on ESXi hosts running older versions (6.5 or below), the system enables SIOC in the statistics mode by default. Nutanix recommends disabling SIOC because an enabled SIOC can cause the following issues.
  • The storage can become unavailable because the hosts repeatedly create and delete the access .lck-XXXXXXXX files under the .iorm.sf subdirectory, located in the root directory of the storage container.
  • Site Recovery Manager (SRM) failover and failback does not run efficiently.
  • If you are using Metro Availability disaster recovery feature, activate and restore operations do not work.
    Note: When using the Metro Availability disaster recovery feature, Nutanix recommends using an empty storage container. Disable SIOC and delete all the SIOC-related files from the storage container. For more information, see KB-3501 .
Run the NCC health check (see KB-3358 ) to verify if SIOC and SIOC in statistics mode are disabled on storage containers. If SIOC and SIOC in statistics mode are enabled on storage containers, disable them by performing the procedure described in Disabling Storage I/O Control (SIOC) on a Container.
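In addition to the NCC check, you can inspect the SIOC state that vCenter reports for each datastore from PowerCLI. The following is a minimal sketch that reads the IORM configuration; on Nutanix containers both values should report False.

  # Show whether SIOC and SIOC statistics collection are enabled per datastore.
  > Get-Datastore | Select-Object Name, @{N='SIOCEnabled';E={$_.ExtensionData.IormConfiguration.Enabled}}, @{N='StatsCollection';E={$_.ExtensionData.IormConfiguration.StatsCollectionEnabled}}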

Disabling Storage I/O Control (SIOC) on a Container

About this task

Perform the following procedure to disable storage I/O statistics collection.

Procedure

  1. Log on to vCenter with the web client.
  2. Click the Storage view in the left pane.
  3. Right-click the storage container under the Nutanix cluster and select Configure Storage I/O Control .
    The properties for the storage container are displayed, including the Disable Storage I/O Control but enable statistics collection and Disable Storage I/O Control and statistics collection options. By default, neither disable option is selected, which means that SIOC is enabled.
    1. Select the Disable Storage I/O Control and statistics collection option to disable SIOC.
    2. Uncheck the Include I/O statistics for SDRS option.
    3. Click OK .

Node Management

This chapter describes the management tasks you can do on a Nutanix node.

Node Maintenance (ESXi)

Gracefully place a node into maintenance mode (a non-operational state) when you make changes to the network configuration of a node, perform manual firmware upgrades or replacements, perform CVM maintenance, or perform any other maintenance operations.

Entering and Exiting Maintenance Mode

With a minimum AOS release of 6.1.2, 6.5.1, or 6.6, you can place only one node at a time in maintenance mode for each cluster. When a host is in maintenance mode, the CVM is placed in maintenance mode as part of the node maintenance operation and any associated RF1 VMs are powered off. The cluster marks the host as unschedulable so that no new VM instances are created on it. When a node is placed in maintenance mode from the Prism web console, an attempt is made to evacuate VMs from the host. If the evacuation attempt fails, the host remains in the "entering maintenance mode" state, where it is marked unschedulable, waiting for user remediation.

When a host is placed in the maintenance mode, the CVM is placed in the maintenance mode as part of the node maintenance operation. The non-migratable VMs (for example, pinned or RF1 VMs which have affinity towards a specific node) are powered-off while live migratable or high availability (HA) VMs are moved from the original host to other hosts in the cluster. After exiting the maintenance mode, all non-migratable guest VMs are powered on again and the live migrated VMs are automatically restored on the original host.
Note: VMs with CPU passthrough or PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not migrated to other hosts in the cluster when a node undergoes maintenance. Click the View these VMs link to view the list of VMs that cannot be live migrated.

See Putting a Node into Maintenance Mode (vSphere) to place a node under maintenance.

You can also enter or exit a host under maintenance through the vCenter web client. See Putting the CVM and ESXi Host in Maintenance Mode Using vCenter.

Exiting a Node from Maintenance Mode

See Exiting a Node from the Maintenance Mode (vSphere) to remove a node from the maintenance mode.

Viewing a Node under Maintenance Mode

See Viewing a Node that is in Maintenance Mode to view the node under maintenance mode.

Guest VM Status when Node under Maintenance Mode

See Guest VM Status when Node is in Maintenance Mode to view the status of guest VMs when a node is undergoing maintenance operations.

Best Practices and Recommendations

Nutanix strongly recommends using the Enter Maintenance Mode option to place a node under maintenance.

Known Issues and Limitations (ESXi)

  • Maintenance operations initiated from the Prism web console (enter and exit node maintenance) are currently supported on ESXi.
  • Entering or exiting a node under maintenance using the vCenter for ESXi is not equivalent to entering or exiting the node under maintenance from the Prism Element web console.
  • You cannot exit the node from maintenance mode from Prism Element web console if the node is placed under maintenance mode using vCenter (ESXi node). However, you can enter the node maintenance through the Prism Element web console and exit the node maintenance using the vCenter (ESXi node).

Putting a Node into Maintenance Mode (vSphere)

Before you begin

Check the cluster status and resiliency before putting a node under maintenance. You can also verify the status of the guest VMs. See Guest VM Status when Node is in Maintenance Mode for more information.

About this task

As the node enters the maintenance mode, the following high-level tasks are performed internally.
  • The host initiates entering the maintenance mode.
  • The HA VMs are live migrated.
  • The pinned and RF1 VMs are powered-off.
  • The CVM enters the maintenance mode.
  • The CVM is shut down.
  • The host completes entering the maintenance mode.

For more information, see Guest VM Status when Node is in Maintenance Mode to view the status of the guest VMs.

Procedure

  1. Log on to the Prism Element web console.
  2. On the home page, select Hardware from the drop-down menu.
  3. Go to the Table > Host view.
  4. Select the node that you want to put under maintenance.
  5. Click the Enter Maintenance Mode option.
    Figure. Enter Maintenance Mode Option Click to enlarge

  6. On the Host Maintenance window, provide the vCenter credentials for the ESXi host and click Next .
    Figure. Host Maintenance Window (vCenter Credentials) Click to enlarge

  7. On the Host Maintenance window, select the Power off VMs that cannot migrate check box to enable the Enter Maintenance Mode button.
    Figure. Host Maintenance Window (Enter Maintenance Mode) Click to enlarge

    Note: VMs with CPU passthrough, PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not migrated to other hosts in the cluster when a node undergoes maintenance. Click the View these VMs link to view the list of VMs that cannot be live migrated.
  8. Click the Enter Maintenance Mode button.
    • A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This indicates that the host is entering the maintenance mode.
    • The revolving icon disappears and the Exit Maintenance Mode option is enabled after the node completely enters the maintenance mode.
      Figure. Enter Node Maintenance (On-going) Click to enlarge

    • You can also monitor the progress of the node maintenance operation through the newly created Host enter maintenance and Enter maintenance mode tasks which appear in the task tray.
    Note: In case of a node maintenance failure, certain roll-back operations are performed. For example, the CVM is rebooted. But the live-migrated VMs are not restored to the original host.

What to do next

Once the maintenance activity is complete, you can perform any of the following.
  • To view the nodes under maintenance, see Viewing a Node that is in Maintenance Mode.
  • To view the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode.
  • To remove the node from the maintenance mode, see Exiting a Node from the Maintenance Mode (vSphere).

Viewing a Node that is in Maintenance Mode

About this task

Note: This procedure is the same for AHV and ESXi nodes.

Perform the following steps to view a node under maintenance.

Procedure

  1. Log on to the Prism Element web console.
  2. On the home page, select Hardware from the drop-down menu.
  3. Go to the Table > Host view.
  4. Observe the icon along with a tool tip that appears beside the node which is under maintenance. You can also view this icon in the host details view.
    Figure. Example: Node under Maintenance (Table and Host Details View) in AHV Click to enlarge

  5. Alternatively, view the node under maintenance from the Hardware > Diagram view.
    Figure. Example: Node under Maintenance (Diagram and Host Details View) in AHV Click to enlarge

What to do next

You can:
  • To view the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode.
  • To remove the node from the maintenance mode, see Exiting a Node from the Maintenance Mode (vSphere).

Exiting a Node from the Maintenance Mode (vSphere)

After you perform any maintenance activity, exit the node from the maintenance mode.

About this task

As the node exits the maintenance mode, the following high-level tasks are performed internally.
  • The host is taken out of maintenance.
  • The CVM is powered on.
  • The CVM is taken out of maintenance.
After the host exits the maintenance mode, the RF1 VMs are powered on again and the live-migrated VMs migrate back to restore host locality.

For more information, see Guest VM Status when Node is in Maintenance Mode to view the status of the guest VMs.

Procedure

  1. On the Prism web console home page, select Hardware from the drop-down menu.
  2. Go to the Table > Host view.
  3. Select the node which you intend to remove from the maintenance mode.
  4. Click the Exit Maintenance Mode option.
    Figure. Exit Maintenance Mode Option - Table View Click to enlarge

    Figure. Exit Maintenance Mode Option - Diagram View Click to enlarge

  5. On the Host Maintenance window, provide the vCenter credentials for the ESXi host and click Next .
    Figure. Host Maintenance Window (vCenter Credentials) Click to enlarge

  6. On the Host Maintenance window, click the Exit Maintenance Mode button.
    Figure. Host Maintenance Window (Exit Maintenance Mode) Click to enlarge

    • A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This indicates that the host is exiting the maintenance mode.
    • The revolving icon disappears and the Enter Maintenance Mode option is enabled after the node completely exits the maintenance mode.
    • You can also monitor the progress of the exit node maintenance operation through the newly created Host exit maintenance and Exit maintenance mode tasks which appear in the task tray.

What to do next

You can:
  • View the status of node under maintenance, see Viewing a Node that is in Maintenance Mode.
  • View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode.

Guest VM Status when Node is in Maintenance Mode

The following scenarios demonstrate the behavior of three guest VM types - high availability (HA) VMs, pinned VMs, and RF1 VMs, when a node enters and exits a maintenance operation. The HA VMs are live VMs that can migrate across nodes if the host server goes down or reboots. The pinned VMs have the host affinity set to a specific node. The RF1 VMs have affinity towards a specific node or a CVM. To view the status of the guest VMs, go to VM > Table .

Note: The following scenarios are the same for AHV and ESXi nodes.

Scenario 1: Guest VMs before Node Entering Maintenance Mode

In this example, you can observe the status of the guest VMs on the node prior to the node entering the maintenance mode. All the guest VMs are powered-on and reside on the same host.

Figure. Example: Original State of VM and Hosts in AHV Click to enlarge

Scenario 2: Guest VMs during Node Maintenance Mode

  • As the node enters the maintenance mode, the following high-level tasks are performed internally.
    1. The host initiates entering the maintenance mode.
    2. The HA VMs are live migrated.
    3. The pinned and RF1 VMs are powered off.
    4. The CVM enters the maintenance mode.
    5. The CVM is shut down.
    6. The host completes entering the maintenance mode.
Figure. Example: VM and Hosts before Entering Maintenance Mode Click to enlarge

Scenario 3: Guest VMs after Node Exiting Maintenance Mode

  • As the node exits the maintenance mode, the following high-level tasks are performed internally.
    1. The CVM is powered on.
    2. The CVM is taken out of maintenance.
    3. The host is taken out of maintenance.
    After the host exits the maintenance mode, the RF1 VMs are powered on again and the live-migrated VMs migrate back to restore host locality.
Figure. Example: Original State of VM and Hosts in AHV Click to enlarge

Nonconfigurable ESXi Components

The Nutanix manufacturing and installation processes, which run Foundation on the Nutanix nodes, configure the following components. Do not modify any of these components except under the direction of Nutanix Support.

Nutanix Software

Modifying any of the following Nutanix software settings may inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.

  • Local datastore name.
  • Configuration and contents of any CVM (except memory configuration to enable certain features).
Important: Note the following important considerations about CVMs.
  • Do not delete the Nutanix CVM.
  • Do not take a snapshot of the CVM for backup.
  • Do not rename, modify, or delete the admin and nutanix user accounts of the CVM.
  • Do not create additional CVM user accounts.

    Use the default accounts ( admin or nutanix ), or use sudo to elevate to the root account.

  • Do not decrease CVM memory below recommended minimum amounts required for cluster and add-in features.

    Nutanix Cluster Checks (NCC), preupgrade cluster checks, and the AOS upgrade process detect and monitor CVM memory.

  • Nutanix does not support the use of third-party storage on hosts that are part of Nutanix clusters.

    Normal cluster operations might be affected if there are connectivity issues with the third-party storage you attach to the hosts in a Nutanix cluster.

  • Do not run any commands on a CVM that are not in the Nutanix documentation.

ESXi

Modifying any of the following ESXi settings can inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.

  • NFS datastore settings
  • VM swapfile location
  • VM startup/shutdown order
  • CVM name
  • CVM virtual hardware configuration file (.vmx file)
  • iSCSI software adapter settings
  • Hardware settings, including passthrough HBA settings.

  • vSwitchNutanix standard virtual switch
  • vmk0 interface in Management Network port group
  • SSH
    Note: An SSH connection is necessary for various scenarios. For example, to establish connectivity with the ESXi server through a control plane that does not depend on additional management systems or processes. The SSH connection is also required to modify the networking and control paths in the case of a host failure to maintain High Availability. For example, the CVM autopathing (Ha.py) requires an SSH connection. In case a local CVM becomes unavailable, another CVM in the cluster performs the I/O operations over the 10GbE interface.
  • Open host firewall ports
  • CPU resource settings such as CPU reservation, limit, and shares of the CVM.
    Caution: Do not use the Reset System Configuration option.
  • ProductLocker symlink setting to point at the default datastore.

    Do not change the /productLocker symlink to point at a non-local datastore.

    Do not change the ProductLockerLocation advanced setting.

Putting the CVM and ESXi Host in Maintenance Mode Using vCenter

About this task

Nutanix recommends placing the CVM and ESXi host into maintenance mode while the Nutanix cluster undergoes maintenance or patch installations.
Caution: Verify the data resiliency status of your Nutanix cluster. Ensure that the replication factor (RF) supports putting the node in maintenance mode.

Procedure

  1. Log on to vCenter with the web client.
  2. If vSphere DRS is enabled on the Nutanix cluster, skip this step. If vSphere DRS is disabled, perform one of the following.
    • Manually migrate all the VMs except the CVM to another host in the Nutanix cluster.
    • Shut down VMs other than the CVM that you do not want to migrate to another host.
  3. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.
  4. In the Enter Maintenance Mode dialog box, check Move powered-off and suspended virtual machines to other hosts in the cluster and click OK .
    Note:

    In certain rare conditions, even when DRS is enabled, some VMs do not automatically migrate due to user-defined affinity rules or VM configuration settings. The VMs that do not migrate appear under cluster DRS > Faults when a maintenance mode task is in progress. To address the faults, either manually shut down those VMs or ensure the VMs can be migrated.

    Caution: When you put the host in maintenance mode, the maintenance mode process powers down or migrates all the VMs that are running on the host.
    The host gets ready to go into maintenance mode, which prevents VMs from running on this host. DRS automatically attempts to migrate all the VMs to another host in the Nutanix cluster.

    The host enters maintenance mode after its CVM is shut down.
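The same sequence can be driven from the command line. The following is a minimal sketch with a placeholder host name; as noted above, the host only finishes entering maintenance mode after its CVM is shut down.

  # Request maintenance mode; with DRS in fully automated mode, running VMs are migrated off the host.
  > Set-VMHost -VMHost esxi-01.example.com -State Maintenance -RunAsync
  # From an SSH session to the CVM on that host, shut the CVM down gracefully.
  nutanix@cvm$ cvm_shutdown -P now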

Shutting Down an ESXi Node in a Nutanix Cluster

Before you begin

Verify the data resiliency status of your Nutanix cluster. If the Nutanix cluster only has replication factor 2 (RF2), you can shut down only one node for each cluster. If an RF2 cluster has more than one node shut down, shut down the entire cluster.

About this task

You can put the ESXi host into maintenance mode and shut it down either from the web client or from the command line. For more information about shutting down a node from the command line, see Shutting Down an ESXi Node in a Nutanix Cluster (vSphere Command Line).

Procedure

  1. Log on to vCenter with the web client.
  2. Put the Nutanix node in the maintenance mode. For more information, see Putting the CVM and ESXi Host in Maintenance Mode Using vCenter.
    Note: If DRS is not enabled, manually migrate or shut down all the VMs excluding the CVM. Even when DRS is enabled, some VMs might not be migrated automatically because of a VM configuration option that is not present on the target host.
  3. Right-click the host and select Shut Down .
    Wait until the vCenter displays that the host is not responding, which may take several minutes. If you are logged on to the ESXi host rather than to vCenter, the web client disconnects when the host shuts down.

Shutting Down an ESXi Node in a Nutanix Cluster (vSphere Command Line)

Before you begin

Verify the data resiliency status of your Nutanix cluster. If the Nutanix cluster only has replication factor 2 (RF2), you can shut down only one node for each cluster. If an RF2 cluster has more than one node shut down, shut down the entire cluster.

About this task

Procedure

  1. Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
  2. Log on to another CVM in the Nutanix cluster with SSH.
  3. Shut down the host.
    nutanix@cvm$ ~/serviceability/bin/esx-enter-maintenance-mode -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    If successful, this command returns no output. If it fails with a message like the following, VMs are probably still running on the host.

    CRITICAL esx-enter-maintenance-mode:42 Command vim-cmd hostsvc/maintenance_mode_enter failed with ret=-1

    Ensure that all VMs are shut down or moved to another host and try again before proceeding.

    nutanix@cvm$ ~/serviceability/bin/esx-shutdown -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Alternatively, you can put the ESXi host into maintenance mode and shut it down using the vSphere web client. For more information, see Shutting Down an ESXi Node in a Nutanix Cluster.

    If the host shuts down, a message like the following is displayed.

    INFO esx-shutdown:67 Please verify if ESX was successfully shut down using ping hypervisor_ip_addr

    hypervisor_ip_addr is the IP address of the ESXi host.

  4. Confirm that the ESXi host has shut down.
    nutanix@cvm$ ping hypervisor_ip_addr

    Replace hypervisor_ip_addr with the IP address of the ESXi host.

    If no ping packets are answered, the ESXi host shuts down.

Starting an ESXi Node in a Nutanix Cluster

About this task

You can start an ESXi host either from the web client or from the command line. For more information about starting a node from the command line, see Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line).

Procedure

  1. If the node is off, turn it on by pressing the power button on the front. Otherwise, proceed to the next step.
  2. Log on to vCenter (or to the node if vCenter is not running) with the web client.
  3. Right-click the ESXi host and select Exit Maintenance Mode .
  4. Right-click the CVM and select Power > Power on .
    Wait approximately 5 minutes for all services to start on the CVM.
  5. Log on to another CVM in the Nutanix cluster with SSH.
  6. Confirm that the Nutanix cluster services are running on the CVM.
    nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Output similar to the following is displayed.
        Name                      : 10.1.56.197
        Status                    : Up
        ... ... 
        StatsAggregator           : up
        SysStatCollector          : up

    Every service listed should be up .

  7. Right-click the ESXi host in the web client and select Rescan for Datastores . Confirm that all Nutanix datastores are available.
  8. Verify that the status of all services on all the CVMs are Up.
    nutanix@cvm$ cluster status
    If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix cluster.
    CVM:host IP-Address Up
                                    Zeus   UP       [9935, 9980, 9981, 9994, 10015, 10037]
                               Scavenger   UP       [25880, 26061, 26062]
                                  Xmount   UP       [21170, 21208]
                        SysStatCollector   UP       [22272, 22330, 22331]
                               IkatProxy   UP       [23213, 23262]
                        IkatControlPlane   UP       [23487, 23565]
                           SSLTerminator   UP       [23490, 23620]
                          SecureFileSync   UP       [23496, 23645, 23646]
                                  Medusa   UP       [23912, 23944, 23945, 23946, 24176]
                      DynamicRingChanger   UP       [24314, 24404, 24405, 24558]
                                  Pithos   UP       [24317, 24555, 24556, 24593]
                              InsightsDB   UP       [24322, 24472, 24473, 24583]
                                  Athena   UP       [24329, 24504, 24505]
                                 Mercury   UP       [24338, 24515, 24516, 24614]
                                  Mantle   UP       [24344, 24572, 24573, 24634]
                              VipMonitor   UP       [18387, 18464, 18465, 18466, 18474]
                                Stargate   UP       [24993, 25032]
                    InsightsDataTransfer   UP       [25258, 25348, 25349, 25388, 25391, 25393, 25396]
                                   Ergon   UP       [25263, 25414, 25415]
                                 Cerebro   UP       [25272, 25462, 25464, 25581]
                                 Chronos   UP       [25281, 25488, 25489, 25547]
                                 Curator   UP       [25294, 25528, 25529, 25585]
                                   Prism   UP       [25718, 25801, 25802, 25899, 25901, 25906, 25941, 25942]
                                     CIM   UP       [25721, 25829, 25830, 25856]
                            AlertManager   UP       [25727, 25862, 25863, 25990]
                                Arithmos   UP       [25737, 25896, 25897, 26040]
                                 Catalog   UP       [25749, 25989, 25991]
                               Acropolis   UP       [26011, 26118, 26119]
                                   Uhura   UP       [26037, 26165, 26166]
                                    Snmp   UP       [26057, 26214, 26215]
                       NutanixGuestTools   UP       [26105, 26282, 26283, 26299]
                              MinervaCVM   UP       [27343, 27465, 27466, 27730]
                           ClusterConfig   UP       [27358, 27509, 27510]
                                Aequitas   UP       [27368, 27567, 27568, 27600]
                             APLOSEngine   UP       [27399, 27580, 27581]
                                   APLOS   UP       [27853, 27946, 27947]
                                   Lazan   UP       [27865, 27997, 27999]
                                  Delphi   UP       [27880, 28058, 28060]
                                    Flow   UP       [27896, 28121, 28124]
                                 Anduril   UP       [27913, 28143, 28145]
                                   XTrim   UP       [27956, 28171, 28172]
                           ClusterHealth   UP       [7102, 7103, 27995, 28209,28495, 28496, 28503, 28510,	
    28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646, 28648, 28792,	
    28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135, 29142, 29146, 29150,	
    29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]

Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line)

About this task

You can start an ESXi host either from the command line or from the web client. For more information about starting a node from the web client, see Starting an ESXi Node in a Nutanix Cluster .

Procedure

  1. Log on to a running CVM in the Nutanix cluster with SSH.
  2. Start the CVM.
    nutanix@cvm$ ~/serviceability/bin/esx-exit-maintenance-mode -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    If successful, this command produces no output. If it fails, wait 5 minutes and try again.

    nutanix@cvm$ ~/serviceability/bin/esx-start-cvm -s cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.


    If the CVM starts, a message like the following is displayed.

    INFO esx-start-cvm:67 CVM started successfully. Please verify using ping cvm_ip_addr

    cvm_ip_addr is the IP address of the CVM on the ESXi host.

    After starting, the CVM restarts once. Wait three to four minutes before you ping the CVM.

    Alternatively, you can take the ESXi host out of maintenance mode and start the CVM using the web client. For more information, see Starting an ESXi Node in a Nutanix Cluster.

  3. Verify that the status of all services on all the CVMs are Up.
    nutanix@cvm$ cluster status
    If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix cluster.
    CVM:host IP-Address Up
                                    Zeus   UP       [9935, 9980, 9981, 9994, 10015, 10037]
                               Scavenger   UP       [25880, 26061, 26062]
                                  Xmount   UP       [21170, 21208]
                        SysStatCollector   UP       [22272, 22330, 22331]
                               IkatProxy   UP       [23213, 23262]
                        IkatControlPlane   UP       [23487, 23565]
                           SSLTerminator   UP       [23490, 23620]
                          SecureFileSync   UP       [23496, 23645, 23646]
                                  Medusa   UP       [23912, 23944, 23945, 23946, 24176]
                      DynamicRingChanger   UP       [24314, 24404, 24405, 24558]
                                  Pithos   UP       [24317, 24555, 24556, 24593]
                              InsightsDB   UP       [24322, 24472, 24473, 24583]
                                  Athena   UP       [24329, 24504, 24505]
                                 Mercury   UP       [24338, 24515, 24516, 24614]
                                  Mantle   UP       [24344, 24572, 24573, 24634]
                              VipMonitor   UP       [18387, 18464, 18465, 18466, 18474]
                                Stargate   UP       [24993, 25032]
                    InsightsDataTransfer   UP       [25258, 25348, 25349, 25388, 25391, 25393, 25396]
                                   Ergon   UP       [25263, 25414, 25415]
                                 Cerebro   UP       [25272, 25462, 25464, 25581]
                                 Chronos   UP       [25281, 25488, 25489, 25547]
                                 Curator   UP       [25294, 25528, 25529, 25585]
                                   Prism   UP       [25718, 25801, 25802, 25899, 25901, 25906, 25941, 25942]
                                     CIM   UP       [25721, 25829, 25830, 25856]
                            AlertManager   UP       [25727, 25862, 25863, 25990]
                                Arithmos   UP       [25737, 25896, 25897, 26040]
                                 Catalog   UP       [25749, 25989, 25991]
                               Acropolis   UP       [26011, 26118, 26119]
                                   Uhura   UP       [26037, 26165, 26166]
                                    Snmp   UP       [26057, 26214, 26215]
                       NutanixGuestTools   UP       [26105, 26282, 26283, 26299]
                              MinervaCVM   UP       [27343, 27465, 27466, 27730]
                           ClusterConfig   UP       [27358, 27509, 27510]
                                Aequitas   UP       [27368, 27567, 27568, 27600]
                             APLOSEngine   UP       [27399, 27580, 27581]
                                   APLOS   UP       [27853, 27946, 27947]
                                   Lazan   UP       [27865, 27997, 27999]
                                  Delphi   UP       [27880, 28058, 28060]
                                    Flow   UP       [27896, 28121, 28124]
                                 Anduril   UP       [27913, 28143, 28145]
                                   XTrim   UP       [27956, 28171, 28172]
                           ClusterHealth   UP       [7102, 7103, 27995, 28209,28495, 28496, 28503, 28510,	
    28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645, 28646, 28648, 28792,	
    28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135, 29142, 29146, 29150,	
    29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]
  4. Verify the storage.
    1. Log on to the ESXi host with SSH.
    2. Rescan for datastores.
      root@esx# esxcli storage core adapter rescan --all
    3. Confirm that cluster VMFS datastores, if any, are available.
      root@esx# esxcfg-scsidevs -m | awk '{print $5}'

Restarting an ESXi Node using CLI

Before you begin

Shut down the guest VMs, including vCenter, that are running on the node, or move them to other nodes in the Nutanix cluster.

About this task

Procedure

  1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the web client.
  2. Right-click the host and select Maintenance mode > Enter Maintenance Mode .
    In the Confirm Maintenance Mode dialog box, click OK .
    The host is placed in maintenance mode, which prevents VMs from running on the host.
    Note: The host does not enter maintenance mode until the CVM is shut down.
  3. Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
    Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command, so that the cluster is aware that the CVM is unavailable.
  4. Right-click the node and select Power > Reboot .
    Wait until vCenter shows that the host is not responding and then is responding again, which takes several minutes.

    If you are logged on to the ESXi host rather than to vCenter, the web client disconnects when the host shuts down.

  5. Right-click the ESXi host and select Exit Maintenance Mode .
  6. Right-click the CVM and select Power > Power on .
    Wait approximately 5 minutes for all services to start on the CVM.
  7. Log on to the CVM with SSH.
  8. Confirm that the Nutanix cluster services are running on the CVM.
    nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr

    Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.

    Output similar to the following is displayed.
        Name                      : 10.1.56.197
        Status                    : Up
        ... ... 
        StatsAggregator           : up
        SysStatCollector          : up

    Every service listed should be up .

  9. Right-click the ESXi host in the web client and select Rescan for Datastores . Confirm that all Nutanix datastores are available.

Rebooting an ESXi Node in a Nutanix Cluster

About this task

The Request Reboot operation in the Prism web console gracefully restarts the selected nodes one after the other.

Perform the following procedure to restart the nodes in the cluster.

Procedure

  1. Click the gear icon in the main menu and then select Reboot in the Settings page.
  2. In the Request Reboot window, select the nodes you want to restart, and click Reboot .
    Figure. Request Reboot of ESXi Node

    A progress bar is displayed that indicates the progress of the restart of each node.

Changing an ESXi Node Name

After running a bare-metal Foundation, you can change the host (node) name from the command line or by using the vSphere web client.

To change the hostname, see the VMware Documentation.
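As a minimal sketch, assuming you rename the host from the ESXi shell (the hostname and domain values below are hypothetical; see the VMware Documentation for related steps such as updating DNS records and the vCenter inventory):

  root@esx# esxcli system hostname set --host=node-01 --domain=example.com
  root@esx# esxcli system hostname get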

Changing an ESXi Node Password

Although it is not required for the root user to have the same password on all hosts (nodes), doing so makes cluster management and support much easier. If you do select a different password for one or more hosts, make sure to note the password for each host.

To change the host password, see VMware Documentation .
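As a minimal sketch, assuming you change the root password from the ESXi shell (password complexity requirements are described in the VMware Documentation):

  root@esx# passwd root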

Changing the CVM Memory Configuration (ESXi)

About this task

You can increase the memory reserved for each CVM in your Nutanix cluster by using the 1-click CVM Memory Upgrade option available from the Prism Element web console.

Increase memory size depending on the workload type or to enable certain AOS features. For more information about CVM memory sizing recommendations and instructions about how to increase the CVM memory, see Increasing the Controller VM Memory Size in the Prism Web Console Guide .

VM Management

For the list of supported VMs, see Compatibility and Interoperability Matrix.

VM Management Using Prism Central

You can create and manage a VM on your ESXi cluster from Prism Central. For more information, see Creating a VM through Prism Central (ESXi) and Managing a VM (ESXi).

Creating a VM through Prism Central (ESXi)

In ESXi clusters, you can create a new virtual machine (VM) through Prism Central.

Before you begin

  • See the requirements and limitations section in vCenter Server Integration in the Prism Central Guide before proceeding.
  • Register the vCenter Server with your cluster. For more information, see Registering vCenter Server (Prism Central) in the Prism Central Guide .

About this task

To create a VM, do the following:

Procedure

  1. Go to the List tab of the VMs dashboard (see VM Summary View in the Prism Central Guide ) and click the Create VM button.
    The Create VM wizard appears.
  2. In the Configuration step, do the following in the indicated fields:
    1. Name : Enter a name for the VM.
    2. Description (optional): Enter a description for the VM.
    3. Cluster : Select the target cluster from the pull-down list on which you intend to create the VM.
    4. Number of VMs : Enter the number of VMs you intend to create. The created VM names are suffixed sequentially.
    5. vCPU(s) : Enter the number of virtual CPUs to allocate to this VM.
    6. Number of Cores per vCPU : Enter the number of cores assigned to each virtual CPU.
    7. Memory : Enter the amount of memory (in GiBs) to allocate to this VM.
    Figure. Create VM Window (Configuration)

  3. In the Resources step, do the following.

    Disks: To attach a disk to the VM, click the Attach Disk button. The Add Disks dialog box appears. Do the following in the indicated fields:

    Figure. Add Disk Dialog Box

    1. Type : Select the type of storage device, Disk or CD-ROM , from the pull-down list.
    2. Operation : Specify the device contents from the pull-down list.
      • Select Clone from NDFS file to copy any file from the cluster that can be used as an image onto the disk.

      • [ CD-ROM only] Select Empty CD-ROM to create a blank CD-ROM device. A CD-ROM device is needed when you intend to provide a system image from CD-ROM.
      • [Disk only] Select Allocate on Storage Container to allocate space without specifying an image. Selecting this option means you are allocating space only. You have to provide a system image later from a CD-ROM or other source.
      • Select Clone from Image to copy an image that you have imported by using the image service feature onto the disk.
    3. Bus Type : Select the bus type from the pull-down list. The choices are IDE , SCSI , PCI , or SATA .
    4. Path : Enter the path to the desired system image.
    5. Clone from Image : Select the image that you have created by using the image service feature.

      This field appears only when Clone from Image is selected. It specifies the image to copy.

      Note: If the image you created does not appear in the list, see KB-4892.
    6. Storage Container : Select the storage container to use from the pull-down list.

      This field appears only when Allocate on Storage Container is selected. The list includes all storage containers created for this cluster.

    7. Capacity : Enter the disk size in GiB.
    8. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the Create VM dialog box.
    9. Repeat this step to attach additional devices to the VM.
    Figure. Create VM Window (Resources)

  4. In the Resources step, do the following.

    Networks: To create a network interface for the VM, click the Attach to Subnet button. The Attach to Subnet dialog box appears.

    Do the following in the indicated fields:

    Figure. Attach to Subnet Dialog Box

    1. Subnet : Select the target virtual LAN from the pull-down list.

      The list includes all defined networks (see Configuring Network Connections in the Prism Central Guide ).

    2. Network Adapter Type : Select the network adapter type from the pull-down list.

      For information about the list of supported adapter types, see vCenter Server Integration in the Prism Central Guide .

    3. Network Connection State : Select the state in which you want the network to operate after VM creation. The options are Connected or Disconnected .
    4. When all the field entries are correct, click the Add button to create a network interface for the VM and return to the Create VM dialog box.
    5. Repeat this step to create more network interfaces for the VM.
  5. In the Management step, do the following.
    1. Categories : Search for the category to be assigned to the VM. The policies associated with the category value are assigned to the VM.
    2. Guest OS : Type and select the guest operating system.

      The guest operating system that you select affects the supported devices and number of virtual CPUs available for the virtual machine. The Create VM wizard does not install the guest operating system. For information about the list of supported operating systems, see vCenter Server Integration in the Prism Central Guide .

      Figure. Create VM Window (Management)

  6. In the Review step, when all the field entries are correct, click the Create VM button to create the VM and close the Create VM dialog box.

    The new VM appears in the VMs entity page list.

Managing a VM (ESXi)

You can manage virtual machines (VMs) in an ESXi cluster through Prism Central.

Before you begin

  • See the requirements and limitations section in vCenter Server Integration in the Prism Central Guide before proceeding.
  • Ensure that you have registered the vCenter Server with your cluster. For more information, see Registering vCenter Server (Prism Central) in the Prism Central Guide .

About this task

After creating a VM (see Creating a VM through Prism Central (ESXi)), you can use Prism Central to update the VM configuration, delete the VM, clone the VM, launch a console window, power on (or off) the VM, enable flash mode for a VM, assign the VM to a protection policy, create VM recovery point, add the VM to a recovery plan, run a playbook, manage categories, install and manage Nutanix Guest Tools (NGT), manage the VM ownership, or configure QoS settings.

You can perform these tasks by using any of the following methods:

  • Select the target VM in the List tab of the VMs dashboard (see VM Summary View in the Prism Central Guide ) and choose the required action from the Actions menu.
  • Right-click on the target VM in the List tab of the VMs dashboard and select the required action from the drop-down list.
  • Go to the details page of a selected VM (see VM Details View in the Prism Central Guide ) and select the desired action.
Note: The available actions appear in bold; other actions are unavailable. The available actions depend on the current state of the VM and your permissions.

Procedure

  • To modify the VM configuration, select Update .

    The Update VM dialog box appears, which includes the same fields as the Create VM dialog box. Make the desired changes and then click the Save button in the Review step.

    Figure. Update VM Window (Resources)

  • Disks: You can add new disks to the VM using the Attach Disk option. You can also modify the existing disk attached to the VM using the controls under the actions column. See Creating a VM through Prism Central (ESXi) before you create a new disk for a VM. You can enable or disable the flash mode settings for the VM. To enable flash mode on the VM, click the Enable Flash Mode check box. After you enable this feature on the VM, the status is updated in the VM table view.
  • Networks: You can attach a new network to the VM using the Attach to Subnet option. You can also modify an existing subnet attached to the VM. See Creating a VM through Prism Central (ESXi) before you modify the NIC network or create a new NIC for the VM.
  • To delete the VM, select Delete . A window prompt appears; click the OK button to delete the VM.
  • To clone the VM, select Clone .

    This displays the Clone VM dialog box, which includes the same fields as the Create VM dialog box. A cloned VM inherits most of the configuration (except the name) of the source VM. Enter a name for the clone and then click the Save button to create the clone. You can optionally override some of the configurations before clicking the Save button. For example, you can override the number of vCPUs, memory size, boot priority, NICs, or the guest customization.

    Note:
    • You can clone up to 250 VMs at a time.
    • You cannot override the secure boot setting while cloning a VM, unless the source VM already had secure boot setting enabled.
    Figure. Clone VM Window

  • To launch a console window, select Launch Console .

    This opens a Virtual Network Computing (VNC) client and displays the console in a new tab or window. This option is available only when the VM is powered on. The VM power options that you access from the Power On Actions (or Power Off Actions ) action link below the VM table can also be accessed from the VNC console window. To access the VM power options, click the Power button at the top-right corner of the console window.

    Note: A VNC client may not function properly on all browsers. Some keys are not recognized when the browser is Chrome. (Firefox typically works best.)
    Figure. Console Window (VNC)

  • To power on (or off) the VM, select Power on (or Power off ).
  • To disable (or enable) efficiency measurement for the VM, select Disable Efficiency Measurement (or Enable Efficiency Measurement ).
  • To disable (or enable) anomaly detection for the VM, select Disable Anomaly Detection (or Enable Anomaly Detection ).
  • To assign the VM to a protection policy, select Protect . This opens a page to specify the protection policy to which this VM should be assigned. To remove the VM from a protection policy, select Unprotect .
    Note: You can create a protection policy for a VM or set of VMs that belong to one or more categories by enabling Leap and configuring the Availability Zone.
  • To migrate the VM to another host, select Migrate .

    This displays the Migrate VM dialog box. Select the target host from the pull-down list (or select the System will automatically select a host option to let the system choose the host) and then click the Migrate button to start the migration.

    Figure. Migrate VM Window

    Note: Nutanix recommends live migrating VMs while they are under light load. If VMs are migrated while heavily utilized, migration may fail because of limited bandwidth.
  • To add this VM to a recovery plan you created previously, select Add to Recovery Plan . For more information, see Adding Guest VMs Individually to a Recovery Plan in the Leap Administration Guide .
  • To create a VM recovery point, select Create Recovery Point .

    This displays the Create VM Recovery Point dialog box. Enter a name for the recovery point. You can choose to create an application-consistent VM recovery point by selecting the check box. The VM can then be restored or replicated, locally or remotely, to the state captured in the chosen recovery point.

    Figure. Create VM Recovery Point Window

  • To run a playbook you created previously, select Run Playbook . For more information, see Running a Playbook (Manual Trigger) in the Prism Central Guide .
  • To assign the VM a category value, select Manage Categories .

    This displays the Manage VM Categories page. For more information, see Assigning a Category in the Prism Central Guide .

  • To install Nutanix Guest Tools (NGT), select Install NGT . For more information, see Installing NGT on Multiple VMs in the Prism Central Guide .
  • To enable (or disable) NGT, select Manage NGT Applications . For more information, see Managing NGT Applications in the Prism Central Guide .
    The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
    Note: If you clone a VM, by default NGT is not enabled on the cloned VM. You need to again enable and mount NGT on the cloned VM. If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and Mounting the NGT Installer on Cloned VMs in the Prism Web Console Guide .

    If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the following nCLI command.

    ncli> ngt mount vm-id=virtual_machine_id

    For example, to mount the NGT on the VM with VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the following command.

    ncli> ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
    Note:
    • Self-service restore feature is not enabled by default on a VM. You need to manually enable the self-service restore feature.
    • If you have created the NGT ISO CD-ROMs on AOS 4.6 or earlier releases, the NGT functionality will not work even if you upgrade your cluster because REST APIs have been disabled. You need to unmount the ISO, remount the ISO, install the NGT software again, and then upgrade to a later AOS version.
  • To upgrade NGT, select Upgrade NGT . For more information, see Upgrading NGT in the Prism Central Guide .
  • To establish VM host affinity, select Configure VM Host Affinity .

    A window appears with the available hosts. Select (click the icon for) one or more of the hosts and then click the Save button. This creates an affinity between the VM and the selected hosts. If possible, it is recommended that you create an affinity to multiple hosts (at least two) to protect against downtime due to a node failure. For more information about VM affinity policies, see Affinity Policies Defined in Prism Central in the Prism Central Guide .

    Figure. Set VM Host Affinity Window

  • To add a VM to the catalog, select Add to Catalog . This displays the Add VM to Catalog page. For more information, see Adding a Catalog Item in the Prism Central Guide .
  • To specify a project and user who own this VM, select Manage Ownership .
    In the Manage VM Ownership window, do the following in the indicated fields:
    1. Project : Select the target project from the pull-down list.
    2. User : Enter a user name. A list of matches appears as you enter a string; select the user name from the list when it appears.
    3. Click the Save button.
    Figure. VM Ownership Window

  • To configure quality of service (QoS) settings, select Set QoS Attributes . For more information, see Setting QoS for an Individual VM in the Prism Central Guide .

VM Management using Prism Element

You can create and manage a VM on your ESXi cluster from Prism Element. For more information, see Creating a VM (ESXi) and Managing a VM (ESXi).

Creating a VM (ESXi)

In ESXi clusters, you can create a new virtual machine (VM) through the web console.

Before you begin

  • See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism Web Console Guide before proceeding.
  • Register the vCenter Server with your cluster. For more information, see Registering a Cluster to vCenter Server.

About this task

When creating a VM, you can configure all of its components, such as number of vCPUs and memory, but you cannot attach a volume group to the VM.

To create a VM, do the following:

Procedure

  1. In the VM dashboard (see VM Dashboard), click the Create VM button.
    The Create VM dialog box appears.
  2. Do the following in the indicated fields:
    1. Name : Enter a name for the VM.
    2. Description (optional): Enter a description for the VM.
    3. Guest OS : Type and select the guest operating system.
      The guest operating system that you select affects the supported devices and number of virtual CPUs available for the virtual machine. The Create VM wizard does not install the guest operating system. For information about the list of supported operating systems, see VM Management through Prism Element (ESXi) in the Prism Web Console Guide .
    4. vCPU(s) : Enter the number of virtual CPUs to allocate to this VM.
    5. Number of Cores per vCPU : Enter the number of cores assigned to each virtual CPU.
    6. Memory : Enter the amount of memory (in GiBs) to allocate to this VM.
  3. To attach a disk to the VM, click the Add New Disk button.
    The Add Disks dialog box appears.
    Figure. Add Disk Dialog Box

    Do the following in the indicated fields:
    1. Type : Select the type of storage device, DISK or CD-ROM , from the pull-down list.
      The following fields and options vary depending on whether you choose DISK or CD-ROM .
    2. Operation : Specify the device contents from the pull-down list.
      • Select Clone from ADSF file to copy any file from the cluster that can be used as an image onto the disk.
      • Select Allocate on Storage Container to allocate space without specifying an image. (This option appears only when DISK is selected in the previous field.) Selecting this option means you are allocating space only. You have to provide a system image later from a CD-ROM or other source.
    3. Bus Type : Select the bus type from the pull-down list. The choices are IDE or SCSI .
    4. ADSF Path : Enter the path to the desired system image.
      This field appears only when Clone from ADSF file is selected. It specifies the image to copy. Enter the path name as /storage_container_name/vmdk_name.vmdk. For example, to clone an image from myvm-flat.vmdk in a storage container named crt1, enter /crt1/myvm-flat.vmdk. When you type the storage container name (/storage_container_name/), a list appears of the VMDK files in that storage container (assuming one or more VMDK files have previously been copied to that storage container). A short example of listing the flat VMDK files in a container follows this procedure.
      Note: Make sure you are copying from a flat file.
    5. Storage Container : Select the storage container to use from the pull-down list.
      This field appears only when Allocate on Storage Container is selected. The list includes all storage containers created for this cluster.
    6. Size : Enter the disk size in GiBs.
    7. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the Create VM dialog box.
    8. Repeat this step to attach more devices to the VM.
  4. To create a network interface for the VM, click the Add New NIC button.
    The Create NIC dialog box appears. Do the following in the indicated fields:
    1. VLAN Name : Select the target virtual LAN from the pull-down list.
      The list includes all defined networks. For more information, see Network Configuration for VM Interfaces in the Prism Web Console Guide .
    2. Network Adapter Type : Select the network adapter type from the pull-down list.

      For information about the list of supported adapter types, see VM Management through Prism Element (ESXi) in the Prism Web Console Guide .

    3. Network UUID : This is a read-only field that displays the network UUID.
    4. Network Address/Prefix : This is a read-only field that displays the network IP address and prefix.
    5. When all the field entries are correct, click the Add button to create a network interface for the VM and return to the Create VM dialog box.
    6. Repeat this step to create more network interfaces for the VM.
  5. When all the field entries are correct, click the Save button to create the VM and close the Create VM dialog box.
    The new VM appears in the VM table view. For more information, see VM Table View in the Prism Web Console Guide .
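The following is a minimal sketch of how to list the flat VMDK files available in a storage container before entering an ADSF path in the Add Disks dialog box. It assumes the hypothetical storage container crt1 is mounted as an NFS datastore on the ESXi host.

  root@esx# find /vmfs/volumes/crt1 -name '*-flat.vmdk'

Strip the /vmfs/volumes prefix from each returned path to derive the /storage_container_name/vmdk_name.vmdk format used in the dialog box.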

Managing a VM (ESXi)

You can use the web console to manage virtual machines (VMs) in the ESXi clusters.

Before you begin

  • See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism Web Console Guide before proceeding.
  • Ensure that you have registered the vCenter Server with your cluster. For more information, see Registering a Cluster to vCenter Server.

About this task

After creating a VM, you can use the web console to manage guest tools, power operations, suspend, launch a VM console window, update the VM configuration, clone the VM, or delete the VM. To accomplish one or more of these tasks, do the following:

Note: Your available options depend on the VM status, type, and your permissions; actions that do not apply to the selected VM are unavailable.

Procedure

  1. In the VM dashboard (see VM Dashboard), click the Table view.
  2. Select the target VM in the table (top section of screen).
    The summary line (middle of screen) displays the VM name with a set of relevant action links on the right. You can also right-click on a VM to select a relevant action.

    The possible actions are Manage Guest Tools , Launch Console , Power on (or Power off actions ), Suspend (or Resume ), Clone , Update , and Delete . The following steps describe how to perform each action.

    Figure. VM Action Links

  3. To manage guest tools, click Manage Guest Tools and do the following.
    You can also enable NGT applications (self-service restore, volume snapshot service, and application-consistent snapshots) as part of managing guest tools.
    1. Select Enable Nutanix Guest Tools check box to enable NGT on the selected VM.
    2. Select Mount Nutanix Guest Tools to mount NGT on the selected VM.
      Ensure that VM has at least one empty IDE CD-ROM or SATA slot to attach the ISO.

      The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
    3. To enable the self-service restore feature for Windows VMs, select the Self Service Restore (SSR) check box.
      The self-service restore feature is enabled on the VM. The guest VM administrator can restore the desired file or files from the VM. For more information about the self-service restore feature, see Self-Service Restore in the Data Protection and Recovery with Prism Element guide.

    4. After you select the Enable Nutanix Guest Tools check box, the VSS and application-consistent snapshot feature is enabled by default.
      After this feature is enabled, Nutanix native in-guest VmQuiesced snapshot service (VSS) agent is used to take application-consistent snapshots for all the VMs that support VSS. This mechanism takes application-consistent snapshots without any VM stuns (temporary unresponsive VMs) and also enables third-party backup providers like Commvault and Rubrik to take application-consistent snapshots on Nutanix platform in a hypervisor-agnostic manner. For more information, see Conditions for Application-consistent Snapshots in the Data Protection and Recovery with Prism Element guide.

    5. To mount VMware guest tools, select the Mount VMware Guest Tools check box.
      The VMware guest tools are mounted on the VM.
      Note: You can mount both VMware guest tools and Nutanix Guest Tools at the same time on a particular VM provided the VM has sufficient empty CD-ROM slots.
    6. Click Submit .
      The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
      Note:
      • If you clone a VM, by default NGT is not enabled on the cloned VM. If the cloned VM is powered off, enable NGT from the UI and start the VM. If cloned VM is powered on, enable NGT from the UI and restart the Nutanix guest agent service.
      • If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and Mounting the NGT Installer on Cloned VMs in the Prism Web Console Guide .
      If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the following nCLI command.
      ncli> ngt mount vm-id=virtual_machine_id

      For example, to mount the NGT on the VM with VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the following command.

      ncli> ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
      Caution: In AOS 4.6, for the powered-on Linux VMs on AHV, ensure that the NGT ISO is ejected or unmounted within the guest VM before disabling NGT by using the web console. This issue is specific for 4.6 version and does not occur from AOS 4.6.x or later releases.
      Note: If you created the NGT ISO CD-ROMs on a release prior to AOS 4.6, the NGT functionality will not work even if you upgrade your cluster because REST APIs have been disabled. You must unmount the ISO, remount the ISO, install the NGT software again, and then upgrade to AOS 4.6 or a later version.
  4. To launch a VM console window, click the Launch Console action link.
    This opens a virtual network computing (VNC) client and displays the console in a new tab or window. This option is available only when the VM is powered on. The VM power options that you access from the Power Off Actions action link below the VM table can also be accessed from the VNC console window. To access the VM power options, click the Power button at the top-right corner of the console window.
    Note: A VNC client may not function properly on all browsers. Some keys are not recognized when the browser is Google Chrome. (Firefox typically works best.)
  5. To start (or shut down) the VM, click the Power on (or Power off ) action link.

    Power on begins immediately. If you want to shut down the VMs, you are prompted to select one of the following options:

    • Power Off . Hypervisor performs a hard shut down action on the VM.
    • Reset . Hypervisor performs an ACPI reset action through the BIOS on the VM.
    • Guest Shutdown . Operating system of the VM performs a graceful shutdown.
    • Guest Reboot . Operating system of the VM performs a graceful restart.
    Note: The Guest Shutdown and Guest Reboot options are available only when VMware guest tools are installed.
  6. To pause (or resume) the VM, click the Suspend (or Resume ) action link. This option is available only when the VM is powered on.
  7. To clone the VM, click the Clone action link.

    This displays the Clone VM dialog box, which includes the same fields as the Create VM dialog box. A cloned VM inherits most of the configuration (except the name) of the source VM. Enter a name for the clone and then click the Save button to create the clone. You can optionally override some of the configurations before clicking the Save button. For example, you can override the number of vCPUs, memory size, boot priority, NICs, or the guest customization.

    Note:
    • You can clone up to 250 VMs at a time.
    • In the Clone window, you cannot update the disks.
    Figure. Clone VM Window

  8. To modify the VM configuration, click the Update action link.
    The Update VM dialog box appears, which includes the same fields as the Create VM dialog box. Modify the configuration as needed (see Creating a VM (ESXi)), and in addition you can enable Flash Mode for the VM.
    Note: If you delete a vDisk attached to a VM and snapshots associated with this VM exist, space associated with that vDisk is not reclaimed unless you also delete the VM snapshots.
    1. Click the Enable Flash Mode check box.
      • After you enable this feature on the VM, the status is updated in the VM table view. To view the status of individual virtual disks (disks that are flashed to the SSD), go to the Virtual Disks tab in the VM table view.
      • You can disable the Flash Mode feature for individual virtual disks. To update the Flash Mode for individual virtual disks, click the update disk icon in the Disks pane and deselect the Enable Flash Mode check box.
      Figure. Update VM Resources

      Figure. Update VM Resources - VM Disk Flash Mode

  9. To delete the VM, click the Delete action link. A window prompt appears; click the OK button to delete the VM.
    The deleted VM disappears from the list of VMs in the table. You can also delete a VM that is already powered on.

vDisk Provisioning Types in VMware with Nutanix Storage

You can specify the vDisk provisioning policy when you perform certain VM management operations like creating a VM, migrating a VM, or cloning a VM.

Traditionally, a vDisk is provisioned either with its full space allocated up front (a thick disk) or with space allocated on an as-needed basis (a thin disk). A thick disk provisions its space using either the lazy-zeroed or the eager-zeroed disk format.

For traditional storage systems, thick eager-zeroed disks provide the best performance of the three provisioning types, thick lazy-zeroed disks provide the second best, and thin disks provide the least. However, this does not apply to the modern storage architecture of Nutanix systems.

Nutanix uses a thick Virtual Machine Disk (VMDK) to reserve the storage space using the vStorage APIs for Array Integration (VAAI) reserve space API.

On a Nutanix system, there is no performance difference between thin and thick disks. This means that a thick eager zeroed virtual disk has no performance benefits over a thin virtual disk.

When a thick disk (VMDK) is specified in the configuration, the resulting disk on Nutanix storage behaves the same whether it is thin or thick (despite the configuration differences).

Note: A thick-disk reservation only reserves disk space; on Nutanix there is no performance reason to provision a thick disk. Within a single Nutanix container, even when a thick disk is provisioned, no disk space is allocated to write zeroes, so there is no requirement to provision thick disks.

When using the up-to-date VAAI for cloning operations, the following behavior is expected:

  • When cloning any type of disk format (thin, thick lazy zeroed or thick eager zeroed) to the same Nutanix datastore, the resulting VM will have a thin disk regardless of the explicit choice of a disk format in the vSphere client.

    Nutanix uses a thin provisioned disk because a thin disk performs the same as a thick disk in the system, and the thin disk prevents disk space from being wasted. In the cloning scenario, Nutanix does not carry the reservation property from the source to the destination when creating a fast clone on the same datastore. This prevents space wastage due to unnecessary reservation.

  • When cloning a VM to a different datastore, the destination VM will have the disk format that you specified in the vSphere client.
    Important: A thick disk is shown as thick in ESXi, while within NDFS (Nutanix Distributed File System) it is stored as a thin disk with an additional configuration field.

Nutanix recommends using thin disks over any other disk type.
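To illustrate the point, the following hedged sketch compares the provisioned size of a VMDK with the space it actually consumes on a Nutanix NFS datastore; the datastore, folder, and file names are hypothetical.

  root@esx# ls -lh /vmfs/volumes/ds01/myvm/myvm-flat.vmdk
  root@esx# du -h /vmfs/volumes/ds01/myvm/myvm-flat.vmdk

For a thin-provisioned disk, the du value is typically much smaller than the ls value until the guest OS writes data.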

VM Migration

You can migrate a VM to an ESXi host in a Nutanix cluster. Usually the migration is done in the following cases.

  • Migrate VMs from existing storage platform to Nutanix.
  • Keep VMs running during disruptive upgrade or other downtime of Nutanix cluster.

In migrating VMs between Nutanix clusters running vSphere, the source host and NFS datastore are the ones presently running the VM. The target host and NFS datastore are the ones where the VM runs after migration. The target ESXi host and datastore must be part of a Nutanix cluster.

To accomplish this migration, you have to mount the NFS datastores from the target on the source. After the migration is complete, you must unmount the datastores and block access.
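The following esxcli sketch outlines that workflow with hypothetical values for the target container name and NFS server address (for example, an address on the target cluster that serves the NFS datastore); it also assumes the source host has been added to the target container's filesystem allowlist.

  root@esx# esxcli storage nfs add -H 10.10.10.50 -s /target-ctr -v target-ctr
  root@esx# esxcli storage nfs list
  root@esx# esxcli storage nfs remove -v target-ctr

Run the add and list commands on the source host before the migration, and the remove command after the migration completes.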

Migrating a VM to Another Nutanix Cluster

Before you begin

Before migrating a VM to another Nutanix cluster running vSphere, verify that you have provisioned the target Nutanix environment.

About this task

The shared storage feature in vSphere allows you to move both compute and storage resources from the source legacy environment to the target Nutanix environment at the same time without disruption. This feature also removes the need to configure file system allowlists on Nutanix.

You can use the shared storage feature through the migration wizard in the web client.

Procedure

  1. Log on to vCenter with the web client.
  2. Select the VM that you want to migrate.
  3. Right-click the VM and select Migrate .
  4. Under Select Migration Type , select Change both compute resource and storage .
  5. Select Compute Resource and then Storage and click Next .
    If necessary, change the disk format to the one that you want to use during the migration process.
  6. Select a destination network for all VM network adapters and click Next .
  7. Click Finish .
    Wait for the migration process to complete. The process performs the storage vMotion first, and then creates a temporary storage network over vmk0 for the period where the disk files are on Nutanix.

Cloning a VM

About this task

To clone a VM, you must enable the Nutanix VAAI plug-in. For steps to enable and verify the Nutanix VAAI plug-in, refer to KB-1868.

Procedure

  1. Log on to vCenter with the web client.
  2. Right-click the VM and select Clone .
  3. Follow the wizard to enter a name for the clone, select a cluster, and select a host.
  4. Select the datastore that contains source VM and click Next .
    Note: If you choose a datastore other than the one that contains the source VM, the clone operation uses the VMware implementation and not the Nutanix VAAI plug-in.
  5. If desired, set the guest customization parameters. Otherwise, proceed to the next step.
  6. Click Finish .

vStorage APIs for Array Integration

To improve the vSphere cloning process, Nutanix provides a vStorage API for array integration (VAAI) plug-in. This plug-in is installed by default during the Nutanix factory process.

Without the Nutanix VAAI plug-in, the process of creating a full clone takes a significant amount of time because all the data that comprises a VM is duplicated. This duplication also results in an increase in storage consumption.

The Nutanix VAAI plug-in efficiently makes full clones without reserving space for the clone. Read requests for blocks shared between parent and clone are sent to the original vDisk that was created for the parent VM. As the clone VM writes new blocks, the Nutanix file system allocates storage for those blocks. This data management occurs completely at the storage layer, so the ESXi host sees a single file with the full capacity that was allocated when the clone was created.

vSphere ESXi Hardening Settings

Configure the following settings in /etc/ssh/sshd_config to harden an ESXi hypervisor in a Nutanix cluster.
Caution: When hardening ESXi security, some settings may impact operations of a Nutanix cluster.
HostbasedAuthentication no
PermitTunnel no
AcceptEnv
GatewayPorts no
Compression no
StrictModes yes
KerberosAuthentication no
GSSAPIAuthentication no
PermitUserEnvironment no
PermitEmptyPasswords no
PermitRootLogin no

Match Address x.x.x.11,x.x.x.12,x.x.x.13,x.x.x.14,192.168.5.0/24
PermitRootLogin yes
PasswordAuthentication yes
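A minimal sketch for applying these settings from the ESXi shell (back up the file first, and keep an existing SSH session open from an allowed address until you confirm the new configuration works):

  root@esx# cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
  root@esx# vi /etc/ssh/sshd_config
  root@esx# /etc/init.d/SSH restart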

ESXi Host Upgrade

You can upgrade your host either automatically through Prism Element (1-click upgrade) or manually. For more information about automatic and manual upgrades, see ESXi Upgrade and ESXi Host Manual Upgrade respectively.

This section describes the Nutanix hypervisor support policy for vSphere and Hyper-V hypervisor releases. Nutanix provides hypervisor compatibility and support statements that you should review before planning an upgrade to a new release or applying a hypervisor update or patch:
  • Compatibility and Interoperability Matrix
  • Hypervisor Support Policy- See Support Policies and FAQs for the supported Acropolis hypervisors.

Review the Nutanix Field Advisory page also for critical issues that Nutanix may have uncovered with the hypervisor release being considered.

Note: You may need to log in to the Support Portal to view the links above.

The Acropolis Upgrade Guide provides steps that can be used to upgrade the hypervisor hosts. However, as noted in the documentation, the customer is responsible for reviewing the guidance from VMware or Microsoft, respectively, on other component compatibility and upgrade order (e.g. vCenter), which needs to be planned first.

ESXi Upgrade

These topics describe how to upgrade your ESXi hypervisor host through the Prism Element web console Upgrade Software feature (also known as 1-click upgrade). To install or upgrade VMware vCenter server or other third-party software, see your vendor documentation for this information.

AOS supports ESXi hypervisor upgrades that you can apply through the web console Upgrade Software feature (also known as 1-click upgrade).

You can view the available upgrade options, start an upgrade, and monitor upgrade progress through the web console. In the main menu, click the gear icon, and then select Upgrade Software in the Settings panel. You can see the current status of your software versions and start an upgrade.

VMware ESXi Hypervisor Upgrade Recommendations and Limitations

  • To install or upgrade VMware vCenter Server or other third-party software, see your vendor documentation.
  • Always consult the VMware web site for any vCenter and hypervisor installation dependencies. For example, a hypervisor version might require that you upgrade vCenter first.
  • If you have not enabled DRS in your environment and want to upgrade the ESXi host, you need to upgrade the ESXi host manually. For more information about upgrading ESXi hosts manually, see ESXi Host Manual Upgrade in the vSphere Administration Guide .
  • Disable Admission Control before upgrading ESXi on AOS; if it is enabled, the upgrade process fails. You can re-enable it for normal cluster operation after the upgrade.
Nutanix Support for ESXi Upgrades
Nutanix qualifies specific VMware ESXi hypervisor updates and provides a related JSON metadata upgrade file on the Nutanix Support Portal for one-click upgrade through the Prism web console Software Upgrade feature.

Nutanix does not provide ESXi binary files, only related JSON metadata upgrade files. Obtain ESXi offline bundles (not ISOs) from the VMware web site.

Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases. See the Nutanix hypervisor support statement in our Support FAQ. For updates that are made available by VMware that do not have a Nutanix-provided JSON metadata upgrade file, obtain the offline bundle and md5sum checksum available from VMware, then use the web console Software Upgrade feature to upgrade ESXi.
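As a hedged sketch (the bundle file name below is hypothetical), you can confirm that the downloaded offline bundle is intact by computing its MD5 hash locally and comparing it with the value published by VMware before uploading the bundle in Prism:

  $ md5sum VMware-ESXi-7.0U2a-depot.zip
  > Get-FileHash .\VMware-ESXi-7.0U2a-depot.zip -Algorithm MD5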

Mixing nodes with different processor (CPU) types in the same cluster
If you are mixing nodes with different processor (CPU) types in the same cluster, you must enable enhanced vMotion compatibility (EVC) to allow vMotion/live migration of VMs during the hypervisor upgrade. For example, if your cluster includes a node with a Haswell CPU and other nodes with Broadwell CPUs, open vCenter, enable the VMware enhanced vMotion compatibility (EVC) setting, and specifically enable EVC for Intel hosts.
CPU Level for Enhanced vMotion Compatibility (EVC)

AOS Controller VMs and Prism Central VMs require a minimum CPU micro-architecture version of Intel Sandy Bridge. For AOS clusters with ESXi hosts, or when deploying Prism Central VMs on any ESXi cluster: if you have set the vSphere cluster enhanced vMotion compatibility (EVC) level, the minimum level must be L4 - Sandy Bridge .
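A hedged PowerCLI sketch for checking the current EVC level of a cluster (the vCenter address and cluster name are hypothetical):

  > Connect-VIServer vcenter.example.com
  > Get-Cluster "NTNX-Cluster" | Select-Object Name, EVCMode

The reported EVCMode (typically intel-sandybridge or a later generation for Intel hosts) should correspond to at least the L4 - Sandy Bridge level noted above.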

vCenter Requirements and Limitations
Note: ENG-358564 You might be unable to log in to vCenter Server as the /storage/seat partition for vCenter Server version 7.0 and later might become full due to a large number of SSH-related events. See KB 10830 at the Nutanix Support portal for symptoms and solutions to this issue.
  • If your cluster is running the ESXi hypervisor and is also managed by VMware vCenter, you must provide vCenter administrator credentials and the vCenter IP address as an extra step before upgrading. Ensure that ports 80/443 are open between your cluster and your vCenter instance to successfully upgrade (a quick connectivity check is sketched after this list).
  • If you have just registered your cluster in vCenter, do not perform any cluster upgrades (AOS, Controller VM memory, hypervisor, and so on). Wait at least 1 hour before performing upgrades to allow cluster settings to be updated. Also, do not register the cluster in vCenter and perform any upgrades at the same time.
  • Cluster mapped to two vCenters. Upgrading software through the web console (1-click upgrade) does not support configurations where a cluster is mapped to two vCenters or where it includes host-affinity must rules for VMs.

    Ensure that enough cluster resources are available for live migration to occur and to allow hosts to enter maintenance mode.
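The following is a quick, hedged connectivity check you can run from any CVM (replace vcenter_ip_address with your vCenter IP; an HTTP response code such as 200 or 302 indicates that port 443 is reachable):

  nutanix@cvm$ curl -k -s -o /dev/null -w "%{http_code}\n" https://vcenter_ip_address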

Mixing Different Hypervisor Versions
For ESXi hosts, mixing different hypervisor versions in the same cluster is temporarily allowed for deferring a hypervisor upgrade as part of an add-node/expand cluster operation, reimaging a node as part of a break-fix procedure, planned migrations, and similar temporary operations.

Upgrading ESXi Hosts by Uploading Binary and Metadata Files

Before you begin

About this task

Do the following steps to download Nutanix-qualified ESXi metadata .JSON files and upgrade the ESXi hosts through Upgrade Software in the Prism Element web console. Nutanix does not provide ESXi binary files, only related JSON metadata upgrade files.

Procedure

  1. Before performing any upgrade procedure, make sure you are running the latest version of the Nutanix Cluster Check (NCC) health checks and upgrade NCC if necessary.
  2. Run NCC as described in Run NCC Checks .
  3. Log on to the Nutanix support portal and navigate to the Hypervisors Support page from the Downloads menu, then download the Nutanix-qualified ESXi metadata .JSON files to your local machine or media.
    1. The default view is All . From the drop-down menu, select Nutanix - VMware ESXi , which shows all available JSON versions.
    2. From the release drop-down menu, select the available ESXi version. For example, 7.0.0 u2a .
    3. Click Download to download the Nutanix-qualified ESXi metadata .JSON file.
    Figure. Downloads Page for ESXi Metadata JSON
  4. Log on to the Prism Element web console for any node in the cluster.
  5. Click the gear icon in the main menu, select Upgrade Software in the Settings page, and then click the Hypervisor tab.
  6. Click the upload the Hypervisor binary link.
  7. Click Choose File for the metadata JSON (obtained from Nutanix) and binary files (obtained from VMware), respectively, browse to the file locations, select the file, and click Upload Now .
  8. When the file upload is completed, click Upgrade > Upgrade Now , then click Yes to confirm.
    [Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on without upgrading, click Upgrade > Pre-upgrade . These checks also run as part of the upgrade procedure.
  9. Type your vCenter IP address and credentials, then click Upgrade .
    Ensure that you are using your Active Directory or LDAP credentials in the form of domain\username or username@domain .
    Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the Upgrade option is not displayed, because the software is already installed.
    The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation checks and uploads, through the Progress Monitor .
  10. On the LCM page, click Inventory > Perform Inventory to enable LCM to check, update and display the inventory information.
    For more information, see Performing Inventory With LCM in the Acropolis Upgrade Guide .

Upgrading ESXi by Uploading An Offline Bundle File and Checksum

About this task

  • Do the following steps to download a non-Nutanix-qualified (patch) ESXi upgrade offline bundle from VMware, then upgrade ESXi through Upgrade Software in the Prism Element web console.
  • Typically you perform this procedure to patch your version of ESXi and Nutanix has not yet officially qualified that new patch version. Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases.

Procedure

  1. From the VMware web site, download the offline bundle (for example, update-from-esxi6.0-6.0_update02.zip ) and copy the associated MD5 checksum. Ensure that this checksum is obtained from the VMware web site, not manually generated from the bundle by you.
  2. Save the files to your local machine or media, such as a USB drive or other portable media.
  3. Log on to the Prism Element web console for any node in the cluster.
  4. Click the gear icon in the main menu of the Prism Element web console, select Upgrade Software in the Settings page, and then click the Hypervisor tab.
  5. Click the upload the Hypervisor binary link.
  6. Click enter md5 checksum and copy the MD5 checksum into the Hypervisor MD5 Checksum field.
  7. Scroll down and click Choose File for the binary file, browse to the offline bundle file location, select the file, and click Upload Now .
    Figure. ESXi 1-Click Upgrade, Unqualified Bundle
  8. When the file upload is completed, click Upgrade > Upgrade Now , then click Yes to confirm.
    [Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on without upgrading, click Upgrade > Pre-upgrade . These checks also run as part of the upgrade procedure.
  9. Type your vCenter IP address and credentials, then click Upgrade .
    Ensure that you are using your Active Directory or LDAP credentials in the form of domain\username or username@domain .
    Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the Upgrade option is not displayed, because the software is already installed.
    The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation checks and uploads, through the Progress Monitor .

ESXi Host Manual Upgrade

If you have not enabled DRS in your environment and want to upgrade the ESXi host, you must upgrade the ESXi host manually. This topic describes all the requirements that you must meet before manually upgrading the ESXi host.

Tip: If you have enabled DRS and want to upgrade the ESXi host, use the one-click upgrade procedure from the Prism web console. For more information on the one-click upgrade procedure, see the ESXi Upgrade.

Nutanix supports the ability to patch upgrade the ESXi hosts with the versions that are greater than or released after the Nutanix qualified version, but Nutanix might not have qualified those releases. See the Nutanix hypervisor support statement in our Support FAQ.

Because ESXi hosts with different versions can co-exist in a single Nutanix cluster, upgrading ESXi does not require cluster downtime.

  • If you want to avoid cluster interruption, you must complete upgrading a host and ensure that the CVM is running before upgrading any other host. When two hosts in a cluster are down at the same time, all the data is unavailable.
  • If you want to minimize the duration of the upgrade activities and cluster downtime is acceptable, you can stop the cluster and upgrade all hosts at the same time.
Warning: By default, Nutanix clusters have redundancy factor 2, which means they can tolerate the failure of a single node or drive. Nutanix clusters with a configured option of redundancy factor 3 allow the Nutanix cluster to withstand the failure of two nodes or drives in different blocks.
  • Never shut down or restart multiple Controller VMs or hosts simultaneously.
  • Always run the cluster status command to verify that all Controller VMs are up before performing a Controller VM or host shutdown or restart.

ESXi Host Upgrade Process

Perform the following process to upgrade ESXi hosts in your environment.

Prerequisites and Requirements

Note: Use the following process only if you do not have DRS enabled in your Nutanix cluster.
  • If you are upgrading all nodes in the cluster at once, shut down all guest VMs and stop the cluster with the cluster stop command.
    Caution: There is downtime if you upgrade all the nodes in the Nutanix cluster at once. If you do not want downtime in your environment, you must ensure that only one CVM is shut down at a time in a redundancy factor 2 configuration.
  • If you are upgrading the nodes while keeping the cluster running, ensure that all nodes are up by logging on to a CVM and running the cluster status command. If any nodes are not running, start them before proceeding with the upgrade. Shut down all guest VMs on the node or migrate them to other nodes in the Nutanix cluster.
  • Disable email alerts in the web console under Email Alert Services or with the nCLI command.
    ncli> alerts update-alert-config enable=false
  • Run the complete NCC health check by using the health check command.
    nutanix@cvm$ ncc health_checks run_all
  • Run the cluster status command to verify that all Controller VMs are up and running, before performing a Controller VM or host shutdown or restart.
    nutanix@cvm$ cluster status
  • Place the host in maintenance mode by using the web client.
  • Log on to the CVM with SSH and shut down the CVM.
    nutanix@cvm$ cvm_shutdown -P now
    Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command, which ensures that the cluster is aware that the CVM is unavailable.
  • Start the upgrade by following the vSphere Upgrade Guide or by using vCenter Update Manager (VUM).

Upgrading ESXi Host

  • See the VMware Documentation for information about the standard ESXi upgrade procedures. If any problem occurs with the upgrade process, an alert is raised in the Alert dashboard.

Post Upgrade

Run the complete NCC health check by using the following command.

nutanix@cvm$ ncc health_checks run_all

vSphere Cluster Settings Checklist

Review the following checklist of the settings that you have to configure to successfully deploy vSphere virtual environment running Nutanix Enterprise cloud.

vSphere Availability Settings

  • Enable host monitoring.
  • Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster.

    For more information about setting the percentage of cluster resources reserved as failover spare capacity, see vSphere HA Admission Control Settings for Nutanix Environment.

  • Set the VM Restart Priority of all CVMs to Disabled .
  • Set the Host Isolation Response of the cluster to Power Off & Restart VMs .
  • Set the VM Monitoring for all CVMs to Disabled .
  • Enable datastore heartbeats by clicking Use datastores only from the specified list and choosing the Nutanix NFS datastore.

    If the cluster has only one datastore, click Advanced Options tab and add das.ignoreInsufficientHbDatastore with Value of true .
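    If you prefer to add this advanced option with PowerCLI instead of the web client, the following is a hedged sketch (the vCenter address and cluster name are hypothetical; verify the cmdlet behavior in your PowerCLI version):

      > Connect-VIServer vcenter.example.com
      > New-AdvancedSetting -Entity (Get-Cluster "NTNX-Cluster") -Type ClusterHA -Name "das.ignoreInsufficientHbDatastore" -Value "true" -Confirm:$false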

vSphere DRS Settings

  • Set the Automation Level on all CVMs to Disabled .
  • Select Automation Level to accept level 3 recommendations.
  • Leave power management disabled.

Other Cluster Settings

  • Configure advertised capacity for the Nutanix storage container (total usable capacity minus the capacity of one node for replication factor 2, or minus the capacity of two nodes for replication factor 3). A worked example follows this list.
  • Store VM swapfiles in the same directory as the VM.
  • Enable enhanced vMotion compatibility (EVC) in the cluster. For more information, see vSphere EVC Settings.
  • Configure Nutanix CVMs with the appropriate VM overrides. For more information, see VM Override Settings.
  • Check Nonconfigurable ESXi Components. Modifying the nonconfigurable components may inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.
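Worked example for the advertised capacity setting above (hypothetical numbers): consider a 4-node cluster with 20 TiB of usable capacity per node, or 80 TiB total.

  Replication factor 2: advertised capacity = 80 TiB - 20 TiB (one node)  = 60 TiB
  Replication factor 3: advertised capacity = 80 TiB - 40 TiB (two nodes) = 40 TiB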