Foundation Platforms Submodule Open Source Software

Foundation Platforms 2.12.x

Product Release Date: 2022-09-28

Last updated: 2022-09-28

Open Source Software For Foundation Platforms Submodule 2.12.1

For Foundation Platforms Submodule 2.12.1 open source licensing details, see Open Source Licenses for Foundation Platforms 2.12.1.

Open Source Software For Foundation Platforms Submodule 2.12

For Foundation Platforms Submodule 2.12 open source licensing details, see Open Source Licenses for Foundation Platforms 2.12.

Frame Documentation

Frame Hosted

Last updated: 2022-02-21

Frame Documentation

For Frame documentation, see https://docs.frame.nutanix.com/

Karbon Platform Services Administration Guide

Karbon Platform Services Hosted

Last updated: 2022-11-24

Karbon Platform Services Overview

Karbon Platform Services is a Kubernetes-based multicloud platform as a service that enables rapid development and deployment of microservice-based applications. These applications can range from simple stateful containerized applications to complex web-scale applications across any cloud.

In its simplest form, Karbon Platform Services consists of a Service Domain encapsulating a project, application, and services infrastructure, and other supporting resources. It also incorporates project and administrator user access and cloud and data pipelines to help converge edge and cloud data.

With Karbon Platform Services, you can:

  • Quickly build and deploy intelligent applications across a public or private cloud infrastructure.
  • Connect various mobile, highly distributed data sensors (like video cameras, temperature or pressure sensors, streaming devices, and so on) to help collect data.
  • Create intelligent applications using data connectors and machine learning modules (for example, implementing object recognition) to transform the data. An application can be as simple as text processing code or it could be advanced code implementing AI by using popular machine learning frameworks like TensorFlow.
  • Push this data to your Service Domain or the public cloud to be stored or otherwise made available.

This data can be stored at the Service Domain or published to the cloud. You can then create intelligent applications using data connectors and machine learning modules to consume the collected data. These applications can run on the Service Domain or the cloud where you have pushed collected data.

Nutanix provides the Service Domain initially as a VM appliance hosted in an AOS AHV or ESXi cluster. You manage infrastructure, resources, and Karbon Platform Services capabilities in a console accessible through a web browser.

Cloud Management Console and User Experience

As part of initial and ongoing configuration, you can define two user types: an infrastructure administrator and a project user. The cloud management console provides an intuitive experience for both infrastructure administrators and project users.

  • Admin Console drop-down menu for infra admins and Home Console drop-down menu for project users. Both menus provide a list of all current projects and 1-click access to an individual project. In the navigation sidebar, Projects also provides a navigation path to the overall list of projects.
  • Vertical tabs for most pages and dashboards. For example, clicking Administration > Logging shows the Logging dashboard with related task and Alert tabs. The left navigation menu remains persistent to help eliminate excessive browser refreshes and help overall navigation.
  • Simplified Service Domain Management . In addition to the standard Service Domain list showing all Service Domains, a consolidated dashboard for each Service Domain is available from Infrastructure > Service Domain . Karbon Platform Services also simplifies upgrade operations available from Administration > Upgrades . For convenience, infra admins can choose when to download available versions to all or selected Service Domains, when to upgrade them, and check status for all or individual Service Domains.
  • Updated Workflows for Infrastructure Admins and Project Users . Apart from administration operations such as Service Domain, user, and resource management, the infra admin and project user workflow is project-centric. A project encapsulates everything required to successfully implement and monitor the platform.
  • Updated GPU/vGPU Configuration Support . If your Service Domain includes access to a GPU/vGPU, you can choose its use case. When you create or update a Service Domain, you can specify exclusive access by any Kubernetes app or data pipeline or by the AI Inferencing API (for example, if you are using ML Models).

Built-In App Services, Logging, and Alerting

Karbon Platform Services includes these ready-to-use built-in services, which provide an advantage over self-managed services:

  • Secure by design.
  • No life cycle management. Service Domains do not require any user intervention to maintain or update service versions, patches, or other hands-on activities. For ease of use, all projects using the Service Domain use the same service version.
  • Dedicated service resources. Your project uses isolated service resources. Service resources are not shared with other projects. Applications within a project are not required to authenticate to use the service.
  • Service-specific alerts. Like other Karbon Platform Services features, the Service Domain helps monitor service health and raises service-specific alerts.

App Runtime Service

These services are enabled by default on each Service Domain. All services now have monitoring and status capabilities.

  • Kubernetes Apps. Container-as-a-Service to manage and run Kubernetes apps without worrying about Kubernetes complexity and having to manage the underlying infrastructure.
  • Functions and Data Pipelines. Function-as-a-Service to run serverless functions. Invoke functions based on data triggers and publish to service end points.
  • AI Inferencing - updated in this release. ML model management and AI Inferencing runtime, pooled through an abstraction of GPUs and hardware accelerators. Enable this service to use your machine learning (ML) models in your project.

Ingress Controller

Ingress controller configuration and management is now available from the cloud management console (as well as from the Karbon Platform Services kps command line). Options to enable and disable the Ingress controller are available in the user interface.

Traefik or Nginx-Ingress. Content-based routing, load balancing, SSL/TLS termination. If your project requires Ingress controller routing, you can choose the open source Traefik router or the NGINX Ingress controller to enable on your Service Domain. You can only enable one Ingress controller per Service Domain.
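
As a hedged illustration of the content-based routing an Ingress controller handles, the following minimal sketch shows a standard Kubernetes Ingress manifest that a project application might include in its YAML; the host, namespace, and Service name are placeholders, and it assumes the NGINX Ingress controller is the one enabled on the Service Domain.

    # demo-ingress.yaml (hypothetical example)
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-ingress          # placeholder name
      namespace: my-project       # placeholder project namespace
    spec:
      ingressClassName: nginx     # assumes NGINX Ingress is the enabled controller
      rules:
      - host: app.example.com     # placeholder host for content-based routing
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service  # placeholder Kubernetes Service in the project
                port:
                  number: 80

You could apply a manifest like this with kubectl over SSH access to the Service Domain (see Secure Shell (SSH) Access to Service Domains), for example: kubectl apply -f demo-ingress.yaml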

Service Mesh

Istio. Provides traffic management, secure connection, and telemetry collection for your applications.

Data Streaming | Messaging

  • Kafka. Persistent, high performance data streaming with publish/subscribe and queue based messaging, available for use within project applications and data pipelines running on a Service Domain hosted in your environment. Options to enable and disable Kafka are available in the user interface.
  • NATS. In-memory, high performance data streaming with publish/subscribe and queue based messaging, available for use within project applications and data pipelines.

Logging | Monitoring | Alerting | Audit Trail

  • Prometheus. Provides Kubernetes app monitoring and logging, alerting, metrics collection, and a time series data model (suitable for graphing).
  • Logging. Centralized real-time log monitoring, log bundle collection, and external log forwarding.
  • Log forwarding policies for all users. Create, edit, and delete log forwarding policies to help make collection more granular, and then forward those Service Domain logs to the cloud - for example, AWS CloudWatch.
  • Audit trail. Karbon Platform Services maintains an audit trail of any operations performed by users in the last 30 days and displays them in an Audit Log dashboard in the cloud management console.

Infrastructure Administrator Role

Karbon Platform Services allows and defines two user types: an infrastructure administrator and a project user. An infrastructure administrator can create both user types.

The infrastructure administrator creates and administers most aspects of Karbon Platform Services. The infrastructure administrator provisions physical resources, Service Domains, services, data sources, and public cloud profiles. This administrator allocates these resources by creating projects and assigning project users to them. This user has create/read/update/delete (CRUD) permissions for:

  • Categories
  • Cloud profiles
  • Container registry profiles
  • Data sources
  • Service Domains
  • Projects
  • Users
  • Logs

When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project.

When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.

Figure. Infrastructure Administrator Role Capabilities

Project User Role

A project user can view and use projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and Kubernetes applications.

The project user has project-specific create/read/update/delete (CRUD) permissions: this user can create, read, update, and delete the following and associate them with an existing project:

  • Kubernetes Apps
  • Functions (scripts)
  • Data pipelines
  • Runtime environments

Figure. Project User

Naming Guidelines

Data Pipelines and Functions
  • When you create, clone, or edit a function, you can define one or more parameters. When you create a data pipeline, you define values for the parameters when you specify the function in the pipeline.
  • Data pipelines can share functions, but you can specify unique parameter values for the function in each data pipeline.
Service Domain, Data Source, and Data Pipeline Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed. For example, my.service-domain is a valid Service Domain name.
  • Maximum length of 63 characters
Data Source Topic and Field Naming
Topic and field naming must be unique across the same data source types. You are allowed to duplicate names across different data source types. For example, an MQTT topic name and RTSP protocol stream name can share /temperature/frontroom.
  • An MQTT topic name must be unique when creating an MQTT data source. You cannot duplicate or reuse an MQTT topic name for multiple MQTT data sources.
  • An RTSP protocol stream name must be unique when creating an RTSP data source. You cannot duplicate or reuse an RTSP protocol stream name for multiple RTSP data sources.
  • A GigE Vision data source name must be unique when creating a GigE Vision data source. You cannot duplicate or reuse a GigE Vision data source name for multiple GigE Vision data sources.
Container Registry Profile Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed.
  • Maximum length of 200 characters
All Other Resource Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed. For example, my.service-domain is a valid name.
  • Maximum length of 200 characters
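
As a hedged aside, the rules above map to a simple pattern check. The following minimal shell sketch validates a Service Domain name (63-character limit; other resources allow 200), assuming the characters between the first and last are also limited to lowercase alphanumerics, dashes, and dots:

    $ name="my.service-domain"
    $ if [[ ${#name} -le 63 && "$name" =~ ^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$ ]]; then echo "valid"; else echo "invalid"; fi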

Karbon Platform Services Cloud Management Console

The web browser-based Karbon Platform Services cloud management console enables you to manage infrastructure and related projects, with specific management capability dependent on your role (infrastructure administrator or project user).

You can log on with your My Nutanix or local user credentials.

Logging On to The Cloud Management Console

Before you begin

  • The supported web browser is the current and two previous versions of Google Chrome.
  • If you are logging on for the first time, you might experience a guided onboarding workflow.
  • After three failed login attempts in one minute, you are locked out of your account for 30 minutes.
  • Users without My Nutanix credentials log on as a local user.

Procedure

  1. Open https://karbon.nutanix.com/ in a web browser.
  2. Choose one of the following to log on:
    • Click Login with My Nutanix and log on with your My Nutanix credentials.
    • Click Log In with Local User Account and log on with your project user or infrastructure administrator user name and password in the Username / Password fields.
    The web browser displays a role-specific dashboard.

Infrastructure Admin View

The default view for an infrastructure administrator is the Dashboard . Click the menu button in the view to expand and display all available pages in this view.

Figure. Default Infrastructure Administrator View

Admin Console / Projects
Drop-down menu that lets you quickly navigate to the Admin Console (home Dashboard) or any project.
Dashboard
Customized widget panel view of your infrastructure, projects, Service Domains, and alerts. Includes the Onboarding Dashboard panel, with links to create projects and project components such as Service Domains, cloud profiles, categories, and so on. Click Projects or Infrastructure to change the user dashboard view.
Services
Managed services dashboard. Add a service such as Istio to a project.
Alerts
Displays the Alerts page, which displays a sortable table of alerts associated with your Karbon Platform Services deployment.
Audit Trail
Access the Audit Log dashboard to view the most recent events or tasks performed in the console.
Projects
  • Displays a list of all projects.
  • Create projects from this page.
Infrastructure Submenu
Service Domains
  • Displays a list of available Service Domains.
  • Add an internet-connected Service Domain device or appliance.
  • You can create one or more Service Domains depending on your requirements and manage them from the Karbon Platform Services management console.
Data Sources and IoT Sensors
  • Displays a list of available or created data sources and IoT sensors.
  • Create and define a data source (sensor or gateway) and associate it with a Service Domain.
Administration Submenu
Categories
  • Displays a list of available categories.
  • Create and define a category to logically group Service Domains, data sources, and other items.
Cloud Profiles
  • Displays a list of cloud profiles.
  • Add and define a cloud profile (Amazon Web Services, Google Cloud Platform, and so on) where acquired data is to be stored.
Container Registry Profiles
  • Displays a list of Docker container registry profiles.
  • Add and define a Docker container registry profile. This profile can optionally be associated with an existing cloud profile.
Users
  • Displays a list of users.
  • Create infrastructure administrators and project users.
System Logs
Run Log Collector on one or more connected Service Domains and then download the collected log bundle.
Upgrades
Apply software updates for your Service Domain when updates are available.
Help
Provides a link to the latest online documentation.

Project User View

The default view for a project user is the Dashboard .

  • Click the menu button in the view to expand and display all available pages in this view.
Figure. Default Project User View

Dashboard
Customized widget panel view of your infrastructure, projects, Service Domains, and alerts. Includes the Onboarding Dashboard panel, with links to create projects and project components such as Service Domains, cloud profiles, categories, and so on. The user view displays Projects for project users.
Projects
  • Displays a list of all projects created by the infrastructure administrator.
  • Edit an existing project.
Alerts
Displays the Alerts page, which displays a sortable table of alerts associated with your Karbon Platform Services deployment.

Kubernetes Apps, Logging, and Services

Kubernetes Apps
  • Displays a list of available Kubernetes applications.
  • Create an application and associate it with a project.
Functions and Data Pipelines
  • Displays a list of available data pipelines.
  • Create a data pipeline and associate it with a project.
  • Create a visualization, a graphical representation of a data pipeline including data sources and Service Domain or cloud data destinations.
  • Displays a list of scripts (code used to perform one or more tasks) available for use in a project.
  • Create a function by uploading a script, and then associate it with a project.
  • Displays a list of available runtime environments.
  • Create a runtime environment and associate it with a project and a container registry.
AI Inferencing
  • Create and display machine learning models (ML Models) available for use in a project.
Services and Related Alerts
  • Kafka. Configured and deployed data streaming and messaging topics.
  • Istio. Defined secure traffic management service.
  • Prometheus. Defined app monitoring and metrics collection.
  • Nginx-Ingress. Configured and deployed ingress controllers.

Viewing Dashboards

After you log on to the cloud management console, you are presented with the main Dashboard page as a role-specific landing page. You can also show this information at any time by clicking Dashboard under the main menu.

Each Karbon Platform Services element (Service Domain, function, data pipeline, and so on) includes a dashboard page that includes information about that element. It might also include information about elements associated with that element.

The element dashboard view also enables you to manage that element. For example, click Projects and select a project. Click Kubernetes Apps and click an application in the list. The application dashboard is displayed along with an Edit button.

Example: Managing a Service Domain From Its Dashboard

About this task

To complete this task, log on to the cloud management console.

To delete a service domain that does not have any associated data sources, click Infrastructure > Service Domains , select a Service Domain from the list, then click Remove . Deleting a multinode Service Domain deletes all nodes in that Service Domain.

Procedure

  1. Click Infrastructure > Service Domains , then click a Service Domain in the list.
    The Service Domain dashboard is displayed along with an Edit button.
    Figure. Service Domain Dashboard

  2. Click Edit to update Service Domain properties, such as its name, serial number, network settings, and so on. See Adding a Single Node Service Domain for information about these fields.
  3. Click Nodes to view nodes associated with the service domain.
  4. Click Data Sources .
    1. View any associated data sources.
    2. Delete a data source: Select it, then click Remove .
    3. Click Add Data Source to add another data source to the Service Domain. See Adding a Data Source and IoT Sensor.
  5. Click Alerts to show available or active alerts.

Quick Start Menu

The Karbon Platform Services management console includes a Quick Start menu next to your user name. Depending on your role (infrastructure administrator or project user), you can quickly create infrastructure or apps and data. Scroll down to see items you can add for use with projects.

Figure. Quick Start Menu

Getting Started - Infrastructure Administrator

These tasks assume you have already done the following. Ensure that any network-connected devices are assigned static IP addresses.

  1. Deployed data sources (sensors, gateways, other input devices) reachable by the Service Domain.
  2. Deployed an internet-connected Service Domain as a VM hosted in an AHV cluster.
  3. [Optional] Created an account with a supported cloud provider, where acquired data is transmitted for further processing.
  4. [Optional] Created a container registry profile (public or private/on-premise registry).
  5. Obtained infrastructure administrator credentials for the Karbon Platform Services management console.

The Quick Start Menu lists the common onboarding tasks for the infrastructure administrator. It includes links to infrastructure-related resource pages. You can also go directly to any infrastructure resource from the Infrastructure menu item. As the infrastructure administrator, you need to create the following minimum infrastructure.

  1. Add a Service Domain and add categories.
  2. Add a data source to associate with a Service Domain.

Adding a Single Node Service Domain

Create and deploy a Service Domain cluster that consists of a single node.

About this task

To add (that is, create and deploy) a multinode Service Domain consisting of three or more nodes, see Manage a Multinode Service Domain.

If you deploy the Service Domain node as a VM, the procedure described in Creating and Starting the Service Domain VM can provide the VM ID (serial number) and IP address.

  • After adding a Service Domain, the status and dot color shows Service Domain status and can take a few minutes to update. See also Example: Managing a Service Domain From Its Dashboard.
    • Healthy (green). Service domain nodes started and connected.
    • Not Onboarded (gray). Service domain nodes have not started and are not yet connected. Might be transitioning to Healthy.
    • Disconnected (red). Service domain nodes have not started and are not connected. VM might be offline or unavailable.

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > + Service Domain .
  3. Select Any other On-prem or Cloud Environment and click Next .
  4. Name your Service Domain.
    • Starts and ends with a lowercase alphanumeric character
    • Maximum length of 63 lowercase alphanumeric characters
    • Dash (-) and dot (.) characters are allowed. For example, my-servicedomain.contoso.com is a valid Service Domain name.
  5. Select Single Node to create a single-node Service Domain.
    Note: You cannot expand this Service Domain later by adding nodes.
  6. Click Add Node and enter the following node details:
    1. Serial Number of your Service Domain node VM.
      • If a Nutanix AOS cluster hosts your Service Domain node VM: in the cluster Prism web console, open the VM page, select the Service Domain node VM, and note the ID.
      • You can also display the serial number by opening this URL in a browser. Use your Service Domain node VM IP address: http://service-domain-node-ip-address:8080/v1/sn
    2. Name the node.
    3. IP Address of your Service Domain node VM.
    4. Subnet Mask and Gateway . Type the subnet mask and gateway IP address in these fields.
    5. Click the check mark icon. To change the details, hover over the ellipses menu and click Edit .
  7. Click Add Category .
    See Creating a Category. You can create one or more categories to add them to a Service Domain.
    1. Select a category and its associated value.
    2. Click Add to select another category and value.
  8. Click Next .
  9. Enter environment variables as one or more key-value pairs for the Service Domain. Click Add Key-Value Pair to add additional pairs.
    You can set environment variables and associated values for each Service Domain as a key-value pair, which are available for use in Kubernetes apps.

    For example, you could set a secret variable key named SD_PASSWORD with a value of passwd1234 . For an example of how to use existing environment variables for a Service Domain in application YAML, see Using Service Domain Environment Variables - Example. See also Configure Service Domain Environment Variables.

  10. If your Service Domain includes a GPU/vGPU, choose its usage case.
    1. To allow access by any Kubernetes app or data pipeline, choose Use GPU for Kubernetes Apps and Data Pipelines .
    2. To allow access by AI Inferencing API (for example, if you are using ML Models), select Use GPU for AI Inferencing .
  11. To provide limited secure shell (SSH) administrator access to your Service Domain to manage Kubernetes pods, select Enable SSH Access .
    SSH Service Domain access enables you to run Kubernetes kubectl commands to help you with application development, debugging, and pod troubleshooting (see the sketch after this procedure). See Secure Shell (SSH) Access to Service Domains.
  12. Click Add .
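
If you enabled SSH access in step 11, a typical first check after the Service Domain reports Healthy is to list its pods with kubectl; a minimal sketch (the namespaces and pod names in your deployment will differ):

    $ kubectl get pods --all-namespaces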

What to do next

See Adding a Data Source and IoT Sensor.
See [Optional] Creating Your Cloud Profile

Creating Your Cloud Profile

Procedure

  1. Click Administration > Cloud Profiles > Create .
  2. Select a Cloud Type (your cloud service provider).
  3. If you selected Amazon Web Services , complete the following fields:
    1. Cloud Profile Name . Name your profile.
    2. Cloud Profile Description . Describe the profile - for example, profile for car recognition apps. Up to 200 alphanumeric characters are allowed.
    3. Access Key . Enter the AWS access key ID.
    4. Secret . Enter the AWS secret access key.
  4. If you selected Azure , complete the following fields:
    1. Cloud Profile Name . Name your profile.
    2. Cloud Profile Description . Describe the profile - for example, profile for car recognition apps. Up to 200 alphanumeric characters are allowed.
    3. Storage Account Name .
    4. Storage Key . Copy your key into this field.
  5. If you selected Google Cloud Platform , complete the following fields:
    1. Cloud Profile Name . Name your profile.
    2. Cloud Profile Description . Describe the profile - for example, profile for car recognition apps. Up to 200 alphanumeric characters are allowed.
    3. GCP Service Account Info JSON . Download your Google Cloud Platform service account key as a JSON file, open the file in a text editor, and copy its contents into this field.
  6. Click Create .
  7. On the Your Cloud Profile page that is now displayed, you can add another profile by clicking Add Cloud Profile .
  8. You can also Edit profile properties or Remove a profile from Your Cloud Profile .

Creating a Category

About this task

Create categories of grouped attributes you can specify when you create a data source or pipeline.

Procedure

  1. Click Administration > Categories > Create .
  2. Name your category. Up to 200 alphanumeric characters are allowed.
  3. Purpose . Describe the category. Up to 200 alphanumeric characters are allowed.
  4. Click Add Value and type a category component name in the text field.
  5. Click the check icon to add the data source field.
  6. Click Add Value to add more category components in the text field, clicking the check icon to add each one.
  7. Click Create when you are finished.
  8. Repeat to add more categories.

What to do next

  • Use these categories as attributes when adding a data source or data pipeline.
  • See also Adding a Single Node Service Domain or Manage a Multinode Service Domain, where you can add these categories to a Service Domain.

Adding a Data Source and IoT Sensor

You can add one or more data sources (a collection of sensors, gateways, or other input devices providing data) to associate with a Service Domain.

Each defined data source consists of the following:

  • Data source type (sensor, input device like a camera, or gateway) - the origin of the data
  • Communication protocol typically associated with the data source
  • Authentication type to secure access to the data source and data
  • One or more fields specifying the data extraction method - the data pipeline specification
  • Categories, which are attributes (metadata) you define to associate with the captured data

Add a Data Source - MQTT

About this task

Add one or more data sources using the Message Queuing Telemetry Transport lightweight messaging protocol (MQTT) to associate with a Service Domain.

Certificates downloaded from the cloud management console have an expiration date 30 years from the certificate creation date. Download the certificate ZIP file each time you create an MQTT data source. Nutanix recommends that you use a unique set of these security certificates and keys for each MQTT data source you add.

When naming entities, up to 200 alphanumeric characters are allowed.

Procedure

  1. Click Infrastructure > Data Sources and IoT Sensors > Add Data Source .
  2. Select a data source type: Sensor or Gateway .
    1. Name your data source.
    2. Associated Service Domain . Select a Service Domain for your data source.
    3. Select Protocol > MQTT .
    4. Authentication type . When you select the MQTT protocol, the Certificate authentication type is selected automatically to authenticate the connection between your data source and the Service Domain.
    5. Click Generate Certificates , then click Download to download a ZIP file that contains the X.509 sensor certificate (public key), its private key, and the root CA certificates.
      See Certificates Used with MQTT Data Sources.
  3. Click Next to go to Data Extraction to specify the data fields to extract from the data source.

Data Extraction - MQTT

Procedure

  1. Click Add New Field .
    1. Name . Enter a relevant name for the data source field.
    2. Add MQTT Topic . Enter a topic (a case-sensitive UTF-8 string). For example, /temperature/frontroom for a temperature sensor located in a specific room.
      An MQTT topic name must be unique when creating an MQTT data source. You cannot duplicate or reuse an MQTT topic name for multiple MQTT data sources.
    3. Click the check icon to add the data source field.
  2. Click Add New Field to add another topic or click Next to go to the Category Assignment panel.

Category Assignment - MQTT

About this task

Categories enable you to define metadata associated with captured files from the data source.

Procedure

  1. Select All Fields from Select Fields .
  2. Select Category > Data Type .
  3. Select one of the Categories (as defined by Categories you have created) in the third selection menu.
  4. Click Add .

Add a Data Source - RTSP

About this task

Add one or more data sources using the Real Time Streaming protocol (RTSP) to associate with a Service Domain.

Procedure

  1. Click Infrastructure > Data Sources > Add Data Source .
  2. Select a data source type: Sensor or Gateway .
    1. Name your data source. Up to 200 alphanumeric characters are allowed.
    2. Associated Service Domain . Select a service domain for your data source.
    3. Select Protocol > RTSP .
    4. Authentication type . When you select RTSP , the Username and Password authentication type is selected automatically to authenticate the connection between your data source and the service domain.
    5. IP address . Enter the sensor / gateway IP address.
    6. Port . Specify a port number for the streaming device. Default port is 554.
    7. User Name . Enter the user name credential for the data source.
    8. Password . If your data source requires it, enter the password credential for the data source. Click Show to display the password as you type.
  3. Click Next to go to Data Extraction to specify the data fields to extract from the data source.
    These steps create an RTSP URL in the format rtsp://username:password@ip-address/ . For example: rtsp://userproject2: (a complete example follows the Data Extraction - RTSP procedure).

    In the next step, you will specify one or more streams.

Data Extraction - RTSP

Procedure

  1. Click Add New Field on the Data Extraction page depending on your data source type.
    1. Name . Enter a relevant name for the data source.
    2. Complete the RTSP URL field by specifying a named protocol stream. For example, BackRoomCam1 .
      An RTSP protocol stream name must be unique when creating an RTSP data source. You cannot duplicate or reuse an RTSP protocol stream name for multiple RTSP data sources.
    3. Click the check icon to add the data source field.
  2. Click Add New Field to add another stream or click Next to go to the Category Assignment panel.
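
Putting the connection details and the stream name together, the resulting URL has a form such as rtsp://username:password@10.0.0.20:554/BackRoomCam1 (the credentials and address here are placeholders). As a hedged aside, you can verify that such a stream plays outside the platform with a generic tool like ffprobe, which is not part of Karbon Platform Services:

    $ ffprobe "rtsp://username:password@10.0.0.20:554/BackRoomCam1"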

Category Assignment - RTSP

About this task

Categories enable you to define metadata associated with captured files from the data source.

Procedure

  1. Select All Fields from Select Fields .
  2. Select Category > Data Type .
  3. Select one of the Categories (as defined by Categories you have created) in the third selection menu.
  4. Click Add .

Add a Data Source - GigE Vision

About this task

Add one or more data sources using the GigE Vision camera protocol to associate with a Service Domain.

Before you begin

GigE Vision data sources must be on the same subnet as the Service Domain.

Procedure

  1. Click Infrastructure > Data Sources and IoT Sensors > Add Data Source .
  2. Select a data source type: Sensor or Gateway .
    1. Name your data source. Up to 200 alphanumeric characters are allowed.
    2. Associated Service Domain . Select a Service Domain for your data source.
    3. Select Protocol > GigE Vision .
    4. Authentication type . When you select GigE Vision , the None authentication type is selected automatically.
    5. IP address . Enter the sensor / gateway IP address.
  3. Click Next to go to Data Extraction to specify the data fields to extract from the data source.

Data Extraction

Procedure

Click Add New Field on the Data Extraction page depending on your data source type.
  1. Name . Enter a relevant name for the data source.
  2. Click the check icon to add the data source.
    A GigE Vision data source name must be unique when creating a GigE Vision data source. You cannot duplicate or reuse a GigE Vision data source name for multiple GigE Vision data sources.

Category Assignment

About this task

Categories enable you to define metadata associated with captured files from the data source.

Procedure

  1. Select All Fields from Select Fields .
  2. Select Category > Data Type .
  3. Select one of the Categories (as defined by Categories you have created) in the third selection menu.
  4. Click Add .

Adding a Container Registry Profile

About this task

Create a new container registry profile or associate an existing cloud profile. This profile stores container images which can be public (for example, Docker Hub) or private (an on-premise registry).

Procedure

  1. Click Administration > Container Registry Profiles > Add Profile .
  2. To add a new profile:
    1. Select Add New .
    2. Name your profile.
      • Up to 200 alphanumeric characters are allowed
      • Starts and ends with a lowercase alphanumeric character
      • Dash (-) and dot (.) characters are allowed
    3. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    4. Container Registry Host Address . Provide a public or private registry host address. The host address can be in the format host.domain:port_number/repository_name
    For example, https://index.docker.io/v1/ or registry-1.docker.io/distribution/registry:2.1
    5. Username . Enter the user name credential for the profile.
    6. Password . Enter the password credential for the profile. Click Show to display the password as you type.
    7. Email Address . Add an email address in this field.
  3. To use an existing cloud profile that you have already added (see Creating Your Cloud Profile):
    1. Select Use Existing cloud profile :
    2. Select an existing profile from Cloud Profile .
    3. Enter the Name of your container registry profile.
      • Up to 200 alphanumeric characters are allowed
      • Starts and ends with a lowercase alphanumeric character
      • Dash (-) and dot (.) characters are allowed
    4. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    5. Server . Provide a server URL to the container registry in the format used by your cloud provider.
      For example, an Amazon AWS Elastic Container Registry (ECR) URL might be: https:// aws_account_id .dkr.ecr. region .amazonaws.com
  4. Click Create .

Creating Users

As an infrastructure administrator, you can create infrastructure users or project users. Users without My Nutanix credentials log on as a local user.

Before you begin

You can also do this step from the Dashboard panel Getting Started.

Procedure

  1. Click Administration > Users > Create .
  2. Name . Provide a name for the user.
  3. Email . Enter a user email address which will be the user name to log on to the cloud management console.
  4. Password . Enter a password to be used when logging on to the Karbon Platform Services management console.
  5. Select a user role.
    • Infrastructure Admin
    • User
  6. Click Create .
  7. Repeat to add more users.

Certificates Used with MQTT Data Sources

Each Service Domain image is preconfigured with security certificates and public/private keys.

When you create an MQTT data source, you generate and download a ZIP file that contains the X.509 sensor certificate (public key), its private key, and the root CA certificates. Install these components on the MQTT-enabled sensor device to securely authenticate the connection between the sensor device and the Service Domain. See your vendor documentation for the MQTT-enabled sensor device for certificate installation details.

Certificates downloaded from the Karbon Platform Services management console have an expiration date 30 years from the certificate creation date. Download the certificate ZIP file each time you create an MQTT data source. Nutanix recommends that you use a unique set of these security certificates and keys for each MQTT data source you add.
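
As a hedged illustration of how a sensor-side client might use these files, the following sketch publishes a reading with the open source mosquitto_pub client (not part of Karbon Platform Services); the file names, Service Domain address, TLS port 8883, and payload are assumptions, and the topic matches the example in Data Extraction - MQTT:

    $ mosquitto_pub -h service-domain-ip -p 8883 \
        --cafile ca.crt --cert sensor.crt --key sensor.key \
        -t /temperature/frontroom -m '{"value": 21.5}'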

Figure. Certificate Download When Creating a Data Source

Onboard and Manage Your Service Domain

The Karbon Platform Services cloud management console provides a rich administrative control plane to manage your Service Domain and its infrastructure. The topics in this section describe how to create, add, and upgrade a Service Domain.

Checking Service Domain Details

In the cloud management console, go to Infrastructure > Service Domains to add a VM-based Service Domain. You can also view health status, CPU/Memory/Storage usage, version details, and more information for every service domain.

Upgrading a Service Domain

In the cloud management console, go to Administration > Upgrades to upgrade your existing Service Domains. This page provides you with various levels of control and granularity over your maintenance process. At your convenience, download new versions for all or specific Service Domains and upgrade them with "1-click".

Onboarding a Multinode Service Domain By Using Nutanix Karbon

You can now onboard a multinode Service Domain by using Nutanix Karbon as your infrastructure provider to create a Service Domain Kubernetes cluster. To do this, use Karbon on Prism Central with the kps command line and cloud management console Create a Service Domain workflow. See Onboarding a Multinode Service Domain By Using Nutanix Karbon. (You can also continue to use other methods to onboard and create a Service Domain, as described in Onboarding and Managing Your Service Domain.)

For advanced Service Domain settings, the Nutanix public Github repository includes a README file describing how to use the command line and the required YAML configuration file for the cluster.

This public Github repository at https://github.com/nutanix/karbon-platform-services/tree/master/cli also describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.
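
For example, to browse these samples locally, you can clone the repository and list the cli directory:

    $ git clone https://github.com/nutanix/karbon-platform-services.git
    $ ls karbon-platform-services/cli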

About the Service Domain Image Files

The Karbon Platform Services Release Notes include information about any new and updated features for the Service Domain. You can create one or more Service Domains depending on your requirements and manage them from the Karbon Platform Services management console.

The Service Domain is available as a qcow disk image provided by Nutanix for hosting the VM in an AOS cluster running AHV.

The Service Domain is also available as an OVA disk image provided by Nutanix for hosting the VM in a non-Nutanix VMware vSphere ESXi cluster. To deploy a VM from an OVA file on vSphere, see the documentation at the VMware web site describing how to deploy a virtual machine from an OVA file for your ESXi version

Each Service Domain you create by using these images is configured with X.509 security certificates.

If your network requires that traffic flow through an HTTP/HTTPS proxy, see HTTP/HTTPS Proxy Support for a Service Domain VM.

Download the Service Domain VM image file from the Nutanix Support portal Downloads page. This table describes the available image file types.

Table 1. Service Domain Image Types
  • QCOW2. Image file for hosting the Service Domain VM on an AHV cluster.
  • OVA. Image file for hosting the Service Domain VM on vSphere.
  • EFI RAW compressed file. RAW file in GZipped TAR file format for bare metal installation, where the hosting machine is using an Extensible Firmware Interface (EFI) BIOS.
  • RAW compressed file. RAW file in GZipped TAR file format for bare metal installation, where the hosting machine is using a legacy or non-EFI BIOS.
  • AWS RAW uncompressed file. Uncompressed RAW file for hosting the Service Domain on Amazon Web Services (AWS).

Service Domain VM Resource Requirements

By default, in a single-VM deployment, the Service Domain requires these resources to support Karbon Platform Services features. You can download the Service Domain VM image file from the Nutanix Support portal Downloads page.

References
  • AHV Administration Guide, Host Network Management
  • Prism Web Console Guide, Network Management topic
  • Prism Web Console Guide, Virtual Machine Customization topic (cloud-init)
Table 1. Service Domain Infrastructure VM Cluster Requirements
  • Environment. AOS cluster running AHV (AOS-version-compatible version), where the Service Domain Infrastructure VM is running as a guest VM; or a VMware vSphere ESXi 6.0 or later cluster, where the Service Domain Infrastructure VM is running as a guest VM (created from an OVA image file provided by Nutanix). The OVA image as provided by Nutanix is running virtual hardware version 11.
  • vCPUs. 8 single core vCPUs.
  • Memory. 16 GiB memory. You might require more memory as determined by your applications.
  • Disk storage. Minimum 200 GB storage. The Service Domain Infrastructure VM image file provides an initial disk size of 100 GiB (gibibytes). You might require more storage as determined by your applications. Before first power on of the VM, you can increase (but not decrease) the VM disk size.
  • GPUs (optional). GPUs as required by any application using them.

Karbon Platform Services Port and Firewall Requirements

Service Domain Infrastructure VM Network Requirements and Recommendations

Note: For more information about ports and protocols, see the Ports and Protocols Reference.
Table 1. Network Requirements and Recommendations
  • Outbound ports. Allow connections for applications requiring outbound connectivity, and allow outbound port 443 for the websocket connection to the management console and cloud providers. Starting with Service Domain 2.2.0, Karbon Platform Services retrieves Service Domain package images from these locations; ensure that your firewall or proxy allows outbound Internet access to the following:
    • *.dkr.ecr.us-west-2.amazonaws.com - Amazon Elastic Container Registry
    • https://gcr.io and *.gcr.io - Google container registry
  • NTP. Allow an outbound connection to a network time protocol (NTP) server.
  • HTTPS proxy. The Service Domain Infrastructure VM supports a network configuration that includes an HTTPS proxy. Customers can configure such a proxy as part of a cloud-init based method when deploying Service Domain Infrastructure VMs.
  • Service Domain Infrastructure VM static IP address. The Service Domain Infrastructure VM requires a static IP address as provided through a managed network when hosted on an AOS / AHV cluster, with:
    • A configured network with one or more configured domain name servers (DNS) and optionally a DHCP server
    • Integrated IP address management (IPAM), which you can enable when creating virtual networks for VMs in the Prism web console
    • (Optional) A cloud-init script which specifies network details including a DNS server
  • Miscellaneous. The cloud-init package is included in the Service Domain VM image to enable support for Nutanix Calm and its associated deployment automation features.
  • Real Time Streaming Protocol (RTSP). Port 554 (default).
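
As a hedged aside, before deploying you can confirm from a machine on the network that will host the Service Domain VM that outbound HTTPS is allowed; a minimal sketch using curl against the management console URL and the Google container registry listed above:

    $ curl -sI https://karbon.nutanix.com | head -n 1
    $ curl -sI https://gcr.io | head -n 1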

Create and Onboard the Service Domain VM

Onboarding the Service Domain VM is a three-step process:

  1. For an AOS cluster running Nutanix AHV, log on to the Prism Element cluster web console and upload the disk image through the image service. See Uploading the Service Domain Image.
  2. Create and power on the Service Domain VM. See Creating and Starting the Service Domain VM.

    If your network requires that traffic flow through an HTTP/HTTPS proxy, see HTTP/HTTPS Proxy Support for a Service Domain VM.

  3. Add this Service Domain VM to the Karbon Platform Services cloud management console. See:
    • Adding a Single Node Service Domain. This VM acts as a single node that stores installation, configuration, and service/microservice data, metadata, and images, as well as providing Service Domain computing and cloud infrastructure management.
    • Adding a Multinode Service Domain. Create an initial three-node Service Domain to create a multinode Service Domain cluster.

See also:

  • Deploying the Service Domain Image on a Bare Metal Server
  • Deploying the Service Domain on Amazon Web Services

Uploading the Service Domain Image

How to upload the Service Domain VM disk image file on AHV running in an AOS cluster.

About this task

This topic describes how to initially install the Service Domain VM on an AOS cluster by uploading the image file. For details about your cluster AOS version and the procedures, see the Prism Web Console Guide.

To deploy a VM from an OVA file on vSphere, see the documentation at the VMware web site describing how to deploy a virtual machine from an OVF or OVA file for your ESXi version.

Procedure

  1. Log on as an administrator to the Prism Element web console through the Chrome web browser.
  2. Click the gear icon in the main menu and select Image Configuration .
    The Image Configuration window appears.
    Figure. Image Configuration Window

  3. Click Upload Image to launch the Create Image window.
    Do the following in the indicated fields:
    Figure. Create Image Window

    1. Name : Enter a name for the image.
    2. Annotation (optional): Enter a description for the image.
    3. Image Type (optional): Select the image type Disk .
    4. Storage Container : Select the default SelfServiceContainer .
    5. Image Source : Click Upload a file , then Choose File to select the Service Domain VM disk image file (that you downloaded from the Nutanix Support portal) from the file search window.
    6. When all the fields are correct, click the Save button.
      The Create Image window closes after uploading the file and the Image Configuration window reappears with the new disk image appearing in the list. This window also displays the uploading progress and shows the image State as INACTIVE until uploading completes, at which point the State changes to ACTIVE .

What to do next

See the topic Creating and Starting the Service Domain VM.

Creating and Starting the Service Domain VM

After uploading the Service Domain VM disk image file, create the Service Domain VM and power it on. After creating the Service Domain VM, note the VM IP address and ID in the VM Details panel. You will need this information to add your Service Domain in the Karbon Platform Services management console Service Domains page.
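
Once the VM is powered on, you can also read the ID (serial number) over HTTP by using the endpoint mentioned in Adding a Single Node Service Domain; a minimal sketch, where 10.0.0.50 stands in for your Service Domain VM IP address:

    $ curl http://10.0.0.50:8080/v1/sn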

About this task

This topic describes how to create the Service Domain VM on an AOS cluster and power it on. For details about your cluster's AOS version and VM management, see the Prism Web Console Guide.

To deploy a VM from an OVA file on vSphere, see the VMware documentation for your ESXi version.

The most recent requirements for the Service Domain VM are listed in the Karbon Platform Services Release Notes.

If your network requires that traffic flow through an HTTP/HTTPS proxy, you can use a cloud-init script. See HTTP/HTTPS Proxy Support for a Service Domain VM.

Procedure

  1. Log on as an administrator to your cluster's web console through the Chrome web browser.
  2. Click Home > VM , then click Create VM .
    The Create VM dialog box appears. You might need to scroll down to see everything that is displayed here.
    Figure. Create VM Dialog Box 1
    Figure. Create VM Dialog Box 2
    Figure. Create VM Dialog Box 3

  3. Do the following in the indicated fields.
    You might require more memory and storage as determined by your applications.
    1. Name : Enter a name for the VM.
    2. Description (optional): Enter a description for the VM.
    3. Use this VM as an agent VM : Do not select. Not used in this case.
    4. vCPU(s) : Enter the number of virtual CPUs to allocate to this VM. For Karbon Platform Services, enter 8 .
    5. Number of Cores per vCPU : Enter the number of cores assigned to each virtual CPU. For Karbon Platform Services, enter 1 .
    6. Memory : Enter the amount of memory (in GiBs) to allocate to this VM. For Karbon Platform Services, enter 16 .
  4. Click Add Disk to add the uploaded Karbon Platform Services image file.
    Figure. Add Disk Dialog

    1. Select Type > DISK .
    2. Select Operation > Clone from Image Service .
    3. Select Bus Type > SCSI .
    4. Select the Image you uploaded in Uploading the Service Domain Image.
    5. Keep the default Index selection.
    6. Click Add .
  5. Click Add New NIC to assign the VM network interface to a VLAN, then click Add .
  6. Select Legacy BIOS to start the VM using legacy BIOS firmware. This choice is selected by default on AHV clusters supporting legacy or UEFI boot firmware.
  7. (For GPU-enabled AHV clusters only) To configure GPU access, click Add GPU in the Graphics section, and then do the following in the Add GPU dialog box:
    Figure. Add GPU Dialog Box

    1. Configure GPU pass-through by clicking Passthrough , selecting the GPU that you want to allocate, and then clicking Add .
      If you want to allocate additional GPUs to the VM, repeat the procedure as many times as you need to. Make sure that all the allocated pass-through GPUs are on the same host. If all specified GPUs of the type that you want to allocate are in use, you can proceed to allocate the GPU to the VM, but you cannot power on the VM until a VM that is using the specified GPU type is powered off.
  8. Click Save .
    The VM creation task progress appears in Tasks at the top of the web console.
    Note: You might require more storage as determined by your applications. Before first power on of the Service Domain VM, you can increase (but not decrease) the VM disk size.
    • When the VM creation task is completed (the VM is created successfully), select the new Service Domain VM in the Table view, scroll to the bottom of the VM page, and click Update .
    • Scroll to the disk, click the pencil icon to edit the disk, and increase the disk Size , then click Update and Save .
  9. When the VM creation task is completed (the VM is created successfully), select the new Service Domain VM in the Table view, scroll to the bottom of the VM page, and Power On the VM.
    Note the VM IP address and ID in the VM Details panel. You will need this information to add your Service Domain in the Karbon Platform Services management console Service Domains page.
  10. If you are creating a multinode Service Domain , repeat these steps to create at least two more VMs for a minimum of three VMs. The additional VMs you create here can become nodes in a multinode Service Domain cluster, or remain unclustered individual/single node Service Domains.

What to do next

Add the newly-created Service Domain VM in the Karbon Platform Services management console. See:
  • Getting Started - Infrastructure Administrator
  • Adding a Single Node Service Domain for single node service domains
  • Manage a Multinode Service Domain for Service Domains of three or more nodes

Deploying the Service Domain Image on a Bare Metal Server

Before you begin

  • For installing the Service Domain on bare metal, see About the Service Domain Image Files for available image types. In this procedure, you can use a RAW or EFI RAW compressed Service Domain image depending on the bare metal server BIOS type.
  • For a list of approved hardware for bare metal installation, contact Nutanix Support or send email to kps@nutanix.com.
  • Before imaging the bare metal server, prepare it by updating any required firmware and performing hardware RAID virtual disk configuration.
  • Minimum destination disk size is 200 GB.
  • Single node Service Domain support only. Multinode Service Domains are not supported.

Procedure

  1. Download and boot a live image.
    You can image the bare metal server by live-booting any Linux operating system that includes destination driver support and these utilities or packages: lshw (list hardware), tar (tape archive), gzip (GNU compression utility), dd (file convert and copy). You can live-boot an image through a USB or BMC connection. These steps use an Ubuntu distro and USB drive as an example.
    1. Using macOS or Microsoft Windows, create a bootable Ubuntu USB drive.
    2. Download the Service Domain image and copy it to the USB drive.
    3. Boot from the Ubuntu bootable USB drive with the Try Ubuntu option.
  2. In Ubuntu, open Terminal, find the destination disk, and image it.
    1. Find the destination disk.
      $ sudo lshw -c disk
    2. Go to the directory where you saved the Service Domain image. For example, where drive_label is the USB drive:
      $ cd /media/ubuntu/drive_label
    3. Image the destination disk. For example:
      $ sudo tar -xOzvf service-domain-image.raw.tgz | sudo dd of=destination_disk bs=1M status=progress
      • service-domain-image.raw.tgz . RAW or EFI RAW compressed Service Domain image you downloaded and copied to the USB drive.
      • destination_disk . Destination disk. For example, /dev/sda , /dev/nvme0n1 , and so on.
  3. When imaging is completed successfully, restart the bare metal server.
    1. Connect the primary network interface to a network with DHCP.
    2. Remove the USB drive.
    3. Restart the bare metal server.
    During startup (boot), note the IP address assigned by the DHCP server.

What to do next

See Adding a Single Node Service Domain

Deploying the Service Domain on Amazon Web Services

Before you begin

  • For installing the Service Domain on Amazon Web Services (AWS), see About the Service Domain Image Files for available image types. In this procedure, you can use a RAW uncompressed Service Domain image.
  • You need an AWS account to deploy the Service Domain on AWS.
  • Nutanix recommends that you create a new dedicated Amazon S3 bucket for the Service Domain. This procedure applies a new trust policy to the S3 bucket storing the Service Domain. Creating a dedicated S3 bucket helps prevent other policies or configurations from being inadvertently applied to the Service Domain.
  • Install and configure the AWS command line interface ( aws ) on your local machine so that you can communicate with AWS as required in these steps. For convenience, download the Service Domain image to this machine. (A brief CLI configuration sketch follows this list.)
  • Single node Service Domain support only. Multinode Service Domains are not supported.
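The following is a minimal sketch for confirming that the AWS CLI is installed and configured; the exact installation method depends on your platform, and all commands shown are standard AWS CLI calls:
  $ aws --version                  # confirm the AWS CLI is installed
  $ aws configure                  # supply your access key, secret access key, default Region, and output format
  $ aws sts get-caller-identity    # confirm the CLI can authenticate to your AWS account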

Procedure

  1. Create an AWS S3 bucket and copy the Service Domain RAW image to it.
    1. Create the bucket.
      $ aws s3 mb s3://raw-image-bkt --region aws_region
      • raw-image-bkt . Name of your S3 bucket.
      • aws_region . Your AWS Region.
    2. Copy the Service Domain RAW image to the bucket you just created.
      $ aws s3 cp service-domain-image.raw s3://raw-image-bkt
      • service-domain-image.raw . RAW uncompressed Service Domain image you downloaded.
  2. In a text editor, create a new trust policy configuration file named trust-policy.json .
    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Principal": { "Service": "vmie.amazonaws.com" },
             "Action": "sts:AssumeRole",
             "Condition": {
                "StringEquals":{
                   "sts:Externalid": "vmimport"
                }
             }
          }
       ]
    }
  3. Create a role named vmimport and grant VM Import/Export access to it.
    $ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
  4. Create a new role policy configuration file named role-policy.json .
    Make sure you replace raw-image-bkt with the actual name of your S3 bucket.
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect": "Allow",
             "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket" 
             ],
             "Resource": [
                "arn:aws:s3:::raw-image-bkt",
                "arn:aws:s3:::raw-image-bkt/*"
             ]
          },
          {
             "Effect": "Allow",
             "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:GetBucketAcl"
             ],
             "Resource": [
                "arn:aws:s3:::raw-image-bkt",
                "arn:aws:s3:::raw-image-bkt/*"
             ]
          },
          {
             "Effect": "Allow",
             "Action": [
                "ec2:ModifySnapshotAttribute",
                "ec2:CopySnapshot",
                "ec2:RegisterImage",
                "ec2:Describe*"
             ],
             "Resource": "*"
          }
       ]
    }
  5. Attach the role policy in role-policy.json to the vmimport role.
    $ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
  6. Create a file named container.json .
    Make sure you replace raw-image-bkt with the actual name of your S3 bucket and service-domain-image.raw with the name of the RAW uncompressed Service Domain image.
    {
         "Description": "Karbon Platform Services Raw Image",
         "Format": "RAW",
         "UserBucket": {
            "S3Bucket": "raw-image-bkt",
            "S3Key": "service-domain-image.raw"
        }
     }
    
  7. Import the snapshot as an Amazon Machine Image (AMI) by specifying container.json .
    $ aws ec2 import-snapshot --description "exampletext" --disk-container "file://container.json"
    The command output displays a task_id . With the task_id , you can see the snapshot task progress as follows:
    $ aws ec2 describe-import-snapshot-tasks --import-task-ids task_id
  8. After the task completes successfully, get the snapshot ID ( SnapshotId ), which is needed for the next steps. (A query sketch for extracting only the snapshot ID appears after this procedure.)
    $ aws ec2 describe-import-snapshot-tasks --import-task-ids task_id
    Example completed task status output:
    {
        "ImportSnapshotTasks": [
            {
                "Description": "Karbon Platform Services Raw Image",
                "ImportTaskId": "import-task_id",
                "SnapshotTaskDetail": {
                    "Description": "Karbon Platform Services Raw Image",
                    "DiskImageSize": "disk_size",
                    "Format": "RAW",
                    "SnapshotId": "snapshot_ID"
                    "Status": "completed",
                    "UserBucket": {
                       "S3Bucket": "raw-image-bkt",
                       "S3Key": "service-domain-image.raw"
                    }
                }
            }
        ]
    }
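If you prefer to capture only the snapshot ID from the completed task, the following is a minimal sketch that uses the same command with a JMESPath query ( task_id is the placeholder from the previous steps):
  $ aws ec2 describe-import-snapshot-tasks --import-task-ids task_id \
      --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId' --output text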

Registering and Launching the Image in the EC2 Console

Procedure

  1. To register the AMI from the snapshot, run this command.
    $ aws ec2 register-image --virtualization-type hvm \
    --name "Karbon Platform Services Service Domain Image" --architecture x86_64 \
    --root-device-name "/dev/sda1"  --block-device-mappings \
    "[{\"DeviceName\": \"/dev/sda1\", \"Ebs\": {\"SnapshotId\": \"snapshot_ID\"}}]"
    
    1. snapshot_ID . Snapshot ID from the completed task status output. (A sketch for confirming the registered AMI appears after this procedure.)
  2. In the EC2 console, go to Images > AMIs , and select the AMI you created.
  3. Click Launch .
    1. For Choose Instance Type , select an instance type with a minimum of 4 vCPUs and 16 GiB memory.
    2. Click through to Add Storage and create an EBS volume with minimum disk size of 100 GiB and a device name of /dev/xvdf .
    3. Proceed to Review .
    4. In the Select an existing key pair or create a new key pair dialog box, use an existing key pair or create a new pair.
    5. If you create a new key pair, download the corresponding .PEM file.
  4. Launch the AMI.
  5. Return to the AWS CLI and get the Public and Private IP address of your instance by using the instance ID instance_id .
    $ aws ec2 describe-instances --instance-ids instance_id --query 'Reservations[].Instances[].[PublicIpAddress,PrivateIpAddress]' --output text | sed '$!N;s/\n/ /'
  6. Back at the EC2 console, select Connect and follow the instructions to open an SSH session into your instance.
    Next, you need the serial number and gateway/subnet address to add your Service Domain to Karbon Platform Services.
  7. Log on to the instance and run these commands, noting the serial number and addresses:
    $ cat /config/serial_number.txt
    $ route -n
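To confirm the AMI registration from step 1 and capture the AMI ID without opening the EC2 console, the following is a minimal sketch (the image name matches the --name value used with register-image):
  $ aws ec2 describe-images --owners self \
      --filters "Name=name,Values=Karbon Platform Services Service Domain Image" \
      --query 'Images[0].ImageId' --output text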

What to do next

See Adding a Single Node Service Domain

HTTP/HTTPS Proxy Support for a Service Domain VM

Attach a cloud-init script to configure HTTP/HTTPS proxy server support.

If your network policies require that all HTTP network traffic flow through a proxy server, you can configure a Service Domain to use an HTTP proxy. When you create the Service Domain VM, attach a cloud-init script with the proxy server details. When you then power on the VM and it fully starts, it includes your proxy configuration.

If you require a secure proxy (HTTPS), use the cloud-init script to upload SSL certificates to the Service Domain VM.

You can attach or copy the script in the Custom Script section of the Nutanix AOS cluster Create VM dialog box.
Figure. Create VM Cloud-Init for HTTP/HTTPS Proxy: the cloud-init section of the Nutanix AOS cluster Create VM dialog box.

Sample cloud-init Script for HTTPS Proxy Server Configuration

This script creates an HTTP/HTTPS proxy server configuration on the Service Domain VM after you create and start the VM. The CACERT_PATH= entry in the first content spec is optional here because the certificate path is already specified by the second write_files entry. (A brief verification sketch follows the script.)

#cloud-config
#vim: syntax=yaml
write_files:
- path: /etc/http-proxy-environment
  content: |
    HTTPS_PROXY="http://ip_address:port"
    HTTP_PROXY="http://ip_address:port"
    NO_PROXY="127.0.0.1,localhost"
    CACERT_PATH="/etc/pki/ca-trust/source/anchors/proxy.crt"
- path: /etc/systemd/system/docker.service.d/http-proxy.conf
  content: |
    [Service]
    Environment="HTTP_PROXY=http://ip_address:port" 
    Environment="HTTPS_PROXY=http://ip_address:port" 
    Environment="NO_PROXY=127.0.0.1,localhost"
- path: /etc/pki/ca-trust/source/anchors/proxy.crt
  content: |
    -----BEGIN CERTIFICATE-----
    PASTE CERTIFICATE DATA HERE
    -----END CERTIFICATE-----
runcmd:
- update-ca-trust force-enable
- update-ca-trust extract
- yum-config-manager --setopt=proxy=http://ip_address:port --save
- systemctl daemon-reload
- systemctl restart docker
- systemctl restart sherlock_configserver
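After the VM starts, the following is a minimal sketch for checking that the proxy settings were applied; ip_address and port are the same placeholders used in the script above:
  $ systemctl show --property=Environment docker                          # should list the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY values
  $ curl -I --proxy http://ip_address:port https://karbon.nutanix.com/    # confirms outbound HTTPS traffic flows through the proxy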

Adding a Single Node Service Domain

Create and deploy a Service Domain cluster that consists of a single node.

About this task

To add (that is, create and deploy) a multinode Service Domain consisting of three or more nodes, see Manage a Multinode Service Domain.

If you deploy the Service Domain node as a VM, the procedure described in Creating and Starting the Service Domain VM can provide the VM ID (serial number) and IP address.

  • After adding a Service Domain, the status dot color shows the Service Domain status and can take a few minutes to update. See also Example: Managing a Service Domain From Its Dashboard.
    • Healthy (green). Service domain nodes started and connected.
    • Not Onboarded (gray). Service domain nodes have not started and are not yet connected. Might be transitioning to Healthy.
    • Disconnected (red). Service domain nodes have not started and are not connected. VM might be offline or unavailable.

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > + Service Domain .
  3. Select Any other On-prem or Cloud Environment and click Next .
  4. Name your Service Domain.
    • Starts and ends with a lowercase alphanumeric character
    • Maximum length of 63 lowercase alphanumeric characters
    • Dash (-) and dot (.) characters are allowed. For example, my-servicedomain.contoso.com is a valid Service Domain name.
  5. Select Single Node to create a single-node Service Domain.
    1. You cannot expand this Service Domain later by adding nodes.
  6. Click Add Node and enter the following node details:
    1. Serial Number of your Service Domain node VM.
      • If a Nutanix AOS cluster hosts your Service Domain node VM: in the cluster Prism web console, open the VM page, select the Service Domain node VM, and note the ID.
      • You can also display the serial number by opening this URL in a browser, using your Service Domain node VM IP address: http://service-domain-node-ip-address:8080/v1/sn (a command-line sketch appears after this procedure).
    2. Name the node.
    3. IP Address of your Service Domain node VM.
    4. Subnet Mask and Gateway . Type the subnet mask and gateway IP address in these fields.
    5. Click the check mark icon. To change the details, hover over the ellipses menu and click Edit .
  7. Click Add Category .
    See Creating a Category. You can create one or more categories to add them to a Service Domain.
    1. Select a category and its associated value.
    2. Click Add to select another category and value.
  8. Click Next .
  9. Enter environment variables as one or more key-value pairs for the Service Domain. Click Add Key-Value Pair to add additional pairs.
    You can set environment variables and associated values for each Service Domain as a key-value pair, which are available for use in Kubernetes apps.

    For example, you could set a secret variable key named SD_PASSWORD with a value of passwd1234 . For an example of how to use existing environment variables for a Service Domain in application YAML, see Using Service Domain Environment Variables - Example. See also Configure Service Domain Environment Variables.

  10. If your Service Domain includes a GPU/vGPU, choose its usage case.
    1. To allow access by any Kubernetes app or data pipeline, choose Use GPU for Kubernetes Apps and Data Pipelines .
    2. To allow access by AI Inferencing API (for example, if you are using ML Models), select Use GPU for AI Inferencing .
  11. To provide limited secure shell (SSH) administrator access to your Service Domain for managing Kubernetes pods, select Enable SSH Access .
    SSH Service Domain access enables you to run Kubernetes kubectl commands to help you with application development, debugging, and pod troubleshooting. See Secure Shell (SSH) Access to Service Domains.
  12. Click Add .
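If you prefer to retrieve the node serial number from a shell instead of a browser (step 6), the following is a minimal sketch that queries the same endpoint:
  $ curl http://service-domain-node-ip-address:8080/v1/sn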

What to do next

See Adding a Data Source and IoT Sensor.
See [Optional] Creating Your Cloud Profile

Manage a Multinode Service Domain

Create and deploy a Service Domain cluster that consists of three or more nodes.

Caution: Nutanix does not support upgrading Tech Preview multinode Service Domains to generally available Service Domain versions, then deploying those Service Domains in a production environment. For example, upgrading a multinode Service Domain v1.18 to v2.0.0, then using that Service Domain in production.

A multinode Service Domain is a cluster initially consisting of a minimum of three leader nodes. Each node is a single Service Domain VM hosted in an AHV cluster.

Creating and deploying a multinode Service Domain is a three-step process:

  1. Log on to the AHV cluster web console and upload the Service Domain disk image to a Nutanix AHV cluster through the image service as described in Uploading the Service Domain Image.
  2. Create three or more Service Domain VMs from this disk image, as described in Creating and Starting the Service Domain VM.
  3. Add a multinode Service Domain in the Karbon Platform Services management console as described in Adding a Multinode Service Domain.

Multinode Service Domain Requirements and Limitations

The Service Domain image version where Karbon Platform Services introduces this feature is described in the Karbon Platform Services Release Notes.

Service Domain VM hosting
  • Create and then start Service Domain VMs in a Nutanix AHV cluster only.
Single node Service Domain
  • You can create a single node Service Domain from a multinode compatible image version. It must remain a single node Service Domain, as you cannot expand this domain type.
  • You can upgrade an existing single node Service Domain to a multinode compatible image version, but it is limited for use as a single node Service Domain only.
Supported multinode Service Domain configuration
  • You can add a multinode Service Domain consisting of three Service Domain nodes initially created from a multinode compatible image version only.
  • Each node in a multinode Service Domain must reside on the same subnet.
  • You cannot mix image versions. Create each node in a multinode Service Domain from the same image version. You can also upgrade them to the same multinode compatible version later.
Service Domain High Availability Kubernetes API Support

Starting with new Service Domain version 2.3.0 deployments, high availability support for the Service Domain is now implemented through the Kubernetes API server (kube-apiserver). This support is specific to new multinode Service Domain 2.3.0 deployments. When you create a multinode Service Domain to be hosted in a Nutanix AHV cluster, you must specify a Virtual IP Address (VIP), which is typically the IP address of the first node you add.

  • To enable the HA kube-apiserver support, ensure that the VIP address is in the same subnet as all the Service Domain VMs and that it has not already been allocated to any VM. Otherwise, the Service Domain does not enable this feature.
  • Also ensure that the VIP address is not part of any cluster IP address pool range that you specified when you created a virtual network for guest VMs in the AHV cluster. That is, the VIP address must be outside this IP pool address range. Otherwise, creation of the Service Domain fails. (A quick sanity-check sketch follows this list.)
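One informal way to check whether a candidate VIP address is already in use on the subnet (a quick signal only, not a guarantee), shown as a minimal sketch with vip_address as the candidate address:
  $ ping -c 3 vip_address       # replies indicate the address is already in use
  $ arp -n | grep vip_address   # an existing ARP entry also indicates the address is in use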
Adding or removing nodes from a multinode Service Domain
  • You can add or remove nodes to an existing multinode Service Domain. However, you cannot remove the first three nodes you add to the Service Domain. These nodes are tagged as leader nodes. Each subsequent node you add is a worker node and is removable.
Configuring Storage and Storage Infrastructure

Each node requires access to shared storage from an AOS cluster. Ensure that you meet the following requirements to create a storage profile. Adding a Multinode Service Domain requires these details.

On your AOS cluster:

  • Create a Cluster Administrator user dedicated for use with this feature. Do not use this admin user for day-to-day AOS cluster tasks. For information about creating a user role, see Creating a User Account in the Security Guide .
  • You must configure the AOS cluster with a cluster virtual IP address and an iSCSI Data Services IP address. See Modifying Cluster Details in the Prism Web Console Guide .
  • Ensure that the AOS cluster has at least one storage container available for shared storage use by the nodes. If you want to optionally maintain a storage quota for use by the nodes, create a new storage container. See Creating a Storage Container in the Prism Web Console Guide .
  • Get the storage container name from the Prism Element web console Storage dashboard. Do not use the NutanixManagementShare or SelfServiceContainer storage containers. AOS reserves these storage containers for special use.
Unsupported multinode Service Domain configurations
Nutanix does not support the following configurations.
  • You cannot create (add) a multinode Service Domain where you have upgraded each node to a multinode compatible version from a previous incompatible single-node-only version.

    For example, you have upgraded three older single-node Service Domains to a multinode image version. You cannot create a multinode Service Domain from these nodes.

  • In a multinode Service Domain, you cannot mix Service Domain nodes that you have upgraded to a multinode compatible version with newly created multinode compatible nodes.

    For example, you have upgraded two older single-node Service Domains to a multinode image version. You have a newly created multinode compatible single node. You cannot add these together to form a new multinode Service Domain.

  • Service domain version 1.14 and previous versions allow you to create single node Service Domains only. These Service Domain nodes are not "multinode aware" but are still useful as single node Service Domains.

Adding a Multinode Service Domain

Create and deploy a Service Domain cluster that consists of three or more nodes.

Before you begin

  • See Multinode Service Domain Requirements and Limitations.
  • If you deploy each Service Domain node as a VM, the procedure described in Creating and Starting the Service Domain VM can provide the VM ID (serial number) and IP address.
  • After adding a Service Domain, the status dot color shows the Service Domain status and can take a few minutes to update. See also Example: Managing a Service Domain From Its Dashboard.
    • Healthy (green). Service domain nodes started and connected.
    • Not Onboarded (gray). Service domain nodes have not started and are not yet connected. Might be transitioning to Healthy.
    • Disconnected (red). Service domain nodes have not started and are not connected. VM might be offline or unavailable.

Adding Each Node

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > + Service Domain .
  3. Select Any other On-prem or Cloud Environment and click Next .
  4. Name your Service Domain.
    • Starts and ends with a lowercase alphanumeric character
    • Maximum length of 63 lowercase alphanumeric characters
    • Dash (-) and dot (.) characters are allowed. For example, my.service-domain is a valid Service Domain name.
  5. Select Multi-Node to create a multinode Service Domain type.
  6. Click Add Node and enter the following node details:
    1. Name the node.
    2. Serial Number of your Service Domain node VM.
      • If a Nutanix AOS cluster hosts your Service Domain node VM: in the cluster Prism web console, open the VM page, select the Service Domain node VM, and note the ID.
      • You can also display the serial number by opening this URL in a browser. Use your Service Domain node VM IP address: http://service-domain-node-ip-address:8080/v1/sn
    3. IP Address of your Service Domain node VM.
      The IP address of the first node you add becomes the Service Domain Virtual IP Address . By default, Karbon Platform Services uses this node IP address to represent the Service Domain cluster in the Service Domains List page.
    4. Subnet Mask and Gateway . Type the subnet mask and gateway IP address in these fields.
    5. Click the check mark icon. To change the details, hover over the ellipses menu and click Edit .
    6. Click Add Node and repeat these steps to add the remaining two leader nodes and any worker nodes.
      The first three nodes are tagged as leader nodes and cannot be removed. Nodes four and later are considered worker nodes and can be removed.
    7. Virtual IP Address . The IP address of the first node you add becomes the Service Domain Virtual IP Address . By default, this node IP address represents the Service Domain cluster in the Service Domains List page.

      Starting with new Service Domain version 2.3.0 deployments, high availability support for the Service Domain is now implemented through the Kubernetes API server (kube-apiserver). This support is specific to new multinode Service Domain 2.3.0 deployments.

      To enable the HA kube-apiserver support, ensure that the VIP address is part of the same subnet as the Service Domain VMs and the VIP address is unique (that is, has not already been allocated to any VM). Otherwise, the Service Domain will not enable this feature.

      Also ensure that the VIP address in this case is not part of any cluster IP address pool range that you have specified when you created a virtual network for guest VMs in the AHV cluster. That is, the VIP address must be outside this IP pool address range. Otherwise, creation of the Service Domain in this case will fail.

Adding Categories

Procedure

  1. Click Add Category .
    See Creating a Category. You can create one or more categories to add them to a Service Domain.
    1. Select a category and its associated value.
    2. Click Add to select another category and value.
  2. Click Add .
    You can expand this Service Domain later by adding nodes. You can also remove nodes from this Service Domain.
  3. On the Service Domains page, you can add another Service Domain by clicking + Service Domain .
  4. You can also Edit Service Domain properties or Remove a Service Domain from Service Domains by selecting it.
  5. Click Next to configure storage and advanced settings.

Configuring Storage (Nutanix Volumes)

Before you begin

Each node requires access to shared storage from an AOS cluster. Ensure that you meet the following requirements to create a storage profile for the nodes. On your AOS cluster:
  • Create a Cluster Administrator user as described in the Nutanix Security Guide: Creating a User Account .
  • Ensure that the AOS cluster is configured with a cluster virtual IP address and an iSCSI Data Services IP address. See the Prism Web Console Guide: Modifying Cluster Details.
  • Ensure that the AOS cluster has at least one storage container available for shared storage use by the nodes. If you want to optionally maintain a storage quota for use by the nodes, create a new storage container. See the Prism Web Console Guide: Creating a Storage Container.
  • Get the storage container name from the Prism Element web console Storage dashboard. Do not use the NutanixManagementShare or SelfServiceContainer storage containers. AOS reserves these storage containers for special use.

Procedure

  1. From the Storage Infrastructure drop-down menu, select Nutanix Volumes .
  2. In the Nutanix Cluster Virtual IP Address field, enter the AOS cluster virtual IP address.
  3. In the Username and Password fields, enter the AOS cluster administrator user credentials.
  4. In the Data Services IP Address field, enter the iSCSI Data Services IP address.
  5. In the Storage Container Name field, enter the cluster storage container name.
  6. Continue to Advanced Settings . Otherwise, click Done .

What to do next

  • See Adding a Data Source and IoT Sensor or [Optional] Creating Your Cloud Profile.
  • See also Expanding a Multinode Service Domain or Removing a Worker Node from a Multinode Service Domain

Advanced Settings

Procedure

  1. Enter environment variables as one or more key-value pairs for the Service Domain. Click Add Key-Value Pair to add additional pairs.
    You can set environment variables and associated values for each Service Domain as a key-value pair, which are available for use in Kubernetes apps.

    For example, you could set a secret variable key named SD_PASSWORD with a value of passwd1234 . For an example of how to use existing environment variables for a Service Domain in application YAML, see Using Service Domain Environment Variables - Example. See also Configure Service Domain Environment Variables.

  2. If your Service Domain includes a GPU/vGPU, choose its usage case.
    1. To allow access by any Kubernetes app or data pipeline, choose Use GPU for Kubernetes Apps and Data Pipelines .
    2. To allow access by AI Inferencing API (for example, if you are using ML Models), select Use GPU for AI Inferencing .
  3. To provide limited secure shell (SSH) administrator access to your Service Domain for managing Kubernetes pods, select Enable SSH Access .
    SSH Service Domain access enables you to run Kubernetes kubectl commands to help you with application development, debugging, and pod troubleshooting. See Secure Shell (SSH) Access to Service Domains.
  4. Click Done .

What to do next

  • See Adding a Data Source and IoT Sensor or [Optional] Creating Your Cloud Profile.
  • See also Expanding a Multinode Service Domain or Removing a Worker Node from a Multinode Service Domain

Expanding a Multinode Service Domain

Add nodes to an existing multinode Service Domain.

Before you begin

  • See Multinode Service Domain Requirements and Limitations.
  • You can add or remove nodes to an existing multinode Service Domain. However, you cannot remove the first three nodes you add to the Service Domain. These nodes are tagged as leader nodes. Each subsequent node you add is a worker node and is removable.
  • If you deploy each Service Domain node as a VM, the procedure described in Creating and Starting the Service Domain VM can provide the VM ID (serial number) and IP address.
  • After adding a Service Domain, the status dot color shows Service Domain status and can take a few minutes to update. See also Example: Managing a Service Domain From Its Dashboard.
    • Healthy (green). Service domain nodes started and connected.
    • Not Onboarded (grey). Service domain nodes have not started and are not yet connected. Might be transitioning to Healthy.
    • Disconnected (red). Service domain nodes have not started and are not connected. VM might be offline or unavailable.

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > List .
  3. Select a multinode Service Domain, then click Edit .
    You can also rename the Service Domain as part of this procedure.
  4. To expand the Service Domain, click Add Node and enter the following node details:
    1. Name the node.
    2. Serial Number of your Service Domain node VM.
      • If a Nutanix AOS cluster hosts your Service Domain node VM: in the cluster Prism web console, open the VM page, select the service domain node VM, and note the ID.
      • You can also display the serial number by opening this URL in a browser. Use your Service Domain node VM IP address: http://service-domain-node-ip-address:8080/v1/sn
    3. IP Address of your Service Domain node VM.
    4. Subnet Mask and Gateway . Type the subnet mask and gateway IP address in these fields.
    5. Click the check mark icon. To change the details, hover over the ellipses menu and click Edit .
    6. Click Add Node and repeat these steps to add more nodes.

Adding Categories and Updating the Service Domain

Procedure

  1. Click Add... .
    See Creating a Category. You can create one or more categories to add them to a Service Domain.
    1. Select a category and its associated value.
    2. Click Add to select another category and value.
  2. To edit an existing category, select a category and value from the drop-down menu.
  3. To delete a category, click the trash can icon next to it.
  4. When you finish adding or deleting categories, click Next .
  5. Click Update .
  6. On the Service Domains page, you can add another Service Domain by clicking + Service Domain .
  7. You can also Edit Service Domain properties or Remove a Service Domain from Service Domains by selecting it.

What to do next

Adding a Data Source and IoT Sensor or [Optional] Creating Your Cloud Profile. See also Removing a Worker Node from a Multinode Service Domain.

Removing a Worker Node from a Multinode Service Domain

Remove worker nodes from a multinode Service Domain. Any node added to an existing three-node Service Domain is considered a worker node.

Before you begin

  • See Multinode Service Domain Requirements and Limitations.
  • You can add or remove nodes to an existing multinode Service Domain. However, you cannot remove the first three nodes you add to the Service Domain. These nodes are tagged as leader nodes. Each subsequent node you add is a worker node and is removable.
  • Before you remove a node, ensure that the remaining nodes in the Service Domain have enough CPU, storage, and memory capacity to run applications running on the removed node. The Service Domains dashboard shows health status, CPU/Memory/Storage usage, version details, and more information for every Service Domain.
  • After adding a Service Domain, the status dot color shows Service Domain status and can take a few minutes to update. See also Example: Managing a Service Domain From Its Dashboard.
    • Healthy (green). Service domain nodes started and connected.
    • Not Onboarded (grey). Service domain nodes have not started and are not yet connected. Might be transitioning to Healthy.
    • Disconnected (red). Service domain nodes have not started and are not connected. VM might be offline or unavailable.

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > List .
  3. Select a multinode Service Domain, then click Edit .
    You can also rename the Service Domain as part of this procedure.
  4. In the Service Domain table, click the ellipsis menu next to a node and select Remove .
    This step removes the node.

Adding, Editing, or Deleting Categories and Updating the Service Domain

Procedure

  1. Click Add... .
    See Creating a Category. You can create one or more categories to add them to a Service Domain.
    1. Select a category and its associated value.
    2. Click Add to select another category and value.
  2. To edit an existing category, select a category and value from the drop-down menu.
  3. To delete a category, click the trash can icon next to it.
  4. When you finish adding or deleting categories, click Next , then click Update .
  5. On the Service Domains page, you can add another Service Domain by clicking + Service Domain .
  6. You can also Edit Service Domain properties or Remove a Service Domain from Service Domains by selecting it.

What to do next

Adding a Data Source and IoT Sensor or [Optional] Creating Your Cloud Profile. See also Expanding a Multinode Service Domain.

Onboarding a Multinode Service Domain By Using Nutanix Karbon

You can now onboard a multinode Service Domain by using Nutanix Karbon as your infrastructure provider to create a Service Domain Kubernetes cluster.

About this task

For advanced Service Domain settings, the Nutanix public Github repository includes a README file describing how to use the kps command line and the required YAML configuration file for the cluster. This public Github repository at https://github.com/nutanix/karbon-platform-services/tree/master/cli also describes how to install the kps command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Before you begin

This task requires the following.
  • Prism Central deployment managing at least one Prism Element Cluster running AHV.
  • Nutanix Karbon installed on Prism Central. Ensure that you have downloaded the ntnx-1.0 image. See Downloading Images in the Nutanix Kubernetes Engine Guide .
  • kps command line installed on a machine running Linux or MacOS, as described at the Nutanix public Github repository.

Procedure

  1. Log on to the cloud management console at https://karbon.nutanix.com/.
  2. Click Infrastructure > Service Domains > + Service Domain .
  3. Select Nutanix AOS and click Next .
  4. Download and configure the kps command line if you have not previously done so.
    See the README at https://github.com/nutanix/karbon-platform-services/tree/master/cli for details.
  5. Name your Service Domain.
    • Starts and ends with a lowercase alphanumeric character
    • Maximum length of 63 lowercase alphanumeric characters
    • Dash (-) and dot (.) characters are allowed. For example, my-servicedomain.contoso.com is a valid Service Domain name.
  6. Enter these details about your Prism Central, Prism Element, and Nutanix Karbon deployments.
    1. Prism Central IP Address. IP address of the Prism Central cluster.
    2. Prism Central Username. User name of a Prism Central admin user.
    3. Prism Central Password. Password for the Prism Central admin user.
    4. Nutanix Cluster Name. Cluster name of the AHV cluster being managed by Prism Central
    5. Storage Container Name. Name of the storage container located on the Prism Element AHV cluster.
    6. Subnet Name. Name of the subnet configured on the Prism Element AHV cluster.
    7. Karbon Cluster Master VIP Address. Master VIP IP address for the Kubernetes API server (kube-apiserver). AHV IP address management (IPAM) provides an IP address for each node. Ensure that the VIP address has not been already allocated and is not part of any cluster IP address pool range that you have specified when you created a virtual network in the AHV cluster. The VIP address must be outside this IP pool address range but part of the same VLAN.
      For more information about networking, see Cluster Setup in the Nutanix Kubernetes Engine Guide .

      This information populates the kps command line options and parameters in the next step.

    Figure. Example Create Service Domain with Karbon Page: the fields needed to create the Service Domain through Nutanix Karbon.

  7. Click Copy next to the kps command line and run it on the machine where you installed the kps command line.
    For advanced Service Domain settings, the Nutanix public Github repository includes a README file describing how to use the kps command line and the required YAML configuration file for the cluster.
  8. Click Close .

Upgrade Service Domains

Your network bandwidth might affect how long it takes to completely download the latest Service Domain version. Nutanix recommends that you perform any upgrades during your scheduled maintenance window.

Note: Each Kubernetes release potentially deprecates or removes support for API versions or resources. To help ensure that your applications run correctly after upgrading your Service Domain version, Nutanix recommends that you check the Deprecated API Migration Guide available at the Kubernetes Documentation web site. Depending on the deprecated or removed APIs, you might have to modify your Kubernetes App YAML files.

Upgrade your existing Service Domain VM by using the Upgrades page in the cloud management console. From Upgrades , you can see available updates that you can download and install on one or more Service Domains of your choosing.

Upgrading the Service Domain is a two-step process where you:

  1. Download an available upgrade to one or more existing Service Domains.
  2. Upgrade selected Service Domains to the downloaded version.
Upgrades lets you choose how to upgrade your Service Domains.
Note: During a Service Domain upgrade operation, you might see service or app related alerts such as Kafka is degraded, partitions are under-replicated, and others. These messages are expected and can be safely ignored when you are upgrading your Service Domain. Nutanix recommends that you perform installation and upgrades during your scheduled maintenance window or outside your normal business hours.
Table 1. Upgrades Page Links
Link Use Case
Service Domains "1-click" download or upgrade for all upgrade-eligible Service Domains.
  • Select the most recent available version (N) or the next most recent available versions (N-1, and N-2). For example, if your current Service Domain version is 1.16, you can download, then upgrade your Service Domain to available versions 1.16.1, 1.17, and 1.18.
  • Upgrade all Service Domains where you have already downloaded the selected available Service Domain version(s)
Download and upgrade on all eligible Use this workflow to download an available version to all Service Domains eligible to be upgraded. You can then decide when you want to upgrade the Service Domain to the downloaded version. See Upgrading All Service Domains.
Download and upgrade on selected Use this workflow to download an available version to one or more Service Domains that you select and are eligible to be upgraded. This option appears after you select one or more Service Domains. After downloading an available Service Domain version, upgrade one or more Service Domains when convenient. See Upgrading Selected Service Domains.
Task History See Checking Upgrade Task History.
View Recent History Appears in the Service Domains page list for each Service Domain and shows a status summary.

Upgrading All Service Domains

Before you begin

Note:
  • During a Service Domain upgrade operation, you might see service or app related alerts such as Kafka is degraded, partitions are under-replicated, and others. These messages are expected and can be safely ignored when you are upgrading your Service Domain. Nutanix recommends that you perform installation and upgrades during your scheduled maintenance window or outside your normal business hours.
  • Each Kubernetes release potentially deprecates or removes support for API versions or resources. To help ensure that your applications run correctly after upgrading your Service Domain version, Nutanix recommends that you check the Deprecated API Migration Guide available at the Kubernetes Documentation web site. Depending on the deprecated or removed APIs, you might have to modify your Kubernetes App YAML files.
Log on to the cloud management console. Nutanix recommends that you perform any upgrades during your scheduled maintenance window.

About this task

  • Download an available Service Domain version to all eligible Service Domains
  • Upgrade all Service Domains where you have already downloaded an available service domain version
  • You can select the most recent available version (N) or the next most recent available versions (N-1, and N-2). For example, if your current Service Domain version is 1.16, you can download, then upgrade your Service Domain to available versions 1.16.1, 1.17, and 1.18.

Procedure

  1. Go to Administration > Upgrades > Service Domains .
    A Service Domain table lists each Service Domain, current status (Healthy, Not Healthy, Disconnected), current installed version, and upgrade actions and history.
  2. To download a version to all eligible Service Domains, click Download on all eligible and select a version (for example 1.18).
    The Download on all eligible drop-down menu shows the latest version as vX.xx.x (Latest) .
    The Download popup window shows the eligible service domains as selected. Clear (unselect) any Service Domain to prevent the bundle from downloading onto it.
  3. Click Download .
    The Service Domain version downloads to all Healthy Service Domains. The Actions column shows a downloading message. Hover over the task circle to see current progress. When the Actions column displays Upgrade to vX.xx.x , downloads are completed and you can upgrade the Service Domains.
  4. To upgrade the Service Domains when downloads are completed, click Upgrade all eligible to and select a version.
  5. In the Upgrade Service Domains popup window, clear (unselect) any Service Domain that you do not want to upgrade, then click Upgrade .
    The Actions column shows upgrade progress similar to download. When the Actions column displays View Recent History and the Version column is updated, upgrade operations are complete. See also Checking Upgrade Task History.

Upgrading Selected Service Domains

Before you begin

Note:
  • During a Service Domain upgrade operation, you might see service or app related alerts such as Kafka is degraded, partitions are under-replicated, and others. These messages are expected and can be safely ignored when you are upgrading your Service Domain. Nutanix recommends that you perform installation and upgrades during your scheduled maintenance window or outside your normal business hours.
  • Each Kubernetes release potentially deprecates or removes support for API versions or resources. To help ensure that your applications run correctly after upgrading your Service Domain version, Nutanix recommends that you check the Deprecated API Migration Guide available at the Kubernetes Documentation web site. Depending on the deprecated or removed APIs, you might have to modify your Kubernetes App YAML files.
Log on to the cloud management console.

About this task

Use this workflow to download an available version to one or more Service Domains. You can then decide when you want to upgrade the Service Domain to the downloaded version. Your network bandwidth might affect how long it takes to completely download the latest Service Domain version. Nutanix recommends that you perform any upgrades during your scheduled maintenance window.

Procedure

  1. Go to Administration > Upgrades > Service Domains .
    A Service Domain table lists each Service Domain, current status (Healthy, Not Healthy, Disconnected), current installed version, and upgrade actions and history.
  2. To download the upgrade to selected eligible Service Domains, do these steps.
    1. Select one or more Service Domains in the table.
    2. Click Download on selected num and select a version (for example 1.18). num is the number of eligible Service Domains you selected. The drop-down menu shows the latest version as vX.xx.x (Latest)
    The Download popup window shows the eligible service domains as selected. Clear (unselect) any Service Domain to prevent the bundle from downloading onto it.
  3. Click Download .
    The Service Domain version downloads to all Healthy Service Domains. The Actions column shows a downloading message. Hover over the task circle to see current progress. When the Actions column displays Upgrade to vX.xx.x , downloads are completed and you can upgrade the Service Domains.
  4. To upgrade selected eligible Service Domains, do these steps.
    1. Select one or more Service Domains where the Actions status is Upgrade to vX.xx.x .
    2. Click Upgrade selected num . num is the number of eligible Service Domains you selected.
    The Upgrade popup window shows the eligible Service Domains as selected. Clear (unselect) any Service Domain to prevent it from being upgraded.
  5. Click Upgrade .
    The Actions column shows upgrade progress similar to download. When the Actions column displays View Recent History and the Version column is updated, upgrade operations are complete. See also Checking Upgrade Task History.

Checking Upgrade Task History

Before you begin

Log on to the cloud management console.

About this task

After downloading and upgrading an available Service Domain version, upgrade task history is available in the console.

View Recent History appears in the Service Domains page list for each Service Domain and shows a status summary.

Procedure

  1. To see Service Domain upgrade task history, go to Administration > Upgrades > Task History .
  2. You can see status for single node and multinode Service Domains.
    1. Version . The available Service Domain version to which you attempted to download or upgrade.
    2. Status . Downloaded (version download is complete), Upgraded (Service Domain is upgraded), Upgrade Failed (Service Domain was not upgraded).
    3. Completed on . Date and time task occurred (Upgrade Failed) or completed (Upgraded or Downloaded).
    4. Service Domains . Link to the Service Domain(s) where the action occurred. This link is especially helpful with multinode Service Domains. After you click the Service Domain link, a popup window shows the status for each Service Domain. The window includes a filter field so that you can see individual status.

Projects

A project is an infrastructure, apps, and data collection created by the infrastructure administrator for use by project users.

When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project. When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.

A project can consist of:

  • Existing administrator or project users
  • A Service Domain
  • Cloud profile
  • Container registry profile

When you add a Service Domain to a project, all resources such as data sources associated with the Service Domain are available to the users added to the project.

Projects Page

The Projects page lets an infrastructure administrator create a new project and lists projects that administrators can update and view.

For project users, the Projects page lists the projects that the infrastructure administrator has created and assigned to them. Project users can view these projects and update them with applications, data pipelines, and so on. Project users cannot remove a project.

When you click a project name, the project Summary dashboard is displayed and shows resources in the project.

You can click any of the project resource menu links to edit or update existing resources, or create and add resources to a project. For example, you can edit an existing data pipeline in the project or create a new one and assign it to the project. As another example, click Kafka to show details for the Kafka data service associated with the project (see Viewing Kafka Status).

Figure. Project Summary Dashboard: project summary dashboard with component links.

Creating a Project

As an infrastructure administrator, create a project. To complete this task, log on to the cloud management console.

About this task

  • When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project.
  • When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.

Procedure

  1. Click the menu button, then click Projects > Create .
    1. Name your project. Up to 200 alphanumeric characters are allowed.
    2. Description . Provide a relevant description for your project.
    3. Click Add Users to add users to a project.
    4. Click Next .
  2. Add a Service Domain, which associates any resources available to the Service Domain to your project.
    1. Select Select By Category to choose and apply one or more categories associated with a Service Domain.
      Click the plus sign ( + ) to add more categories.
    2. Select Select Individually to choose a Service Domain, then click Add Service Domain to select one or more Service Domains.
  3. Select a Cloud Profile and Container Registry .
    Click the plus sign ( + ) to select additional cloud profiles or container registries.
  4. Click Next to enable one or more project services.
    1. Enabled indicates that this service is available to all projects.
    2. Click Enable to enable a service for all project users.
      See Enabling or Disabling One or More Services for a Project.
  5. Click Create .

Editing a Project

Update an existing project. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons. See Enabling or Disabling One or More Services for a Project.
  3. Click Edit .
    1. Update the project's general information by updating the name and description.
    2. Click the trashcan to remove a user from the list. Click Edit to open the user dialog box where you add or remove users associated with the project.
    3. Click Next to update project resources.
  4. Update the associated Service Domains.
    1. Select Select By Category to add or remove one or more categories associated with a Service Domain.
      • Click the trashcan to remove a category.
      • Click the plus sign ( + ) to add more categories.
    2. Select Select Individually to remove or add a Service Domain. Do one of the following.
      • Click the trashcan to remove a Service Domain in the list.
      • Click Edit to select one or more Service Domains.
  5. For Cloud Profile and Container Registry :
    1. Click the trashcan to remove cloud profiles or container registries.
    2. Click the plus sign ( + ) to add cloud profiles or container registries.
  6. Click Next to enable one or more project services.
    1. Enabled indicates that this service is available to all projects.
    2. Click Enable to enable a service for all project users.
      See Enabling or Disabling One or More Services for a Project.
  7. Click Update .

Removing a Project

As an infrastructure administrator, delete a project. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Select a project in the list.
    The project dashboard is displayed along with Edit , Manage Services , and Remove buttons. See Enabling or Disabling One or More Services for a Project.
  3. If Remove does not appear as an option, you might need to delete any apps associated with the project.
  4. After removing associated apps, return to the Projects page and select the project.
  5. Click Remove , then click Delete to confirm.
    The Projects page lists any remaining projects.

Managing Project Services

The Karbon Platform Services cloud infrastructure provides services that are enabled by default. It also provides access to services that you can enable for your project.

The platform includes these ready-to-use services, which provide an advantage over self-managed services:

  • Secure by design.
  • No life cycle management. Service Domains do not require any user intervention to maintain or update service versions, patches, or other hands-on activities. For ease of use, all projects using the Service Domain use the same service version.
  • Dedicated service resources. Your project uses isolated service resources. Service resources are not shared with other projects. Applications within a project are not required to authenticate to use the service.
  • Service-specific alerts. Like other Karbon Platform Services features, the Service Domain helps monitor service health and raises service-specific alerts.
App Runtime Services
These services are enabled by default on each Service Domain.
  • Kubernetes Apps. Containers as a service. You can create and run Kubernetes apps without having to manage the underlying infrastructure.
  • Functions and Data Pipelines. Run server-less functions based on data triggers, then publish data to specific cloud or Service Domain endpoints.
  • AI Inferencing. Enable this service to use your machine learning (ML) models in your project. The ML Model feature provides a common interface for functions (that is, scripts) or applications.
Ingress Controller
Traefik or Nginx-Ingress. If your project requires Ingress controller routing, you can choose the open source Traefik router or the NGINX Ingress controller to enable on your Service Domain. See Enable an Ingress Controller.
Service Mesh
Istio. Provides secure connection, traffic management, and telemetry. See Istio Service Mesh.
Data Streaming | Messaging
  • Kafka. Available for use within project applications and data pipelines, running on a Service Domain hosted in your environment. See Kafka as a Service.
  • NATS. Available for use within project applications and data pipelines. In-memory high performance data streaming including pub/sub (publish/subscribe) and queue-based messaging.
Logging | Monitoring | Alerting
  • Prometheus. Provides Kubernetes app monitoring and logging, alerting, metrics collection, and a time series data model (suitable for graphing). See Prometheus Application Monitoring and Metrics as a Service.
  • Logging. Provides log monitoring and log bundling. See Audit Trail and Log Management.


Enabling or Disabling One or More Services for a Project

Enable or disable services associated with your project.

Before you begin

  • You cannot disable the default Kubernetes Apps, Functions and Data Pipelines, and AI Inferencing services.
  • Disabling Kafka. When you disable Kafka, data stored in PersistentVolumeClaim (PVC) is not deleted and persists. If you later enable Kafka on the same Service Domain, the data is reused.
  • Disabling Prometheus. When you disable Prometheus and then later enable it, the service creates a new PVC and clears any previously-stored metrics data stored in PVC.

Procedure

  1. From the home menu, click Projects , then click a project.
    Depending on your role, you can also select a project from the Admin Console drop-down menu.
  2. Click Manage Services .
  3. Enable a service.
    1. Click Enable in a service tile.
    2. Click Enable service_name .
      For Istio, apps in this project restart.
    3. Click Confirm .
    If the minimum Service Domain version is detected, the service is available for use in your project in a few minutes, after the service initializes. Otherwise, an alert is triggered if the Service Domain does not support the service.
  4. Disable a service.
    1. Click Disable in a service tile.
    2. Click Disable service_name .
      For Istio, apps in this project restart.
    3. Click Confirm .
    The service is no longer available for use in your project.

Kafka as a Service

Kafka is available as a data service through your Service Domain.

The Kafka data service is available for use within a project's applications and data pipelines, running on a Service Domain hosted in your environment. The Kafka service offering from Karbon Platform Services provides the following advantages over a self-managed Kafka service:

  • Secure by design.
  • No need to explicitly declare Kafka topics. With Karbon Platform Services Kafka as a service, topics are automatically created.
  • No life cycle management. Service Domains do not require any user intervention to maintain or update service versions, patches, or other hands-on activities. For ease of use, all projects using the Service Domain use the same service version.
  • Dedicated service resources. Your project uses isolated service resources. Service resources are not shared with other projects. Applications within a project are not required to authenticate to use the service.
  • Service-specific alerts. Like other Karbon Platform Services features, the Service Domain monitors service health and raises service-specific alerts.

Using Kafka in an Application

Information about application requirements and sample YAML application file

Create an application for your project as described in Creating an Application and specify the application attributes and configuration in a YAML file.

Sample Kafka Application YAML Template File


apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: some.container.registry.com/myapp:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: KAFKA_ENDPOINT
          value: {{.Services.Kafka.Endpoint}}
The following list describes the fields in the sample Kafka application YAML file.

  • kind : Deployment . Specify the resource type. Here, use Deployment .
  • metadata : name . Provide a name for your deployment.
  • labels . Provide at least one label. Here, specify the application name as app: my-app .
  • spec . Define the deployment specification.
  • replicas . Here, 1 indicates a single Kafka cluster (single Service Domain instance or VM) to keep data synchronized.
  • selector . Use matchLabels and specify the app name as in labels above.
  • template . Specify the application name here ( my-app ), the same as in the metadata specification above.
  • spec . Here, define the container specification for the application using Kafka.
  • containers :
      • name: my-app . Specify the container application name.
      • image: some.container.registry.com/myapp:1.7.9 . Define the container registry host address and path where the container image is stored.
      • ports: containerPort: 80 . Define the container port. Here, 80.
  • env . Specify the Kafka endpoint environment variable. Leave these values as shown:
      name: KAFKA_ENDPOINT
      value: {{.Services.Kafka.Endpoint}}

Using Kafka in a Function for a Data Pipeline

Information about data pipeline function requirements.

See Functions and Data Pipelines.

You can specify a Kafka endpoint type in a data pipeline. A data pipeline consists of:

  • Input. An existing data source or real-time data stream (output from another data pipeline).
  • Transformation. A function (code block or script) to transform data from a data source (or no function at all to pass data directly to the endpoint).
  • Output. An endpoint destination for the transformed or raw data.

Kafka Endpoint Function Requirements

For a data pipeline with a Kafka topic endpoint:

  • Language . Select the golang scripting language for the function.
  • Runtime Environment . Select the golang runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.

Viewing Kafka Status

In the cloud management console, you can view Kafka data service status when you use Kafka in an application or as a Kafka endpoint in a data pipeline as part of a project. This task assumes you are logged in to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Click Kafka .
    This dashboard shows a consolidated, high-level view of all Kafka topics, deployments, and related alerts. The default view is Topics.
  4. To show all topics for this project, click Topics .
    1. Topics. All Kafka topics used in this project.
    2. Service Domain. Service Domains in the project employing Kafka messaging.
    3. Bytes/sec produced and Bytes/sec consumed. Bandwidth use.
  5. To view Service Domain status where Kafka is used, click Deployments .
    1. List of Service Domains and provisioned status.
    2. Brokers. A No Brokers Available message might mean the Service Domain is not connected.
    3. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with Kafka for this project, click Alerts .

Enable an Ingress Controller

The Service Domain supports the Traefik open source router as the default Kubernetes Ingress controller. You can also choose the NGINX ingress controller instead.

An infrastructure admin can enable an Ingress controller for your project through Manage Services as described in Managing Project Services. You can only enable one Ingress controller per Service Domain.

When you include Ingress controller annotations as part of your application YAML file, Karbon Platform Services uses Traefik as the default on-demand controller.

If your deployment requires it, you can alternately use NGINX (ingress-nginx) as a Kubernetes Ingress controller instead of Traefik.

In your application YAML, specify two snippets:

  • Service snippet. Use annotations to define the HTTP or HTTPS host protocol, path, service domain, and secret for each Service Domain to use as an ingress controller. You can specify one or more Service Domains.
  • Secret snippet. Use this snippet to specify the certificates used to secure app traffic.

Sample Ingress Controller Service Domain Configuration

To securely route application traffic with a Service Domain ingress controller, create YAML snippets to define and specify the ingress controller for each Service Domain.

You can only enable and use one Ingress controller per Service Domain.

Create an application for your project as described in Creating an Application and specify the application attributes and configuration in a YAML file. You can include these Service and Secret snippets, with the Service Domain ingress controller annotations and certificate information, in the app deployment YAML file.

Ingress Controller Service Domain Host Annotations Specification

apiVersion: v1
kind: Service
metadata:
  name: whoami
  annotations:
    sherlock.nutanix.com/http-ingress-path: /notls
    sherlock.nutanix.com/https-ingress-path: /tls
    sherlock.nutanix.com/https-ingress-host: DNS_name
    sherlock.nutanix.com/http-ingress-host: DNS_name
    sherlock.nutanix.com/https-ingress-secret: whoami
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: whoami
Table 1. Ingress Controller Annotations Specification

  • kind : Service . Specify the resource type. Here, use Service to indicate that this snippet defines the ingress controller details.
  • apiVersion : v1 . Here, the Kubernetes API version.
  • metadata : name . Provide an app name to which this controller applies.
  • annotations . These annotations define the ingress controller encryption type and paths for Karbon Platform Services.
      • sherlock.nutanix.com/http-ingress-path: /notls . /notls specifies no Transport Layer Security (TLS) encryption.
      • sherlock.nutanix.com/https-ingress-path: /tls . /tls specifies Transport Layer Security (TLS) encryption.
      • sherlock.nutanix.com/http-ingress-host: DNS_name . Ingress service host, where the service is bound to port 80. DNS_name is a DNS name you can give to your application, for example, my.contoso.net . Ensure that the DNS name resolves to the Service Domain IP address.
      • sherlock.nutanix.com/https-ingress-host: DNS_name . Ingress service host, where the service is bound to port 443. DNS_name is a DNS name you can give to your application, for example, my.contoso.net . Ensure that the DNS name resolves to the Service Domain IP address.
      • sherlock.nutanix.com/https-ingress-secret: whoami . Links the authentication Secret information defined in the Secret snippet to this controller.
  • spec . Define the transfer protocol, port type, and port for the application.
      • protocol: TCP . Transfer protocol of TCP.
      • ports . Define the port name and port: name: web (port name) and port: 80 (TCP port).
  • selector . A selector to specify the application: app: whoami .
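
The Service snippet above selects pods labeled app: whoami . For reference, the following is a minimal Deployment sketch that runs such a pod; the image name and replica count are illustrative assumptions, not values from this guide.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami                         # must match the Service selector above
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: some.container.registry.com/whoami:latest   # illustrative image reference
        ports:
        - containerPort: 80               # matches the Service port 80 above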

Securing The Application Traffic

Use a Secret snippet to specify the certificates used to secure app traffic.

apiVersion: v1
kind: Secret
metadata:
  name: whoami
type: kubernetes.io/tls
data:
  ca.crt: cert_auth_cert
  tls.crt: tls_cert
  tls.key: tls_key
Table 2. TLS Certificate Specification

  • apiVersion : v1 . Here, the Kubernetes API version.
  • kind : Secret . Specify the resource type. Here, use Secret to indicate that this snippet defines the authentication details.
  • metadata : name . Provide an app name to which this certificate applies.
  • type . Define the authentication type used to secure the app. Here, kubernetes.io/tls .
  • data . Add the keys for each certificate type: certificate authority certificate ( ca.crt ), TLS certificate ( tls.crt ), and TLS key ( tls.key ).

Viewing Ingress Controller Details

In the cloud management console, you can view Ingress controller status for any controller used as part of a project. This task assumes you are logged in to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Depending on the deployed Ingress controller, click Nginx-Ingress or Traefik .
    This dashboard shows a consolidated, high-level view of all rules, deployments, and related alerts. The default view is Rules.
  4. To show all rules for this project, click Rules .
    1. Application. App where traffic is being routed.
    2. Rules. Rules you have configured for this app. Here, Rules shows the host and paths.
    3. Destination. Application and port number.
    4. Service Domain. Service Domains in the project employing routing.
    5. TLS. Transport Layer Security (TLS) protocol status. On indicates encrypted communication is enabled. Off indicates it is not used.
  5. To view Service Domain status where the controller is used, click Deployments .
    1. List of Service Domains and provisioned status.
    2. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with the controller for this project, click Alerts .

Istio Service Mesh

Istio provides secure connection, traffic management, and telemetry.

Add the Istio Virtual Service and DestinationRules to an Application - Example

In the application YAML snippet or file, define the VirtualService and DestinationRules objects.

These objects specify traffic routing rules for the recommendation-service app host. If the traffic rules match, traffic flows to the named destination (or subset/version of it) as defined here.

In this example, traffic is routed to subset v2 of the recommendation-service app host if it is sent from the Firefox browser; all other traffic is routed to subset v1. The specific policy version ( subset ) for each host helps you identify and manage routed data.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - match:
    - headers:
        user-agent:
          regex: .*Firefox.*
    route:
    - destination:
        host: recommendation-service
        subset: v2
  - route:
    - destination:
        host: recommendation-service
        subset: v1

This DestinationRule YAML snippet defines a load-balancing traffic policy for the policy versions ( subsets ), where any healthy host can service the request.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recomm-svc
spec:
  host: recommendation-service
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

Manage Traffic by Weight - Example

In this YAML snippet, you split traffic between the subsets by specifying a weight of 30 for one destination and 70 for the other. You can also weight them evenly by giving each destination a weight value of 50, as shown in the second example below.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - route:
    - destination:
       host: recommendation-service
       subset: v2
      weight: 30   
    - destination:
       host: recommendation-service
       subset: v1
      weight: 70
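
For comparison, the following sketch shows the same VirtualService with traffic weighted evenly, giving each destination a weight of 50 as described above; only the weight values change.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - route:
    - destination:
        host: recommendation-service
        subset: v2
      weight: 50
    - destination:
        host: recommendation-service
        subset: v1
      weight: 50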

Viewing Istio Details

In the cloud management console, you can view Istio service mesh status associated with applications in a project. This task assumes you are logged in to the cloud management console.

About this task

The Istio tabs show service resource and routing configuration information derived from project-related Kubernetes Apps YAML files and service status.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list, then click Istio in the navigation sidebar to show the default Application Metrics page.
    Application Metrics shows initial details about the applications and related traffic metrics.
    1. Name and Service Domain. Application name and Service Domain where the application is deployed and the Istio service is enabled.
    2. Workloads. One or more workloads associated with the application. For example, Kubernetes Deployments, StatefulSets, DaemonSets, and so on.
    3. Inbound Request Volume / Outbound Request Volume. Inbound/Outbound HTTP requests per second for the last 5 minutes.
  3. To see details about any traffic routing configurations associated with the application, click Virtual Services .
    An application can specify one or more virtual services, which you can deploy across one or more Service Domains.
    1. Application and Virtual Services. Application and service name.
    2. Service Domains. Number and name of the Service Domains where the service is deployed.
    3. Matches. Lists matching traffic routes.
    4. Destinations. Number of service destination host connection or request routes. Expand the number to show any destinations served by the virtual service (v1, v2, and so on) where traffic is routed. If specified in the Virtual Service YAML file, Weight indicates the proportion of traffic routed to each host. For example, Weight: 50 indicates that 50 percent of the traffic is routed to that host.
  4. To see rules associated with Destinations, click Destination Rules .
    An application can specify one or more destination rules, which you can deploy across one or more Service Domains. A destination rule is a policy that is applied after traffic is routed (for example, a load balancing configuration).
    1. Application. Application where the rule applies.
    2. Destination Rules. Rules by name.
    3. Service Domains. Number and name of the Service Domains where the rule is deployed.
    4. Subsets. Name and number of the specific policies (subsets). Expand the number to show any pod labels (v1, v2, and so on), service versions (v1, v2, and so on), and traffic weighting, where rules apply and traffic is routed.
  5. To view Service Domain status where Istio is used, click Deployments .
    1. List of Service Domains where the service is deployed and status.
      • PROVISIONED. The service has been successfully installed and enabled.
      • PROVISIONING. The service initialization and provisioning state is in progress.
      • UNPROVISIONED. The service has been successfully disabled (uninstalled).
      • FAILED. The service provisioning has failed.
    2. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with Istio for this project, click Alerts .

Prometheus Application Monitoring and Metrics as a Service

Note: When you disable Prometheus and then later enable it, the service creates a new PersistentVolumeClaim (PVC) and clears any metrics data previously stored in the PVC.

The Prometheus service included with Karbon Platform Services enables you to monitor endpoints you define in your project's Kubernetes apps. Karbon Platform Services allows one instance of Prometheus per project.

Prometheus collects metrics from your app endpoints. The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, endpoints, and alerts. See Viewing Prometheus Service Status.

You can then decide how to view the collected metrics, through graphs or other Prometheus-supported means. See Create Prometheus Graphs with Grafana - Example.

Default Service Settings

Table 1. Prometheus Default Settings
Setting / Default Value or Description
  • Frequency interval to collect and store metrics (also known as scrape and store): every 60 seconds (1)
  • Collection endpoint: /metrics (1)
  • Default collection app: collect-metrics
  • Data storage retention time: 10 days
  (1) You can create a customized ServiceMonitor YAML snippet to change this default setting. See Enable Prometheus App Monitoring and Metric Collection - Examples.

Enable Prometheus App Monitoring and Metric Collection - Examples

Monitor an Application with Prometheus - Default Endpoint

This sample app YAML specifies an app named metricsmatter-sample-app and creates one instance of this containerized app ( replicas: 1 ) from the managed Amazon Elastic Container Registry.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricsmatter-sample-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: metricsmatter-sample-app 
  template:
    metadata:
      name: metricsmatter-sample-app
      labels:
        app: metricsmatter-sample-app
    spec:
      containers:
        - name: metricsmatter-sample-app
          imagePullPolicy: Always
          image: 1234567890.dkr.ecr.us-west-2.amazonaws.com/app-folder/metricmatter_sample_app:latest

Next, in the same application YAML file, create a Service snippet. Add the default collect-metrics app label to the Service object. When you add app: collect-metrics , Prometheus scrapes the default /metrics endpoint every 60 seconds, with metrics exposed on port 8010.

---
apiVersion: v1
kind: Service
metadata:
  name: metricsmatter-sample-service
  labels:
    app: collect-metrics
spec:
  selector:
    app: metricsmatter-sample-app
  ports:
    - name: web
      protocol: TCP
      port: 8010

Monitor an Application with Prometheus - Custom Endpoint and Interval Example

Add a ServiceMonitor snippet to the app YAML above to customize the endpoint to scrape and change the interval to collect and store metrics. Make sure you include the Deployment and Service snippets.

Here, change the endpoint to /othermetrics and the collection interval to 15 seconds ( 15s ).

Prometheus discovers all ServiceMonitors in a given namespace (that is, each project app) where it is installed.

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
 name: metricsmatter-sample-app
 labels:
   app: collect-metrics
spec:
 selector:
   matchLabels:
     app: collect-metrics
 endpoints:
   - path: /othermetrics
     interval: 15s
     port: 8010

Use Environment Variables as the Endpoint

You can also use endpoint environment variables in an application template for the service and AlertManager.

  • {{.Services.Prometheus.Endpoint}} defines the service endpoint.
  • {{.Services.AlertManager.Endpoint}} defines a custom Alert Manager endpoint.

Configure Service Domain Environment Variables describes how to use these environment variables.
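
For example, a minimal Deployment sketch that passes both endpoints to an app as environment variables might look like the following. The variable names PROMETHEUS_ENDPOINT and ALERTMANAGER_ENDPOINT, the app name, and the image reference are illustrative assumptions; the template variables themselves are the ones described above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-consumer                   # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: metrics-consumer
  template:
    metadata:
      labels:
        app: metrics-consumer
    spec:
      containers:
      - name: metrics-consumer
        image: some.container.registry.com/metrics-consumer:latest   # illustrative image
        env:
        - name: PROMETHEUS_ENDPOINT        # illustrative variable name
          value: {{.Services.Prometheus.Endpoint}}
        - name: ALERTMANAGER_ENDPOINT      # illustrative variable name
          value: {{.Services.AlertManager.Endpoint}}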

Viewing Prometheus Service Status

The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, Prometheus endpoints, and alerts.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Click Prometheus to view the default Deployments page.
    1. Service Domain and Status. Service Domains where the Prometheus service is deployed and status.
      • PROVISIONED. The service has been successfully installed and enabled.
      • PROVISIONING. The service initialization and provisioning state is in progress.
      • UNPROVISIONED. The service has been successfully disabled (uninstalled).
      • FAILED. The service provisioning has failed.
    2. Prometheus Endpoints. Endpoints that the service is scraping, by ID.
    3. Alert Manager. Alert Manager shows alerts associated with the Prometheus endpoints, by ID.
    4. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  4. To see all alerts associated with Prometheus for this project, click Alerts .

Create Prometheus Graphs with Grafana - Example

This example shows how you can set up a Prometheus metrics dashboard with Grafana.

This topic provides examples to help you expose Prometheus endpoints to Grafana, an open-source analytics and monitoring visualization application. You can then view scraped Prometheus metrics graphically.

Define Prometheus as the Data Source for Grafana

The first ConfigMap YAML snippet example uses the environment variable {{.Services.Prometheus.Endpoint}} to define the service endpoint. If this YAML snippet is part of an application template created by an infra admin, a project user can then specify these per-Service Domain variables in their application.

The second snippet provides configuration information for the Grafana server web page. The host name in this example is woodkraft2.ntnxdomain.com .


apiVersion: v1
kind: ConfigMap
metadata:
 name: grafana-datasources
data:
 prometheus.yaml: |-
   {
       "apiVersion": 1,
       "datasources": [
           {
              "access":"proxy",
               "editable": true,
               "name": "prometheus",
               "orgId": 1,
               "type": "prometheus",
               "url": "{{.Services.Prometheus.Endpoint}}",
               "version": 1
           }
       ]
   }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini
data:
  grafana.ini: |
    [server]
    domain = woodkraft2.ntnxdomain.com
    root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana
    serve_from_sub_path = true
---

Specify Deployment Information on the Service Domain

This YAML snippet provides a standard deployment specification for Grafana.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest
          ports:
            - name: grafana
              containerPort: 3000
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "1Gi"
              cpu: "500m"
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-storage
            - mountPath: /etc/grafana/provisioning/datasources
              name: grafana-datasources
              readOnly: false
            - name: grafana-ini
              mountPath: "/etc/grafana/grafana.ini"
              subPath: grafana.ini
      volumes:
        - name: grafana-storage
          emptyDir: {}
        - name: grafana-datasources
          configMap:
            defaultMode: 420
            name: grafana-datasources
        - name: grafana-ini
          configMap:
            defaultMode: 420
            name: grafana-ini
---

Define the Grafana Service and Use an Ingress Controller

Define the Grafana Service object to use port 3000 and an Ingress controller to manage access to the service (through woodkraft2.ntnxdomain.com).


apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grafana
  labels:
    app: grafana
spec:
  rules:
    - host: woodkraft2.ntnxdomain.com
      http:
        paths:
        - path: /grafana
          backend:
            serviceName: grafana
            servicePort: 3000

Kubernetes Apps

You can create intelligent applications to run on the Service Domain infrastructure or the cloud where you have pushed collected data. You can also implement an application YAML file as a template and customize it by passing existing Categories associated with a Service Domain to it.

You need to create a project with at least one user to create an app.

You can undeploy and deploy any applications that are running on Service Domains or in the cloud. See Deploying and Undeploying a Kubernetes Application.

Privileged Kubernetes Apps

Important: The privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.

For Kubernetes apps running as privileged, you might have to specify the Kubernetes namespace where the application is deployed. You can do this by using the {{ .Namespace }} variable, which you can define in the app YAML template file.

In this example, the resource kind of ClusterRoleBinding specifies the {{ .Namespace }} variable as the namespace where the subject ServiceAccount is deployed. Because all app resources are deployed in the project namespace, also specify the subject ServiceAccount name (here, name: my-sa ).

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role
subjects:
  - kind: ServiceAccount
    name: my-sa
    namespace: {{ .Namespace }}

Creating an Application

Create a Kubernetes application that you can associate with a project.

Before you begin

To complete this task, log on to the cloud management console.

About this task

  • If your app requires a service, make sure you enable it in the associated project.
  • An application YAML file can also be a template that you customize by passing existing Categories associated with a Service Domain to it.
  • See also Configure Service Domain Environment Variables to set and use environment variables in your app.
  • Nutanix recommends that you do not reuse application names after creating and then deleting (removing) an application. If you subsequently create an application with the same name as a deleted application, your application might re-use stale data from the deleted application.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps > Create Kubernetes App .
  3. Name your application. Up to 200 alphanumeric characters are allowed.
  4. Description . Provide a relevant description for your application.
  5. Select one or more Service Domains, individually or by category.
    How your Service Domain was added in your project determines what you can select in this step. See Creating a Project.
    1. For Select Individually , click Add Service Domains to select one or more Service Domains.
    2. For Select By Category , select to choose and apply one or more categories associated with a Service Domain.
      Click the plus sign ( + ) to add more categories.
  6. Click Next .
  7. To use an application YAML file or template, select YAML based configuration , then click Choose File to navigate to a YAML file or template.
    After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
  8. To define an application with a Helm chart, select Upload a Helm Chart . See Helm Chart Support and Requirements.
    1. Helm Chart File . Browse to a Helm Chart package file.
    2. Values Override File (Optional) . You also can optionally browse to a values YAML file to override the values you specified in the Helm Chart package.
    3. View Generated YAML File . After uploading these files, you can view the generated YAML configuration by clicking Show YAML .
  9. Click Create .
    If your app requires a service, make sure you enable it in the associated project.
    The Kubernetes Apps page lists the application you just created as well as any existing applications. Apps start automatically after creation.

Editing an Application

Update an existing Kubernetes application.

Before you begin

To complete this task, log on to the cloud management console.

About this task

  • An application YAML file can also be a template that you customize by passing existing Categories associated with a Service Domain to it.
  • To set and use environment variables in your app, see Configure Service Domain Environment Variables.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then click an application in the list.
    The application dashboard is displayed along with an Edit button.
  3. Click Edit .
  4. Name . Update the application name. Up to 200 alphanumeric characters are allowed.
  5. Description . Provide a relevant description for your application.
  6. You cannot change the application's Project .
  7. Update the associated Service Domains.
    How your Service Domain was added in your project determines what you can select in this step. See Creating a Project.
    1. If Select Individually is selected:
      • Click X to remove a Service Domain in the list.
      • Click Edit to select or deselect one or more Service Domains.
    2. If Select By Category is selected, add or remove one or more categories associated with a Service Domain.
      • Select a category and value.
      • Click X to remove a category.
      • Click the plus sign ( + ) to add more categories and values.
  8. Click Next .
  9. To use an application YAML file or template, select YAML based configuration .
    1. Click Choose File to navigate to a YAML file or template.
    2. After uploading the file, you can edit the file contents directly on the Yaml Configuration page.
    You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
  10. To define an application with a Helm chart, select Upload a Helm Chart . See Helm Chart Support and Requirements.
    1. Helm Chart File . Browse to a Helm Chart package file.
    2. Values Override File (Optional) . You also can optionally browse to a YAML file to override the values you specified in the Helm Chart package. Use this option to update your application without having to re-upload a new chart.
    3. View Generated YAML File . After uploading these files, you can view the generated YAML configuration by clicking Show YAML .
  11. Click Update .
    The Kubernetes Apps page lists the application you just updated as well as any existing applications.

Removing an Application

Delete an existing Kubernetes application.

About this task

  • To complete this task, log on to the cloud management console.
  • Nutanix recommends that you do not reuse application names after creating and then deleting (removing) an application. If you subsequently create an application with the same name as a deleted application, your application might re-use stale data from the deleted application.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then select an application in the list.
  3. Click Actions > Remove , then click Delete to confirm.
    The Kubernetes Apps page lists any remaining applications.

Deploying and Undeploying a Kubernetes Application

You can undeploy and deploy any applications that are running on Service Domains or in the cloud. You can choose the Service Domains where you want the app to deploy or undeploy. Select the table or tile view on this page by clicking one of the view icons.

Before you begin

Undeploying a Kubernetes app deletes all config objects directly created for the app, including PersistentVolumeClaim data. Your app can create a PersistentVolumeClaim indirectly through StatefulSets. The following points describe scenarios when data is deleted and when data is preserved after you undeploy an app.

  • If your application specifies an explicit PersistentVolumeClaim object, when the app is undeployed, data stored in PersistentVolumeClaim is deleted. This data is not available if you then deploy the app.
  • If your application specifies a VolumeClaimTemplates object, when the app is undeployed, data stored in PersistentVolumeClaim persists. This data is available for reuse if you later deploy the app. If you plan to redeploy apps, Nutanix recommends using VolumeClaimTemplates to implement StatefulSets with stable storage (see the sketch after this list).
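
The following is a minimal StatefulSet sketch that uses volumeClaimTemplates so that claim data persists across undeploy and deploy; the app name, image reference, mount path, and storage size are illustrative assumptions.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-stateful-app                    # illustrative name
spec:
  serviceName: my-stateful-app
  replicas: 1
  selector:
    matchLabels:
      app: my-stateful-app
  template:
    metadata:
      labels:
        app: my-stateful-app
    spec:
      containers:
      - name: my-stateful-app
        image: some.container.registry.com/my-stateful-app:latest   # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/my-stateful-app   # illustrative mount path
  volumeClaimTemplates:                    # claims created from this template persist after undeploy
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi                     # illustrative size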

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then select an application in the list.
  3. To undeploy a running application, select an application, then click Actions > Undeploy .
    1. To undeploy every instance of the app running on all Service Domains, select Undeploy All , then click Undeploy .
    2. To undeploy the app running on specific Service Domains, select Undeploy Selected , select one or more Service Domains, then click Undeploy .
    3. Click Undeploy App to confirm.
  4. To deploy an undeployed application, select the application, then click Actions > Deploy .
    1. To deploy the app on all Service Domains, select Deploy All , then click Deploy .
    2. To deploy the app on specific Service Domains, select Deploy Selected , select one or more Service Domains, then click Deploy .

Helm Chart Support and Requirements

Helm Chart Format
When you create a Kubernetes app in the cloud management console, you can upload a Helm chart package in a gzipped TAR file (.TGZ format) that describes your app. For example, it can include:
  • Helm chart definition YAML file
  • App YAML template files that define your application (deployment, service, and so on). See also Privileged Kubernetes Apps
  • Values YAML file where you declare variables to pass to your app templates. For example, you can specify Ingress controller annotations, cloud repository details, and other required settings (see the sketch at the end of this section)
Supported Helm Version
Karbon Platform Services supports Helm version 3.
Kubernetes App Support
Karbon Platform Services supports the same Kubernetes resources in Helm charts as in application YAML files. For example, daemonsets, deployment, secrets, services, statefulsets, and so on are supported when defined in a Helm chart.
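
As a rough illustration of the package layout described above, a minimal chart might contain a Chart.yaml and a values.yaml similar to the following sketch; the chart name, image repository, tag, and annotation values are assumptions for the example, not required values.

# Chart.yaml (Helm chart definition file)
apiVersion: v2                             # chart API version for Helm 3
name: my-app                               # illustrative chart name
description: Sample application chart
version: 0.1.0
appVersion: "1.7.9"

# values.yaml (variables passed to the app template files)
image:
  repository: some.container.registry.com/myapp   # illustrative registry path
  tag: "1.7.9"
service:
  port: 80
ingress:
  annotations:
    sherlock.nutanix.com/http-ingress-path: /notls   # Ingress controller annotation described earlier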

Data Pipelines

The Data Pipelines page enables you to create and view data pipelines, and also see any alerts associated with existing pipelines.

A data pipeline is a path for data that includes:

  • Input . An existing data source or real-time data stream.
  • Transformation . Code block such as a script defined in a Function to process or transform input data.
  • Output . A destination for your data. Publish data to the Service Domain, cloud, or cloud data service (such as AWS Simple Queue Service).

It also enables you to process and transform captured data for further consumption or processing.

To create a data pipeline, you must have already created or defined at least one of the following:

  • Project
  • Category
  • Data source
  • Function
  • Cloud profile. Required for cloud data destinations or Service Domain endpoints.

Note the following when you work with data pipelines:

  • When you create, clone, or edit a function, you can define one or more parameters. When you create a data pipeline, you define the values for the parameters when you specify the function in the pipeline.
  • Data pipelines can share functions, but you can specify unique parameter values for the function in each data pipeline.
  • You can also stop and start a data pipeline. See Stopping and Starting a Data Pipeline.

Creating a Data Pipeline

Create a data pipeline, with a data source or real-time data source as input and infrastructure or external cloud as the output.

Before you begin

You must have already created at least one of each: project, data source, function, and category. Also, a cloud profile is required for cloud data destinations or Service Domain endpoints. See also Naming Guidelines.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Click Create .
  5. Data Pipeline Name . Name your data pipeline. Up to 63 lowercase alphanumeric characters and the dash character (-) are allowed.

Input - Add a Data Source

Procedure

Click Add Data Source , then select Data Source .
  1. Select a data source Category and one related Value .
  2. [Optional] Click Add new to add another Category and Value . Continue adding as many as you have defined.
  3. Click the trashcan to delete a data source category and value.

Input - Add a Real-Time Data Stream

Procedure

Click Add Data Source , then select Real-Time Data Stream .
  1. Select an existing pipeline as a Real-Time Data Stream .

Transformation - Add a Function

Procedure

  1. Click Add Function and select an existing Function.
  2. Define a data Sampling Interval by selecting Enable .
    1. Enter an interval value.
    2. Select an interval: Millisecond , Second , Minute , Hour , or Day .
  3. Enter a value for any parameters defined in the function.
    If no parameters are defined, this field's name is shown as empty parentheses: ( ).
  4. [Optional] Click the plus sign ( + ) to add another function.
    This step is useful if you want to chain functions together, feeding transformed data from one function to another, and so on.
  5. You can also create a function here by clicking Upload , which displays the Create Function page.
    1. Name your function. Up to 200 alphanumeric characters are allowed.
    2. Description . Provide a relevant description for the function.
    3. Language . Select the scripting language for the function: golang, python, node.
    4. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
    5. Click Next .
    6. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    7. Click Add Parameter to define and use any parameters for the data transformation in the function.
    8. Click Create .

Output - Add a Destination

Procedure

  1. Click Add Destination to specify where the transformed data is to be output: Publish to Service Domain or Publish to External Cloud .
  2. If you select Destination > Service Domain :
    1. Endpoint Type . Select Kafka , MQTT , Realtime Data Stream , or Data Interface .
    2. Data Interface Name . If shown, name your data interface.
    3. Endpoint Name . If shown, name your endpoint.
  3. If you select Destination > External Cloud and Cloud Type as AWS , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select an AWS region.
  4. If you select Destination > External Cloud and Cloud Type as Azure , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing stream type, such as Blob Storage .
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
  5. If you select Destination > External Cloud and Cloud Type as GCP , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select a GCP region.
  6. Click Create .

Editing a Data Pipeline

Update a data pipeline, including data source, function, and output destination. See also Naming Guidelines.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Select a data pipeline in the list, then click Actions > Edit .
    You cannot update the data pipeline name.

Input - Edit a Data Source

Procedure

You can do any of the following:
  1. Select a different Data Source to change the data pipeline Input source (or keep the existing one).
  2. Select a different value for an existing Category.
  3. Click the trashcan to delete a data source category and value.
  4. Click Add new to add another Category and Value . Continue adding as many as you have defined.
  5. Select a different category.

Input - Edit a Real-Time Data Stream

Procedure

Select a different Realtime Data Stream to change the data pipeline Input source.

Transformation - Edit a Function

About this task

You can do any of the following tasks.

Procedure

  1. Select a different Function .
  2. Add or update the Sampling Interval for any new or existing function.
    1. If not selected, create a Sampling Interval by selecting Enable .
    2. Enter an interval value.
    3. Select an interval: Millisecond , Second , Minute , Hour , or Day .
  3. Enter a value for any parameters defined in the function.
    If no parameters are defined, this field's name is shown as empty parentheses: ( ).
  4. [Optional] Click the plus sign ( + ) to add another function.
    This step is useful if you want to chain functions together, feeding transformed data from one function to another, and so on.
  5. You can also create a function here by clicking Upload , which displays the Add Function page.
    1. Name your function. Up to 200 alphanumeric characters are allowed.
    2. Description . Provide a relevant description for the function.
    3. Language . Select the scripting language for the function: golang, python, node.
    4. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
    5. Click Next .
    6. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    7. Click Add Parameter to define and use any parameters for the data transformation in the function.
    8. Click Create .
  6. Click Create .

Output - Edit a Destination

About this task

You can do any of the following tasks.

Procedure

  1. Select an Infrastructure or External Cloud destination to specify where the transformed data is to be output.
  2. If you select Destination > Infrastructure :
    1. Endpoint Type . Select MQTT , Realtime Data Stream , Kafka , or Data Interface .
    2. Data Interface Name . If shown, name your data interface.
    3. Endpoint Name . If shown, name your endpoint.
  3. If you select Destination > External Cloud and Cloud Type as AWS , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select an AWS region.
  4. If you select Destination > External Cloud and Cloud Type as GCP , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select a GCP region.
  5. If you select Destination > External Cloud and Cloud Type as Azure , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing stream type, such as Blob Storage .
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
  6. Click Update .

Removing a Data Pipeline

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Select a data pipeline, click Actions > Remove , then click Delete again to confirm.
    The Data Pipelines page lists any remaining data pipelines.

Stopping and Starting a Data Pipeline

About this task

  • You can stop and start a data pipeline.
  • You can select the table or tile view on this page by clicking one of the view icons.
    Figure. View Icons. The view icons on the page let you switch between table and tile view.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list, then click Functions and Data Pipelines .
  3. To stop an active data pipeline, select a data pipeline, then click Actions > Stop .
    You can also click a data pipeline, then click Stop or Start , depending on the data pipeline state.
    Stop stops any data from being transformed or processed, and terminates any data transfer to your data destination.
  4. To start the data pipeline, select Start (after stopping a data pipeline) from the Actions drop-down menu.

Functions

A function is code used to perform one or more tasks. Script languages include Python, Golang, and Node.js. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like Tensorflow.

An infrastructure administrator or project user can create a function, and later can edit or clone it. You cannot edit a function that is used by an existing data pipeline. In this case, you can clone it to make an editable copy.

  • You can use ML models in your function code. Use the ML Model API to call the model and version.
  • When you create, clone, or edit a function, you can define one or more parameters.
  • When you create a data pipeline, you define the values for the parameters when you specify the function in the pipeline.
  • Data pipelines can share functions, but you can specify unique parameter values for the function in each data pipeline.

Creating a Function

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions > Create .
  4. Name . Name the function. Up to 200 alphanumeric characters are allowed.
  5. Description . Provide a relevant description for your function.
  6. Language . Select the scripting language for the function: golang, python, node.
  7. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  8. Click Next .
  9. Add function code.
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  10. Click Create .

Editing a Function

Edit an existing function. To complete this task, log on to the cloud management console.

About this task

Other than the name and description, you cannot edit a function that is in use by an existing data pipeline. In this case, you can clone a function to duplicate it. See Cloning a Function.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions .
  4. Select a function in the list, then click Edit .
  5. Name . Update the function name. Up to 200 alphanumeric characters are allowed.
  6. Description . Provide a relevant description for your function.
  7. You cannot change the function's Project .
  8. Language . Select the scripting language for the function: golang, python, node.
  9. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  10. Click Next .
  11. If you want to choose a different file or edit the existing function code:
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  12. Click Update .

Cloning a Function

Clone an existing function. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions .
  4. Select a function in the list, then click Clone .
  5. Name . Update the function name. Up to 200 alphanumeric characters are allowed.
  6. Description . Provide a relevant description for your function.
  7. You cannot change the function's Project .
  8. Language . Select the scripting language for the function: golang, python, node.
  9. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  10. Click Next .
  11. If you want to choose a different file or edit the existing function code:
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  12. Click Next to update a data pipeline with this function, if desired.
  13. Click Create , then click Confirm in response to the data pipeline warning.
    An updated function can cause data pipelines to break (that is, stop collecting data correctly).

Removing a Function

About this task

To complete this task, log on to the cloud management console. You cannot remove a function that is associated with a data pipeline or realtime data stream.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions .
  4. Select a function, click Remove , then click Delete again to confirm.
    The Functions page lists any remaining functions.

AI Inferencing and ML Model Management

You can create machine learning (ML) models to enable AI inferencing for your projects. The ML Model feature provides a common interface for functions (that is, scripts) or applications to use the ML model Tensorflow runtime environment on the Service Domain.

The Karbon Platform Services Release Notes list currently supported ML model types.

An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.

You can add multiple models and model versions to a single ML Model instance that you create. In this scenario, multiple client projects can access any model configured in the single ML model instance.

Before You Begin
  • To allow access by the AI Inferencing API, ensure that the infrastructure admin selects Use GPU for AI Inferencing in the associated project Service Domain Advanced Settings.
  • Ensure that the infrastructure admin has enabled the AI Inferencing service for your project.
ML Models Guidelines and Limitations
  • The maximum ML model zip file size you can upload is 1 GB.
  • Each ML model zip file must contain a binary file and metadata file.
  • You can use ML models in your function code. Use the ML Model API to call the model and version.
ML Model Validation
  • The Karbon Platform Services platform validates any ML model that you upload. It checks the following:
    • The uploaded model is a ZIP file, with the correct folder and file structure in the ZIP file.
    • The ML model binaries in the ZIP file, including TensorFlow model binaries, are valid and in the required format.

Adding an ML Model

About this task

To complete this task, log on to the cloud management console.

Before you begin

  • An infrastructure admin can allow access to the AI Inferencing API by selecting Use GPU for AI Inferencing in the associated project Service Domain Advanced Settings.
  • An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing > Add Model .
  3. Name . Name the ML model. Up to 200 alphanumeric characters are allowed.
  4. Description . Provide a relevant description.
  5. Select a specific project to make your ML model available to that project.
  6. Select a Framework Type from the drop-down menu. For example, Tensorflow 2.1.0 .
  7. Click Add Version in the List of Model Versions panel.
    1. Type an ML Model Version number, such as 1, 2, 3 and so on.
    2. Click Choose File and navigate to a model ZIP file stored on your local machine.
      • The maximum zip file size you can upload is 1 GB.
      • The console validates the file as it uploads it.
      • Uploading a file is a background task and might take a few moments depending on file size. Do not close your browser.
    3. Type any relevant Version Information , then click Upload .
  8. [Optional] Click Add new to add another model version to the model.
    A single ML model can consist of multiple model versions.
  9. Click Done .
    The ML Model page displays the added ML model. It might also show the ML model as continuing to upload.

What to do next

Check ML model status in the ML Model page.

Editing an ML Model

About this task

  • To complete this task, log on to the cloud management console.
  • You cannot change the name or framework type associated with an ML model.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing , select a model in the list, and click Edit .
  3. Description . Update an existing description.
  4. Edit or Remove an existing model version from the List of Model Versions.
  5. [Optional] Click Add new to add another model version.
    A single ML model can consist of multiple model versions.
    1. Type an ML Model Version number, such as 1, 2, 3 and so on.
    2. Click Choose File and navigate to a model ZIP file stored on your local machine.
      • The maximum zip file size you can upload is 1 GB.
      • The console validates the file as it uploads it.
      • Uploading a file is a background task and might take a few moments depending on file size. Do not close your browser.
    3. Type any relevant Version Information , then click Upload .
  6. Click Done .
    The ML Model page displays the updated ML model. It might also show the ML model as continuing to upload.

What to do next

Check ML model status in the ML Model page.

Removing an ML Model

How to delete an ML model.

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing and select a model in the list.
  3. Click Remove , then click Delete again to confirm.
    The ML Models page lists any remaining ML models.

What to do next

Check ML model status in the ML Model page.

Viewing ML Model Status

The ML Models page in the Karbon Platform Services management console shows version and activity status for all models.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing to show model information like Version, File Size, and Framework Type.
  3. To see where a model is deployed, click the model name.
    This page shows a list of associated Service Domains.

Runtime Environments

A runtime environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.

Karbon Platform Services includes standard runtime environments such as the following. These runtimes are read-only and cannot be edited, updated, or deleted by users. They are available to all projects, functions, and associated container registries.

  • Golang
  • NodeJS
  • Python 2
  • Python 3
  • Tensorflow Python
You can add your own custom runtime environment for use by all or specific projects, functions, and container registries.
Note: Custom Golang runtime environments are not supported. Use the provided standard Golang runtime environment in this case.

Creating a Runtime Environment

How to create a user-added runtime environment for use with your project.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Click Create .
  5. Name . Name the runtime environment. Up to 75 alphanumeric characters are allowed.
  6. Description . Provide a relevant description.
  7. Container Registry Profile . Click Add Profile to create a new container registry profile or use an existing profile. Do one of the following sets of steps.
    To add a new profile:
    1. Select Add New .
    2. Name your profile.
    3. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    4. Container Registry Host Address . Provide a public or private registry host address. The host address can be in the format host.domain:port_number/repository_name
      For example, https://index.docker.io/v1/ or registry-1.docker.io/distribution/registry:2.1
    5. Username . Enter the user name credential for the profile.
    6. Password . Enter the password credential for the profile. Click Show to display the password as you type.
    7. Email Address . Add an email address in this field.
    To use an existing cloud profile:
    1. Select Use Existing cloud profile and select an existing profile from Cloud Profile .
    2. Name your profile.
    3. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    4. Server . Provide a server URL to the container registry in the format used by your cloud provider.
      For example, an Amazon AWS Elastic Container Registry (ECR) URL might be: https://aws_account_id.dkr.ecr.region.amazonaws.com
  8. Click Done .
  9. Container Image Path . Provide a container image URL. For example, https://aws_account_id.dkr.ecr.region.amazonaws.com/image:imagetag
  10. Languages . Select the scripting language for the runtime: golang, python, node.
  11. [Optional] Click Add Dockerfile to choose and upload a Dockerfile for the container image, then click Done .
    A Dockerfile typically includes instructions used to build images automatically or run commands.
  12. Click Create .

Editing a Custom Runtime Environment

How to edit a user-added runtime environment from the cloud management console. You cannot edit the included read-only runtime environments.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Select a runtime environment in the list, then click Edit .
  5. Name . Name the runtime environment. Up to 75 alphanumeric characters are allowed.
  6. Description . Provide a relevant description.
  7. Container Registry Profile . Select an existing container registry profile.
  8. Container Image Path . Provide a container image URL. For example, https://aws_account_id.dkr.ecr.region.amazonaws.com/image:imagetag
  9. Languages . Select the scripting language for the runtime: golang, python, node.
  10. [Optional] Remove or Edit any existing Dockerfile.
    After editing a file, click Done .
  11. Click Add Dockerfile to choose a Dockerfile for the container image.
  12. Click Update .

Removing a Runtime Environment

How to remove a user-added runtime environment. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Select a runtime environment, click Remove , then click Delete again to confirm.
    The Runtime Environments page lists any remaining runtime environments.

Audit Trail and Log Management

Logging provides a consolidated landing page enabling you to collect, forward, and manage logs from selected Service Domains.

From the Logging page ( System Logs > Logging ) or the summary page for a specific project, you can:

  • Run Log Collector on all or selected Service Domains (admins only)
  • See created log bundles, which you can then download or delete
  • Create, edit, and delete log forwarding policies to help make collection more granular and then forward those logs to the cloud
  • View alerts
  • View an audit trail of any operations performed by users in the last 30 days

You can also collect logs and stream them to a cloud profile by using the kps command line available from the Nutanix public GitHub repository at https://github.com/nutanix/karbon-platform-services/tree/master/cli. The readme documentation there describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Audit Trail Dashboard

Access the Audit Trail dashboard to view the most recent operations performed by users.

Karbon Platform Services maintains an audit trail of any operations performed by users in the last 30 days and displays them in an Audit Trail dashboard in the cloud management console. To display the dashboard, log on to the console, open the navigation menu, and click Audit Trail .

Click Filters to display operations by Operation Type, User Name, Resource Name, Resource Type, Time Range, and other filter types so you can narrow your search. Filter Operation Type by CREATE, UPDATE, and DELETE actions.

Running Log Collector - Service Domains

Log Collector examines the selected Service Domains and collects logs and configuration information useful for troubleshooting issues and finding out details about any Service Domain.

Procedure

  1. Go to System Logs > Logging from the Dashboard in the navigation sidebar menu.
  2. Click Run Log Collector .
    1. To collect log bundles for every Service Domain, choose Select all Service Domains , then click Select Logs .
    2. To collect log bundles for specific Service Domains, choose them, then click Collect Logs .
      The table displays the collection status.
    • Collecting . Depending on how many Service Domains you have chosen, it can take a few minutes to collect the logs.
    • Success . All logs are collected. You can now Download them or Delete them.
    • Failed . Logs could not be collected. Click Details to show error messages relating to this collection attempt.
  3. After downloading the log bundle, you can delete it from the console.

What to do next

  • The downloaded log bundle is a gzipped TAR file. Unzip and untar it to see the collected logs (for example, by using the short sketch after this list).
  • You can also forward logs to the cloud based on your cloud profile. See Creating and Updating a Log Forwarding Policy.
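
For instance, this is a minimal Python sketch for unpacking the bundle; the file and directory names are placeholders, not names produced by Log Collector.

import tarfile

# Placeholder file name: use the name of the log bundle you downloaded.
with tarfile.open("service-domain-logbundle.tgz", "r:gz") as bundle:
    bundle.extractall("extracted-logs")  # the collected logs land in ./extracted-logs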

Running Log Collector - Kubernetes Apps

Log Collector examines the selected project application and collects logs and configuration information useful for troubleshooting issues and finding out details about an app.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard and navigation sidebar are displayed.
  3. Click Kubernetes Apps , then click an app in the list.
  4. Select the Log Bundles tab, then click Run Log Collector .
    1. To collect log bundles for every Service Domain, choose Select all Service Domains , then click Select Logs .
    2. To collect log bundles for specific Service Domains, choose them, then click Collect Logs .
      The table displays the collection status.
    • Collecting . Depending on how many Service Domains you have chosen, it can take a few minutes to collect the logs.
    • Success . All logs are collected. You can now Download them or Delete them.
    • Failed . Logs could not be collected. Click Details to show error messages relating to this collection attempt.
  5. After downloading the log bundle, you can delete it from the console.

What to do next

  • The downloaded log bundle is a gzipped TAR file. Unzip and untar it to see the collected logs.
  • You can also forward logs to the cloud based on your cloud profile. See Creating and Updating a Log Forwarding Policy.

Creating and Updating a Log Forwarding Policy

Create, edit, and delete log forwarding policies to help make collection more granular and then forward those Service Domain logs to the cloud.

Before you begin

Make sure your infrastructure admin has created a cloud profile first.

Procedure

  1. Go to System Logs > Logging from Dashboard or the summary page for a specific project.
  2. Click Log Forwarding Policy > Create New Policy .
  3. Name . Name your policy.
  4. Select a Service Domain.
    1. Select All . Logs for all Service Domains are forwarded to the cloud destination.
    2. Select Select By Category to choose and apply one or more categories associated with a Service Domain. Click the plus sign ( + ) to add more categories.
    3. Select Select Individually to choose a Service Domain, then click Add Service Domain to select one or more Service Domains.
  5. Select a destination.
    1. Select Profile . Choose an existing cloud profile.
    2. Select Service . Choose a cloud service. For example, choose Cloudwatch to stream logs to Amazon CloudWatch. Other services include Kinesis and Firehose .
    3. Select Region . Enter a valid AWS region name or CloudWatch endpoint fully qualified domain name. For example, us-west-2 or monitoring.us-west-2.amazonaws.com
    4. Select Stream . Enter a log stream name.
    5. Select Groups . (CloudWatch only) Enter a Log group name.
  6. Click Done .
    The dashboard or summary page shows the policy tile.
  7. From the Logging dashboard, you can edit or delete a policy.
    1. Click Edit and change any of the fields, then click Done .
    2. To delete a policy, click Delete , then click Delete again to confirm.

Creating a Log Collector for Log Forwarding - Command Line

Create a log collector for log forwarding by using the kps command line.

Nutanix has released the kps command line on its public GitHub repository. The readme at https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Each sample YAML file defines a log collector. Log collectors can be:

  • Infrastructure-based: collects infrastructure-related (Service Domain) information. For use by infra admins.
  • Project-based: collects project-related information (applications, data pipelines, and so on). For use by project users and infra admins (with assigned projects).

See the most up-to-date sample YAML files and descriptions at https://github.com/nutanix/karbon-platform-services/tree/master/cli.

Example Usage

Create a log collector defined in a YAML file:

user@host$ kps create -f infra-logcollector-cloudwatch.yaml

infra-logcollector-cloudwatch.yaml

This sample infrastructure log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).

To enable AWS CloudWatch log streaming, you must specify awsRegion, cloudwatchStream, and cloudwatchGroup.

kind: logcollector
name: infra-log-name
type: infrastructure
destination: cloudwatch
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
The YAML fields are as follows (field, example value, description):

  • kind ( logcollector ). Specify the resource type.
  • name ( infra-log-name ). Specify the unique log collector name.
  • type ( infrastructure ). Log collector for infrastructure.
  • destination ( cloudwatch ). Cloud destination type.
  • cloudProfile ( cloud-profile-name ). Specify an existing Karbon Platform Services cloud profile.
  • awsRegion (for example, us-west-2 or monitoring.us-west-2.amazonaws.com). Valid AWS region name or CloudWatch endpoint fully qualified domain name.
  • cloudwatchGroup ( cloudwatch-group-name ). Log group name.
  • cloudwatchStream ( cloudwatch-stream-name ). Log stream name.
  • filterSourceCode . Specify the log conversion code.

project-logcollector.yaml

This sample project log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example: AWS CloudWatch ).

kind: logcollector
name: project-log-name
type: project
project: project-name
destination: cloud-destination type
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""
The YAML fields are as follows (field, example value, description):

  • kind ( logcollector ). Specify the resource type.
  • name ( project-log-name ). Specify the unique log collector name.
  • type ( project ). Log collector for a specific project.
  • project ( project-name ). Specify the project name.
  • destination ( cloud-destination type ). Cloud destination type, such as CloudWatch.
  • cloudProfile ( cloud-profile-name ). Specify an existing Karbon Platform Services cloud profile.
  • awsRegion (for example, us-west-2 or monitoring.us-west-2.amazonaws.com). Valid AWS region name or CloudWatch endpoint fully qualified domain name.
  • cloudwatchGroup ( cloudwatch-group-name ). Log group name.
  • cloudwatchStream ( cloudwatch-stream-name ). Log stream name.
  • filterSourceCode . Specify the log conversion code.

Stream Real-Time Application and Pipeline Logs

Real-Time Log Monitoring, built into Karbon Platform Services, lets you view application and data pipeline log messages securely as they occur.

Note: Infrastructure administrators can stream logs if they have been added to a project.

Viewing the most recent log messages as they occur helps you see and troubleshoot application or data pipeline operations. Messages stream securely over an encrypted channel and are viewable only by authenticated clients (such as an existing user logged on to the Karbon Platform Services cloud platform).

The cloud management console shows the most recent log messages, up to 2 MB. To get the full logs, collect and then download the log bundles by Running Log Collector - Service Domains.

Displaying Real-Time Logs

View the most recent real-time logs for applications and data pipelines.

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Go to the Deployments page for an application or data pipeline.
    1. From the home menu, click Projects , then click a project.
    2. Click Kubernetes Apps or Functions and Data Pipelines .
    3. Click an application or a data pipeline name in the table, then click Deployments .
      A table shows each Service Domain deploying the application or data pipeline.
  2. Select a Service Domain, then click View Real-time Logs .
    The window displays streaming log messages and shows the most recent 2 MB of log messages from a container associated with your data pipeline function or application, from the Service Domain.

What to do next

Real-Time Log Monitoring Console describes what you see in the terminal-style display.

Real-Time Log Monitoring Console

The Karbon Platform Services real-time log monitoring console is a terminal-style display that streams log entries as they occur.

After you do the steps in Displaying Real-Time Logs, a terminal-style window is displayed in one or more tabs. Each tab is streaming log messages and shows the most recent 2 MB of log messages from a container associated with your data pipeline function or application.

Your application or data pipeline function generates the log messages. That is, the console shows log messages that you have written into your application or function.

Figure. Real-Time Log Monitoring Console: one or more tabs with a terminal-style display show messages generated by your application or function.

No Logs Message

If your Karbon Platform Services Service Domain is connected and your application or function is not logging anything, the console might show a No Logs message. In this case, the message means that the application or function is idle and not generating any log messages.

Errors

You might see one or more error messages in the following cases. As a result, Real-Time Log Monitoring cannot retrieve any logs.

  • If your application, function, or other resource fails
  • Network connectivity fails or is highly intermittent between the Karbon Platform Services Service Domain or your browser and karbon.nutanix.com

Tips for Real-Time Log Monitoring for Applications

  • The first tab shows the first container associated with the application running on the Karbon Platform Services Service Domain. The console shows one tab for each container associated with the application, including a tab for each container replicated in the same application.
  • Each tab displays the application and container name as ( app_name : container_id ) and might be truncated. To see the full name on the tab, hover over it.
  • Reordering tabs is not available. Unlike your usual tabbed browser experience, you cannot reorder or move the tabs.

Tips for Real-Time Log Monitoring for Functions in Data Pipelines

  • The first tab shows the first function deployed in the data pipeline running on the Service Domain. The console shows one tab for each function deployed in the data pipeline.
  • Reordering tabs is not available. Unlike your usual tabbed browser experience, you cannot reorder or move the tabs. For data pipelines, the displayed tab order is the order of the functions (that is, scripts of transformations) in the data pipeline.
  • Duplicate function names. If you use the same function more than once in the data pipeline, you will see duplicate tabs. That is, one tab for each function instance.
  • Each tab displays the data pipeline and function name as ( data_pipeline_name : function_name ) and might be truncated. To see the full name on the tab, hover over it.

Managing API Keys

API keys simplify authentication when you use the Karbon Platform Services API. You can manage your keys from the Karbon Platform Services management console. This topic also describes API key guidelines.

As a user (infrastructure or project), you can manage up to two API keys through the Karbon Platform Services management console. After logging on to the management console, click your user name in the management console, then click Manage API Keys to create, disable, or delete these keys.

Number of API Keys Per User and Expiration
  • Each user can create and use two API keys.
  • Keys do not have an expiration date.
Deleting and Reusing API Keys
  • You can delete an API key at any time.
  • You cannot use or reuse a key after deleting it.
Creating and Securing the API Keys
  • When you create a key, the Manage API Keys dialog box displays the key.
  • When you create a key, make a copy of the key and secure it. You cannot see the key later. It is not recoverable at any time.
  • Do not share the key with anyone, including other users.

Read more about the Karbon Platform Services API at nutanix.dev. See For Karbon Platform Services Developers for related information and links to resources for Karbon Platform Services developers.

Using API Keys With HTTPS API Requests

Example API request using an API key.

After you create an API key, use it with your Karbon Platform Services API HTTPS Authorization requests. In the request, specify an Authorization header including Bearer and the key you generated and copied from the Karbon Platform Services management console.

For example, here is a Node.js code snippet that sets the header:

var http = require("https");

var options = {
  "method": "GET",
  "hostname": "karbon.nutanix.com",
  "port": null,
  "path": "/v1.0/applications",
  "headers": {
    "authorization": "Bearer API_key"
  }
};
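
If you script against the API in Python instead, a minimal equivalent sketch using the widely available requests package looks like the following. The endpoint matches the snippet above, and API_key is a placeholder for the key you copied from Manage API Keys .

import requests

API_KEY = "API_key"  # placeholder: paste the key copied from Manage API Keys

response = requests.get(
    "https://karbon.nutanix.com/v1.0/applications",
    headers={"Authorization": "Bearer " + API_KEY},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # applications visible to the authenticated user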

Creating API Keys

Create one or more API keys through the Karbon Platform Services management console.

Procedure

  1. After logging on to the management console, click your user name, then click Manage API Keys .
    Figure. Manage API Keys: click your profile name, then click Manage API Keys .

  2. Click Create API Key .
    Manage API Keys shows that the key is created. Keys do not have an expiration date.
  3. Click Copy to Clipboard .
    • Make a copy of the API key and secure it. It is not recoverable at any time.
    • Click View to see the key value. You cannot see the key later.
    • Copy to Clipboard is a mandatory step. The Close button is inactive until you click Copy to Clipboard .
  4. To create another key, click Create API Key and Copy to Clipboard .
    Make a copy of the key and secure it. It is not recoverable at any time. Click View to see the key value. You cannot see the key later.
    Note that Status for each key is Active .
    Figure. Key Creation: the API keys show as Active .

  5. Click Close .

Disabling, Enabling, or Deleting API Keys

Disable, enable, or delete API keys through the Karbon Platform Services management console.

Procedure

  1. After logging on to the management console, click your user name, then click Manage API Keys .
    Figure. Manage API Keys: click your profile name, then click Manage API Keys .

    Manage API Keys shows information about each API key: Create Time, Last Accessed, Status (Active or Disabled).
    Depending on the current state of the API key, you can disable, enable, or delete it.
    Figure. API Keys: the dialog shows API key status as Active or Disabled .

  2. Disable an enabled API key.
    1. Click Disable .
      Note that Status for the API key is now Disabled .
    2. Click Close .
  3. Enable a disabled API key.
    1. Click Enable .
      Note that Status for the API key is now Active .
    2. Click Close .
  4. Delete an API Key.
    1. Click Delete , then click Delete again to confirm.
    2. Click Close .
      You cannot use or reuse a key after deleting it. Any client authenticating with this now-deleted API key will need a new key to authenticate again.

Secure Shell (SSH) Access to Service Domains

Karbon Platform Services provides limited secure shell (SSH) access to your cloud-connected service domain to manage Kubernetes pods.

Note: To enable this feature, contact Nutanix Support so they can set your profile accordingly. Setting Privileged Mode shows how you can check your Service Domain effectiveProfile setting.
Caution: Nutanix recommends limiting your use of this feature in production deployments. Do not enable SSH access in production deployments unless you have exhausted all other troubleshooting or debugging alternatives.

The Karbon Platform Services cloud management console provides limited secure shell (SSH) access to your cloud-connected Service Domain to manage Kubernetes pods. SSH Service Domain access enables you to run Kubernetes kubectl commands to help you with application development, debugging, and pod troubleshooting.

As Karbon Platform Services is secure by design, dynamically generated public/private key pairs with a default expiration of 30 minutes secure your SSH connection. When you start an SSH session from the cloud management console, you automatically log on as user kubeuser .

Infrastructure administrators have SSH access to Service Domains. Project users do not have access.

kubeuser Restrictions

  • Limited to running kubectl CLI commands
  • No superuser or sudo access: No /root file access, unable to issue privileged commands

Opening a Secure Session (SSH) Console to a Service Domain

Access a Service Domain through SSH to manage Kubernetes pods with kubectl CLI commands. This feature is disabled by default. To enable this feature, contact Nutanix Support.

Before you begin

To complete this task, log on to the cloud management console as an infrastructure administrator user.
Caution: Nutanix recommends limiting your use of this feature in production deployments. Do not enable SSH access in production deployments unless you have exhausted all other troubleshooting or debugging alternatives.

Procedure

  1. Click Infrastructure > Service Domains .
  2. In the table, click a Service Domain Name , then click Console to connect to the Service Domain.
    A terminal window connects as the kubeuser user for 30 minutes.
  3. To activate the cursor, click the terminal window.
  4. When the 30-minute session expires, you can choose to stay connected or disconnect.
    • To remain connected, click Reconnect to reestablish the SSH connection.
    • To disconnect an active connection, click Back to Service Domains or another link on this page ( Summary , Nodes , or any others except SSH ).

What to do next

For example commands, see Using kubectl to Manage Service Domain Kubernetes Pods.

Using kubectl to Manage Service Domain Kubernetes Pods

Use kubectl commands to manage Kubernetes pods on the Service Domain.

Example kubectl Commands

  • List all running pods.
    kubeuser@host$ kubectl get pods
  • List all pod services.
    kubeuser@host$ kubectl get services
  • Get logs for a specific pod named pod_name .
    kubeuser@host$ kubectl logs pod_name
  • Run a command for a specific pod named pod_name .
    kubeuser@host$ kubectl exec pod_name command_name
  • Attach a shell to a container container_name in a pod named pod_name .
    kubeuser@host$ kubectl exec -it pod_name --container container_name -- /bin/sh

Alerts

The Alerts page and the Alerts Dashboard panel show any alerts triggered by Karbon Platform Services depending on your role.

  • Infrastructure Administrator . All alerts associated with Projects, Apps & Data, Infrastructure
  • Project User . All alerts associated with Projects and Apps & Data

To see alert details:

  • On the Alerts page, click the alert Description to see details about that alert.
  • From the Alerts Dashboard panel, click View Details , then click the alert Description to see details.

Click Filters to sort the alerts by:

  • Severity . Critical or Warning.
  • Time Range . Select All, Last 1 Hour, Last 1 Day, or Last 1 Week.

An Alert link is available on each Apps & Data and Infrastructure page.

Figure. Sorting and Filtering Alerts: Filters flyout menu to sort alerts by severity.

For Karbon Platform Services Developers

Information and links to resources for Karbon Platform Services developers.

This section contains information about Karbon Platform Services development.

API Reference
Go to https://www.nutanix.dev/api-reference/ for API references, code samples, blogs , and more for Karbon Platform Services and other Nutanix products.
Karbon Platform Services Public GitHub Repository

The Karbon Platform Services public GitHub repository https://github.com/nutanix/karbon-platform-services includes sample application YAML files, instructions describing external client access to services, Karbon Platform Services kps CLI samples, and so on.

Karbon Platform Services Command Line

Nutanix has released the kps command line on its public GitHub repository. The readme at https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Ingress Controller Support - Command Line

Karbon Platform Services supports the Traefik open source router as the default Kubernetes Ingress controller and NGINX (ingress-nginx) as a Kubernetes Ingress controller. For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/ingress.

Kafka Data Service Support - Command Line

Kafka is available as a data service through your Service Domain. Clients can manage, publish, and subscribe to topics by using the native Kafka protocol. Data pipelines can use Kafka as a destination. Applications can also use a Kafka client of choice to access the Kafka data service.

For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/kafka. See also Kafka as a Service.
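
For example, a containerized application in the project could publish to a topic with a standard client such as kafka-python. The following is a minimal Python sketch only; the broker address and topic name are placeholders rather than values defined by the Kafka data service, so substitute the endpoint exposed on your Service Domain.

from kafka import KafkaProducer  # kafka-python package

# Placeholder broker address and topic name; substitute the endpoint exposed by
# the Kafka data service on your Service Domain.
producer = KafkaProducer(bootstrap_servers="kafka:9092")
producer.send("sensor-readings", b'{"temperature": 21.5}')
producer.flush()  # block until the message is delivered
producer.close()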

Free Karbon Platform Services Trial
Sign up for a free Karbon Platform Services Trial at https://www.nutanix.com/products/iot/try.

Set Privileged Mode For Applications

Enable a container application to run with elevated privileges.

Important: The privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.
Note: To enable this feature, contact Nutanix Support so they can set your profile accordingly. In your profile, this feature is disabled by default. After privileged mode is enabled in your profile, you can configure a container application to run with elevated privileges.
Caution: Nutanix recommends that you use this feature sparingly and only when necessary. To keep Karbon Platform Services secure by design, as a general rule, Nutanix also recommends that you run your applications at user level.

For information about installing the kps command line, see For Karbon Platform Services Developers.

Karbon Platform Services enables you to develop an application which requires elevated privileges to run successfully. By using the kps command line, you can set your Service Domain to enable an application running in a container to run in privileged mode.

Setting Privileged Mode

Configure your Service Domain to enable a container application to run with elevated privileges.

Before you begin

Important: The privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.
  • Ensure that you have created an infrastructure administrator or project user with associated resources (Service Domain, project, applications, and so on).
  • To enable this feature, contact Nutanix Support so they can set your profile accordingly. In your profile, this feature is disabled by default. After privileged mode is enabled in your profile, you can configure a container application to run with elevated privileges.
Caution: Nutanix recommends that you use this feature sparingly and only when necessary. To keep Karbon Platform Services secure by design, as a general rule, Nutanix also recommends that you run your applications at user level.

Procedure

  1. Install the kps command line as described at GitHub: https://github.com/nutanix/karbon-platform-services/tree/master/cli
  2. From your terminal window, create a Karbon Platform Services context to associate with an existing Karbon Platform Services user.
    user@host$  kps config create-context context_name --email user_email_address --password password
    1. context_name . A context name to associate with the specified user_email_address and related Karbon Platform Services resources.
    2. user_email_address . Email address of an existing Karbon Platform Services user. This email address can be a My Nutanix account address or local user address.
    3. password . Password for the Karbon Platform Services user.
  3. Verify that the context is created.
    user@host$ kps config get-contexts
    The terminal displays CURRENT CONTEXT as the context_name . It also displays the NAME, URL, TENANT-ID, and Karbon Platform Services email and password.
  4. Get the Service Domain names (displayed in YAML format) where the container application needs to run with elevated privileges.
    user@host$ kps get svcdomain -o yaml
  5. Set privileged mode for a specific Service Domain svc_domain_name .
    user@host$ kps update svcdomain svc_domain_name --set-privileged
    Successfully updated Service Domain: svc_domain_name
  6. Verify privileged mode is set to true .
    user@host$ kps get svcdomain svc_domain_name -o yaml
    kind: edge
    name: svc_domain_name
    
    connected: true
    .
    .
    .
    profile: 
       privileged: true 
       enableSSH: true 
    effectiveProfile: 
       privileged: true
       enableSSH: true
    effectiveProfile privileged set to true indicates that Nutanix Support has enabled this feature. If the setting is false , contact Nutanix Support to enable this feature. In this example, Nutanix has also enabled SSH access to this Service Domain (see Secure Shell (SSH) Access to Service Domains in Karbon Platform Services Administration Guide ).

What to do next

See Using Privileged Mode with an Application (Example).

Using Privileged Mode with an Application (Example)

Important: The privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.

After setting privileged mode on the Service Domain as described in Setting Privileged Mode, elevate the application privilege. This sample enables USB device access for an application running in a container on an elevated Service Domain.

YAML Snippet, Sample Privileged Mode USB Device Access Application

Add a tag similar to the following in the Deployment section in your application YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
    sherlock.nutanix.com/privileged: "true"

Full YAML File, Sample Privileged Mode USB Device Access Application


apiVersion: v1
kind: ConfigMap
metadata:
  name: usb-scripts
data:
  entrypoint.sh: |-
    apk add python3
    apk add libusb
    pip3 install pyusb
    echo Read from USB keyboard
    python3 read-usb-keyboard.py
  read-usb-keyboard.py: |-
    import usb.core
    import usb.util
    import time

    USB_IF      = 0     # Interface
    USB_TIMEOUT = 10000 # Timeout in MS

    USB_VENDOR  = 0x627
    USB_PRODUCT = 0x1

    # Find keyboard
    dev = usb.core.find(idVendor=USB_VENDOR, idProduct=USB_PRODUCT)
    endpoint = dev[0][(0,0)][0]
    try:
      dev.detach_kernel_driver(USB_IF)
    except Exception as err:
      print(err)
    usb.util.claim_interface(dev, USB_IF)
    while True:
      try:
        control = dev.read(endpoint.bEndpointAddress, endpoint.wMaxPacketSize, USB_TIMEOUT)
        print(control)
      except Exception as err:
        print(err)
      time.sleep(0.01)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
        sherlock.nutanix.com/privileged: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usb
  template:
    metadata:
      labels:
        app: usb
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: alpine
        image: alpine
        volumeMounts:
        - name: scripts
          mountPath: /scripts
        command:
        - sh
        - -c 
        - cd /scripts && ./entrypoint.sh
      volumes:
      - name: scripts
        configMap:
          name: usb-scripts
          defaultMode: 0766

Configure Service Domain Environment Variables

Define environment variables for an individual Service Domain. After defining them, any Kubernetes app that specifies that Service Domain can access them as part of a container spec in the app YAML.

As an infrastructure administrator, you can set environment variables and associated values for each Service Domain, which are available for use in Kubernetes apps. For example:

  • You can provide environment variables and values by using a Helm chart you upload when you create an app. See Creating an Application and Helm Chart Support and Requirements.
  • You can also set specific environment variables per Service Domain by updating a Service Domain in the cloud management console or by using the kps update svcdomain command line.
  • See also Privileged Kubernetes Apps.

As a project user, you can then specify these per-Service Domain variables set by the infra admin in your app. If you do not include the variable name in your app YAML file but you pass it as a variable to run in your app, Karbon Platform Services can inject this variable value.

Setting and Clearing Service Domain Environment Variables - Command Line

How to set environment variables for a Service Domain.

Before you begin

You must be an infrastructure admin to set variables and values.

Procedure

  1. Install the kps command line as described at GitHub: https://github.com/nutanix/karbon-platform-services/tree/master/cli
  2. From your terminal window, create a Karbon Platform Services context to associate with an existing infra admin user.
    user@host$  kps config create-context context_name --email user_email_address --password password
    1. context_name . A context name to associate with the specified user_email_address and related Karbon Platform Services resources.
    2. user_email_address . Email address of an existing Karbon Platform Services user. This email address can be a My Nutanix account address or local user address.
    3. password . Password for the Karbon Platform Services user.
  3. Verify that the context is created.
    user@host$ kps config get-contexts
    The terminal displays CURRENT CONTEXT as the context_name . It also displays the NAME, URL, TENANT-ID, and Karbon Platform Services email and password.
  4. Get the Service Domain names (displayed in YAML format) to find the service domain where you set the environment variables.
    user@host$ kps get svcdomain -o yaml
  5. For a Service Domain named my-svc-domain , for example, set the Service Domain environment variable. In this example, set a secret variable named SD_PASSWORD with a value of passwd1234 .
    user@host$ kps update svcdomain my-svc-domain --set-env '{"SD_PASSWORD":"passwd1234"}'
  6. Verify the changes.
    user@host$ kps get svcdomain my-svc-domain -o yaml
    kind: edge
    name: my-svc-domain
    connected: true
    .
    .
    .
    env: '{"SD_PASSWORD": "passwd1234"}'
    The Service Domain is updated. Any infra admin or project user can deploy an app to a Service Domain where you can refer to the secret by using the variable $(SD_PASSWORD) .
  7. You can continue to add environment variables by using the kps update svcdomain my-svc-domain --set-env '{" variable_name ": " variable_value "}' command.
  8. You can also clear one or all variables for Service Domain svc_domain_name .
    1. To clear (unset) all environment variables:
      user@host$ kps update svcdomain svc_domain_name --unset-env
    2. To clear (unset) a specific environment variable:
      user@host$ kps update svcdomain svc_domain_name --unset-env '{"variable_name":"variable_value"}'
  9. To update an app, restart it. See also Deploying and Undeploying a Kubernetes Application.

What to do next

Using Service Domain Environment Variables - Example

Using Service Domain Environment Variables - Example

Example: how to use existing environment variables for a Service Domain in application YAML.

About this task

  • You must be an infrastructure admin to set variables and values. See Setting and Clearing Service Domain Environment Variables - Command Line.
  • In this example, an infra admin has created these environment variables: a Kafka endpoint that requires a secret (authentication) to authorize the Kafka broker.
  • As a project user, you can then specify these per-Service Domain variables set by the infra admin in your app. If you do not include the variable name in your app YAML file but you pass it as a variable to run in your app, Karbon Platform Services can inject this variable value.

Procedure

  1. In your app YAML, add a container snippet similar to the following.
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-deployment
      labels:
        app: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: some.container.registry.com/myapp:1.7.9
            ports:
            - containerPort: 80
            env:
            - name: KAFKA_ENDPOINT
              value: some.kafka.endpoint
            - name: KAFKA_KEY
              value: placeholder
            command:
            - sh
            - -c
            - "exec node index.js $(KAFKA_KEY)"
    
  2. Use this app YAML snippet when you create a Kubernetes app in Karbon Platform Services as described in Kubernetes Apps.

Karbon Platform Services Terminology

Category

Logically grouped Service Domain, data sources, and other items. Applying a category to an entity applies any values and attributes associated with the category to the entity.

Cloud Connector

Built-in Karbon Platform Services platform feature to publish data to a public cloud like Amazon Web Services or Google Cloud Platform. Requires a customer-owned secured public cloud account and configuration in the Karbon Platform Services management console.

Cloud Profile

Cloud provider service account (Amazon Web Services, Google Cloud Platform, and so on) where acquired data is transmitted for further processing.

Container Registry Profile

Credentials and location of the Docker container registry hosted on a cloud provider service account. Can also be an existing cloud profile.

Data Pipeline

Path for data that includes input, processing, and output blocks. Enables you to process and transform captured data for further consumption or processing.

  • Input. An existing data stream or data source, identified according to a Category
  • Transformation. Code block such as a script defined in a Function to process or transform input data.
  • Output. A destination for data. Publish data to the cloud or a cloud service (such as AWS Simple Queue Service), or to the Service Domain at the edge.
Data Service

Data service such as Kafka as a Service or Real-Time Stream Processing as a Service.

Data Source

A collection of sensors, gateways, or other input devices to associate with a node or Service Domain (previously known as an edge). Enables you to manage and monitor sensor integration and connectivity.

A Service Domain deployment minimally consists of a node (also known as an edge device) and a data source. A data source can be at any location (hospital, parking lot, retail store, oil rig, factory floor, and so on) where sensors or other input devices are installed and collecting data. Typical sensors measure physical values (temperature, pressure, audio, and so on) or stream data (for example, an IP-connected video camera).

Functions

Code used to perform one or more tasks. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like Tensorflow.

Infrastructure Administrator

User who creates and administers most aspects of Karbon Platform Services. The infrastructure administrator provisions physical resources, Service Domains, data sources and public cloud profiles. This administrator allocates these resources by creating projects and assigning project users to them.

Project

A collection of infrastructure (Service Domain, data source, project users) plus code and data (Kubernetes apps, data pipelines, functions, run-time environments), created by the infrastructure administrator for use by project users.

Project User

User who views and uses projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and applications. This user has project-specific CRUD permissions: the project user can create, read, update, and delete assigned applications, scripts, data pipelines, and other project users.

Real-time Data Stream
  1. Data Pipeline output endpoint type with a Service Domain as an existing destination.
  2. An existing data pipeline real-time data stream that is used as the input to another data pipeline.
Run-time Environment

A run-time environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.

Service Domain

Intelligent Platform as a Service (PaaS) providing the Karbon Platform Services Service Domain infrastructure (consisting of a full software stack combined with a hardware device). It enables customers to deploy intelligent applications (powered by AI/artificial intelligence) to process and transform data ingested by sensors. This data can be published selectively to public clouds.

Karbon Platform Services Management Console

Browser-based console where you can manage the Karbon Platform Services platform and related infrastructure, depending on your role (infrastructure administrator or project user).

Karbon Platform Services

Software as a Service (SaaS)/Platform as a Service (PaaS) based management platform and cloud IoT services. Includes the Karbon Platform Services management console.

Version

Read article
Karbon Platform Services Project User Guide

Karbon Platform Services Hosted

Last updated: 2022-11-24

Karbon Platform Services Overview

Karbon Platform Services is a Kubernetes-based multicloud platform as a service that enables rapid development and deployment of microservice-based applications. These applications can range from simple stateful containerized applications to complex web-scale applications across any cloud.

In its simplest form, Karbon Platform Services consists of a Service Domain encapsulating a project, application, and services infrastructure, and other supporting resources. It also incorporates project and administrator user access and cloud and data pipelines to help converge edge and cloud data.

With Karbon Platform Services, you can:

  • Quickly build and deploy intelligent applications across a public or private cloud infrastructure.
  • Connect various mobile, highly distributed data sensors (like video cameras, temperature or pressure sensors, streaming devices, and so on) to help collect data.
  • Create intelligent applications using data connectors and machine learning modules (for example, implementing object recognition) to transform the data. An application can be as simple as text processing code or it could be advanced code implementing AI by using popular machine learning frameworks like TensorFlow.
  • Push this data to your Service Domain or the public cloud to be stored or otherwise made available.

This data can be stored at the Service Domain or published to the cloud. You can then create intelligent applications using data connectors and machine learning modules to consume the collected data. These applications can run on the Service Domain or the cloud where you have pushed collected data.

Nutanix provides the Service Domain initially as a VM appliance hosted in an AOS AHV or ESXi cluster. You manage infrastructure, resources, and Karbon Platform Services capabilities in a console accessible through a web browser.

Cloud Management Console and User Experience

As part of initial and ongoing configuration, you can define two user types: an infrastructure administrator and a project user . The cloud management console and user experience help create a more intuitive experience for infrastructure administrators and project users.

  • Admin Console drop-down menu for infra admins and Home Console drop-down menu for project users. Both menus provide a list of all current projects and 1-click access to an individual project. In the navigation sidebar, Projects also provides a navigation path to the overall list of projects.
  • Vertical tabs for most pages and dashboards. For example, clicking Administration > Logging shows the Logging dashboard with related task and Alert tabs. The left navigation menu remains persistent to help eliminate excessive browser refreshes and help overall navigation.
  • Simplified Service Domain Management . In addition to the standard Service Domain list showing all Service Domains, a consolidated dashboard for each Service Domain is available from Infrastructure > Service Domain . Karbon Platform Services also simplifies upgrade operations available from Administration > Upgrades . For convenience, infra admins can choose when to download available versions to all or selected Service Domains, when to upgrade them, and check status for all or individual Service Domains.
  • Updated Workflows for Infrastructure Admins and Project Users . Apart from administration operations such as Service Domain, user management, and resource management, the infra admin and project user workflow is project-centric. A project encapsulates everything required to successfully implement and monitor the platform.
  • Updated GPU/vGPU Configuration Support . If your Service Domain includes access to a GPU/vGPU, you can choose its use case. When you create or update a Service Domain, you can specify exclusive access by any Kubernetes app or data pipeline or by the AI Inferencing API (for example, if you are using ML Models).

Project User Role

A project user can view and use projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and Kubernetes applications.

Karbon Platform Services allows and defines two user types: an infrastructure administrator and a project user. An infrastructure administrator can create both user types.

The project user has project-specific create/read/update/delete (CRUD) permissions: the project user can create, read, update, and delete the following and associate it with an existing project:

  • Kubernetes Apps
  • Functions (scripts)
  • Data pipelines
  • Runtime environments

When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project.

When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.

Figure. Project User

Karbon Platform Services Cloud Management Console

The web browser-based Karbon Platform Services cloud management console enables you to manage infrastructure and related projects, with specific management capability dependent on your role (infrastructure administrator or project user).

You can log on with your My Nutanix or local user credentials.

Logging On to The Cloud Management Console

Before you begin

  • The supported web browser is the current and two previous versions of Google Chrome.
  • If you are logging on for the first time, you might experience a guided onboarding workflow.
  • After three failed login attempts in one minute, you are locked out of your account for 30 minutes.
  • Users without My Nutanix credentials log on as a local user.

Procedure

  1. Open https://karbon.nutanix.com/ in a web browser.
  2. Choose one of the following to log on:
    • Click Login with My Nutanix and log on with your My Nutanix credentials.
    • Click Log In with Local User Account and log on with your project user or infrastructure administrator user name and password in the Username / Password fields.
    The web browser displays a role-specific dashboard.

Project User View

The default view for a project user is the Dashboard .

  • Click the menu button in the view to expand and display all available pages in this view.
Figure. Default Project User View: the page shows the project user dashboard, including an onboarding widget.

Dashboard
Customized widget panel view of your infrastructure, projects, Service Domains, and alerts. Includes the Onboarding Dashboard panel, with links to create projects and project components such as Service Domains, cloud profiles, categories, and so on. The user view indicator displays Project for project users.
Projects
  • Displays a list of all projects created by the infrastructure administrator.
  • Edit an existing project.
Alerts
Displays the Alerts page, which shows a sortable table of alerts associated with your Karbon Platform Services deployment.

Kubernetes Apps, Logging, and Services

Kubernetes Apps
  • Displays a list of available Kubernetes applications.
  • Create an application and associate it with a project.
Functions and Data Pipelines
  • Displays a list of available data pipelines.
  • Create a data pipeline and associate it with a project.
  • Create a visualization, a graphical representation of a data pipeline including data sources and Service Domain or cloud data destinations.
  • Displays a list of scripts (code used to perform one or more tasks) available for use in a project.
  • Create a function by uploading a script, and then associate it with a project.
  • Displays a list of available runtime environments.
  • Create a runtime environment and associate it with a project and a container registry.
AI Inferencing
  • Create and display machine learning models (ML Models) available for use in a project.
Services and Related Alerts
  • Kafka. Configured and deployed data streaming and messaging topics.
  • Istio. Defined secure traffic management service.
  • Prometheus. Defined app monitoring and metrics collection.
  • Nginx-Ingress. Configured and deployed ingress controllers.

Viewing Dashboards

After you log on to the cloud management console, you are presented with the main Dashboard page as a role-specific landing page. You can also show this information at any time by clicking Dashboard under the main menu.

Each Karbon Platform Services element (Service Domain, function, data pipeline, and so on) includes a dashboard page that includes information about that element. It might also include information about elements associated with that element.

The element dashboard view also enables you to manage that element. For example, click Projects and select a project. Click Kubernetes Apps and click an application in the list. The application dashboard is displayed along with an Edit button.

Quick Start Menu

The Karbon Platform Services management console includes a Quick Start menu next to your user name. Depending on your role (infrastructure administrator or project user), you can quickly create infrastructure or apps and data. Scroll down to see items you can add for use with projects.

Figure. Quick Start Menu. The quick start menu is at the top right of the console.

Getting Started - Project User

The Quick Start Menu lists the common onboarding tasks for the project user. It includes links to project resource pages. You can also go directly to any project resource from the Apps & Data menu item.

As the project user, you can update a project by creating the following items.

  • Applications
  • Data pipelines
  • Runtime environments
  • Functions

If any Getting Started item shows Pending , the infrastructure administrator has not added you to that entity (like a project or application) or you need to create an entity (like an application).

Start At The Project Page

To get started after logging on to the cloud management console, see Projects.

Projects

A project is an infrastructure, apps, and data collection created by the infrastructure administrator for use by project users.

When an infrastructure administrator creates a project and then adds themselves as a user for that project, they are assigned the roles of project user and infrastructure administrator for that project. When an infrastructure administrator adds another infrastructure administrator as a user to a project, that administrator is assigned the project user role for that project.

A project can consist of:

  • Existing administrator or project users
  • A Service Domain
  • Cloud profile
  • Container registry profile

When you add a Service Domain to a project, all resources such as data sources associated with the Service Domain are available to the users added to the project.

Projects Page

The Projects page lets an infrastructure administrator create a new project and lists projects that administrators can update and view.

For project users, the Projects page lists projects created and assigned by the infrastructure administrator. Project users can view and update any assigned project by adding applications, data pipelines, and so on. Project users cannot remove a project.

When you click a project name, the project Summary dashboard is displayed and shows resources in the project.

You can click any of the project resource menu links to edit or update existing resources, or create and add resources to a project. For example, you can edit an existing data pipeline in the project or create a new one and assign it to the project. You can also click Kafka to show details for the Kafka data service associated with the project (see Viewing Kafka Status).

Figure. Project Summary Dashboard. Project summary dashboard with component links.

Managing Project Services

The Karbon Platform Services cloud infrastructure provides services that are enabled by default. It also provides access to services that you can enable for your project.

The platform includes these ready-to-use services, which provide an advantage over self-managed services:

  • Secure by design.
  • No life cycle management. Service Domains do not require any user intervention to maintain or update service versions, patches, or other hands-on activities. For ease of use, all projects using the Service Domain use the same service version.
  • Dedicated service resources. Your project uses isolated service resources. Service resources are not shared with other projects. Applications within a project are not required to authenticate to use the service.
  • Service-specific alerts. Like other Karbon Platform Services features, the Service Domain helps monitor service health and raises service-specific alerts.
App Runtime Services
These services are enabled by default on each Service Domain.
  • Kubernetes Apps. Containers as a service. You can create and run Kubernetes apps without having to manage the underlying infrastructure.
  • Functions and Data Pipelines. Run serverless functions based on data triggers, then publish data to specific cloud or Service Domain endpoints.
  • AI Inferencing. Enable this service to use your machine learning (ML) models in your project. The ML Model feature provides a common interface for functions (that is, scripts) or applications.
Ingress Controller
Traefik or Nginx-Ingress. If your project requires Ingress controller routing, you can choose the open source Traefik router or the NGINX Ingress controller to enable on your Service Domain. See Enable an Ingress Controller.
Service Mesh
Istio. Provides secure connection, traffic management, and telemetry. See Istio Service Mesh.
Data Streaming | Messaging
  • Kafka. Available for use within project applications and data pipelines, running on a Service Domain hosted in your environment. See Kafka as a Service.
  • NATS. Available for use within project applications and data pipelines. In-memory high performance data streaming including pub/sub (publish/subscribe) and queue-based messaging.
Logging | Monitoring | Alerting
  • Prometheus. Provides Kubernetes app monitoring and logging, alerting, metrics collection, and a time series data model (suitable for graphing). See Prometheus Application Monitoring and Metrics as a Service.
  • Logging. Provides log monitoring and log bundling. See Audit Trail and Log Management.


Kafka as a Service

Kafka is available as a data service through your Service Domain.

The Kafka data service is available for use within a project's applications and data pipelines, running on a Service Domain hosted in your environment. The Kafka service offering from Karbon Platform Services provides the following advantages over a self-managed Kafka service:

  • Secure by design.
  • No need to explicitly declare Kafka topics. With Karbon Platform Services Kafka as a service, topics are automatically created.
  • No life cycle management. Service Domains do not require any user intervention to maintain or update service versions, patches, or other hands-on activities. For ease of use, all projects using the Service Domain use the same service version.
  • Dedicated service resources. Your project uses isolated service resources. Service resources are not shared with other projects. Applications within a project are not required to authenticate to use the service.
  • Service-specific alerts. Like other Karbon Platform Services features, the Service Domain monitors service health and raises service-specific alerts.


Using Kafka in an Application

Information about application requirements and sample YAML application file

Create an application for your project as described in Creating an Application and specify the application attributes and configuration in a YAML file.

Sample Kafka Application YAML Template File


apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: some.container.registry.com/myapp:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: KAFKA_ENDPOINT
          value: {{.Services.Kafka.Endpoint}}
  • kind . Deployment . Specify the resource type. Here, use Deployment .
  • metadata name . Provide a name for your deployment.
  • labels . Provide at least one label. Here, specify the application name as app: my-app .
  • spec . Define the deployment specification.
  • replicas . Here, 1 to indicate a single instance (single Service Domain instance or VM).
  • selector . Use matchLabels and specify the app name as in labels above.
  • template . Specify the application name here ( my-app ), the same as in the metadata specification above.
  • spec . Define the specifications for the application using Kafka.
  • containers .
    • name: my-app . Specify the container application name.
    • image: some.container.registry.com/myapp:1.7.9 . Define the container registry host address where the container image is stored.
    • ports: containerPort: 80 . Define the container port. Here, 80.
  • env . name: KAFKA_ENDPOINT and value: {{.Services.Kafka.Endpoint}} . Leave these values as shown.
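
If your application reads the broker address from the command line rather than only from the environment, Kubernetes can expand the injected variable with the $(VAR) syntax. The following deployment fragment is a minimal sketch of that pattern; the consumer image, binary, and flags are hypothetical and not part of this guide.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-consumer
  labels:
    app: kafka-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-consumer
  template:
    metadata:
      labels:
        app: kafka-consumer
    spec:
      containers:
      - name: kafka-consumer
        image: some.container.registry.com/consumer:1.0.0   # hypothetical image
        command: ["/app/consumer"]                           # hypothetical binary
        args: ["--brokers", "$(KAFKA_ENDPOINT)"]             # Kubernetes expands $(KAFKA_ENDPOINT) from the env entry below
        env:
        - name: KAFKA_ENDPOINT
          value: {{.Services.Kafka.Endpoint}}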

Using Kafka in a Function for a Data Pipeline

Information about data pipeline function requirements.

See Functions and Data Pipelines.

You can specify a Kafka endpoint type in a data pipeline. A data pipeline consists of:

  • Input. An existing data source or real-time data stream (output from another data pipeline).
  • Transformation. A function (code block or script) to transform data from a data source (or no function at all to pass data directly to the endpoint).
  • Output. An endpoint destination for the transformed or raw data.

Kafka Endpoint Function Requirements

For a data pipeline with a Kafka topic endpoint:

  • Language . Select the golang scripting language for the function.
  • Runtime Environment . Select the golang runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.

Viewing Kafka Status

In the cloud management console, you can view Kafka data service status when you use Kafka in an application or as a Kafka endpoint in a data pipeline as part of a project. This task assumes you are logged in to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Click Kafka .
    This dashboard shows a consolidated, high-level view of all Kafka topics, deployments, and related alerts. The default view is Topics.
  4. To show all topics for this project, click Topics .
    1. Topics. All Kafka topics used in this project.
    2. Service Domain. Service Domains in the project employing Kafka messaging.
    3. Bytes/sec produced and Bytes/sec consumed. Bandwidth use.
  5. To view Service Domain status where Kafka is used, click Deployments .
    1. List of Service Domains and provisioned status.
    2. Brokers. A No Brokers Available message might mean the Service Domain is not connected.
    3. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with Kafka for this project, click Alerts .

Enable an Ingress Controller

The Service Domain supports the Traefik open source router as the default Kubernetes Ingress controller. You can also choose the NGINX ingress controller instead.

An infrastructure admin can enable an Ingress controller for your project through Manage Services as described in Managing Project Services. You can only enable one Ingress controller per Service Domain.

When you include Ingress controller annotations as part of your application YAML file, Karbon Platform Services uses Traefik as the default on-demand controller.

If your deployment requires it, you can alternately use NGINX (ingress-nginx) as a Kubernetes Ingress controller instead of Traefik.

In your application YAML, specify two snippets:

  • Service snippet. Use annotations to define the HTTP or HTTPS host protocol, path, service domain, and secret for each Service Domain to use as an ingress controller. You can specify one or more Service Domains.
  • Secret snippet. Use this snippet to specify the certificates used to secure app traffic.


Sample Ingress Controller Service Domain Configuration

To securely route application traffic with a Service Domain ingress controller, create YAML snippets to define and specify the ingress controller for each Service Domain.

You can only enable and use one Ingress controller per Service Domain.

Create an application for your project as described in Creating an Application, and specify the application attributes and configuration in a YAML file. You can include these Service and Secret snippets (Service Domain ingress controller annotations and certificate information) in the app deployment YAML file.

Ingress Controller Service Domain Host Annotations Specification

apiVersion: v1
kind: Service
metadata:
  name: whoami
  annotations:
    sherlock.nutanix.com/http-ingress-path: /notls
    sherlock.nutanix.com/https-ingress-path: /tls
    sherlock.nutanix.com/https-ingress-host: DNS_name
    sherlock.nutanix.com/http-ingress-host: DNS_name
    sherlock.nutanix.com/https-ingress-secret: whoami
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
  selector:
    app: whoami
Table 1. Ingress Controller Annotations Specification
  • kind . Service . Specify the resource kind. Here, use Service to indicate that this snippet defines the ingress controller details.
  • apiVersion . v1 . Here, the Kubernetes API version.
  • metadata name . Provide an app name to which this controller applies.
  • annotations . These annotations define the ingress controller encryption type and paths for Karbon Platform Services.
    • sherlock.nutanix.com/http-ingress-path: /notls . /notls specifies no Transport Layer Security encryption.
    • sherlock.nutanix.com/https-ingress-path: /tls . /tls specifies Transport Layer Security encryption.
    • sherlock.nutanix.com/http-ingress-host: DNS_name . Ingress service host path, where the service is bound to port 80. DNS_name is a DNS name you can give to your application. For example, my.contoso.net . Ensure that the DNS name resolves to the Service Domain IP address.
    • sherlock.nutanix.com/https-ingress-host: DNS_name . Ingress service host path, where the service is bound to port 443. DNS_name is a DNS name you can give to your application. For example, my.contoso.net . Ensure that the DNS name resolves to the Service Domain IP address.
    • sherlock.nutanix.com/https-ingress-secret: whoami . Links the authentication Secret information defined in the Secret snippet to this controller.
  • spec . Define the transfer protocol, port type, and port for the application.
    • protocol: TCP . Transfer protocol of TCP.
    • ports . Define the port and type: name: web is the port name and port: 80 is the TCP port.
  • selector . app: whoami . A selector to specify the application.

Securing The Application Traffic

Use a Secret snippet to specify the certificates used to secure app traffic.

apiVersion: v1
kind: Secret
metadata:
  name: whoami
type: kubernetes.io/tls
data:
  ca.crt: cert_auth_cert
  tls.crt: tls_cert
  tls.key: tls_key
Table 2. TLS Certificate Specification
  • apiVersion . v1 . Here, the Kubernetes API version.
  • kind . Secret . Specify the resource type. Here, use Secret to indicate that this snippet defines the authentication details.
  • metadata name . Provide an app name to which this certificate applies.
  • type . Define the authentication type used to secure the app. Here, kubernetes.io/tls .
  • data . Add the keys for each certificate type: certificate authority certificate ( ca.crt ), TLS certificate ( tls.crt ), and TLS key ( tls.key ).
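
The cert_auth_cert, tls_cert, and tls_key placeholders above stand in for your certificate material. As a minimal sketch, and assuming standard Kubernetes Secret handling, each value under data is the base64-encoded content of the corresponding file; the strings below are truncated placeholders, not real certificates.

apiVersion: v1
kind: Secret
metadata:
  name: whoami
type: kubernetes.io/tls
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...    # base64 of the certificate authority certificate (placeholder)
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...   # base64 of the TLS certificate (placeholder)
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0t...   # base64 of the TLS private key (placeholder)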

Viewing Ingress Controller Details

In the cloud management console, you can view Ingress controller status for any controller used as part of a project. This task assumes you are logged in to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Depending on the deployed Ingress controller, click Nginx-Ingress or Traefik .
    This dashboard shows a consolidated, high-level view of all rules, deployments, and related alerts. The default view is Rules.
  4. To show all rules for this project, click Rules .
    1. Application. App where traffic is being routed.
    2. Rules. Rules you have configured for this app. Here, Rules shows the host and paths
    3. Destination. Application and port number.
    4. Service Domain. Service Domains in the project employing routing.
    5. TLS. Transport Layer Security (TLS) protocol status. On indicates encrypted communication is enabled. Off indicates it is not used.
  5. To view Service Domain status where the controller is used, click Deployments .
    1. List of Service Domains and provisioned status.
    2. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with the controller for this project, click Alerts .

Istio Service Mesh

Istio provides secure connection, traffic management, and telemetry.

Add the Istio Virtual Service and DestinationRules to an Application - Example

In the application YAML snippet or file, define the VirtualService and DestinationRules objects.

These objects specify traffic routing rules for the recommendation-service app host. If the traffic rules match, traffic flows to the named destination (or subset/version of it) as defined here.

In this example, traffic is routed to the recommendation-service app host if it is sent from the Firefox browser. The specific policy version ( subset ) for each host helps you identify and manage routed data.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - match:
    - headers:
        user-agent:
          regex: .*Firefox.*
    route:
    - destination:
        host: recommendation-service
        subset: v2
  - route:
    - destination:
        host: recommendation-service
        subset: v1

This DestinationRule YAML snippet defines a load-balancing traffic policy for the policy versions ( subsets ), where any healthy host can service the request.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recomm-svc
spec:
  host: recommendation-service
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

Manage Traffic by Weight - Example

In this YAML snippet, you can split traffic between the subsets by specifying a weight of 30 for one and 70 for the other. You can also weight them evenly by giving each a weight value of 50.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: recomm-svc
spec:
  hosts:
  - recommendation-service
  http:
  - route:
    - destination:
       host: recommendation-service
       subset: v2
      weight: 30   
    - destination:
       host: recommendation-service
       subset: v1
      weight: 70


Viewing Istio Details

In the cloud management console, you can view Istio service mesh status associated with applications in a project. This task assumes you are logged in to the cloud management console.

About this task

The Istio tabs show service resource and routing configuration information derived from project-related Kubernetes Apps YAML files and service status.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list, then click Istio in the navigation sidebar to show the default Application Metrics page.
    Application Metrics shows initial details about the applications and related traffic metrics.
    1. Name and Service Domain. Application name and Service Domain where the application is deployed and the Istio service is enabled.
    2. Workloads. One or more workloads associated with the application. For example, Kubernetes Deployments, StatefulSets, DaemonSets, and so on.
    3. Inbound Request Volume / Outbound Request Volume. Inbound/Outbound HTTP requests per second for the last 5 minutes
  3. To see details about any traffic routing configurations associated with the application, click Virtual Services .
    An application can specify one or more virtual services, which you can deploy across one or more Service Domains.
    1. Application and Virtual Services. Application and service name.
    2. Service Domains. Number and name of the Service Domains where the service is deployed.
    3. Matches. Lists matching traffic routes.
    4. Destinations. Number of service destination host connection or request routes. Expand the number to show any destinations served by the virtual service (v1, v2, and so on) where traffic is routed. If specified in the Virtual Service YAML file, Weight indicates the proportion of traffic routed to each host. For example, Weight: 50 indicates that 50 percent of the traffic is routed to that host.
  4. To see rules associated with Destinations, click Destination Rules .
    An application can specify one or more destination rules, which you can deploy across one or more Service Domains. A destination rule is a policy that is applied after traffic is routed (for example, a load balancing configuration).
    1. Application. Application where the rule applies.
    2. Destination Rules. Rules by name.
    3. Service Domains. Number and name of the Service Domains where the rule is deployed.
    4. Subsets. Name and number of the specific policies (subsets). Expand the number to show any pod labels (v1, v2, and so on), service versions (v1, v2, and so on), and traffic weighting, where rules apply and traffic is routed.
  5. To view Service Domain status where Istio is used, click Deployments .
    1. List of Service Domains where the service is deployed and status.
      • PROVISIONED. The service has been successfully installed and enabled.
      • PROVISIONING. The service initialization and provisioning state is in progress.
      • UNPROVISIONED. The service has been successfully disabled (uninstalled).
      • FAILED. The service provisioning has failed.
    2. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  6. To see all alerts associated with the controller for this project, click Alerts .

Prometheus Application Monitoring and Metrics as a Service

Note: When you disable Prometheus and then later enable it, the service creates a new PersistentVolumeClaim (PVC) and clears any metrics data previously stored in the PVC.

The Prometheus service included with Karbon Platform Services enables you to monitor endpoints you define in your project's Kubernetes apps. Karbon Platform Services allows one instance of Prometheus per project.

Prometheus collects metrics from your app endpoints. The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, endpoints, and alerts. See Viewing Prometheus Service Status.

You can then decide how to view the collected metrics, through graphs or other Prometheus-supported means. See Create Prometheus Graphs with Grafana - Example.

Default Service Settings

Table 1. Prometheus Default Settings
  • Frequency interval to collect and store metrics (also known as scrape and store). Every 60 seconds (see note 1).
  • Collection endpoint. /metrics (see note 1).
  • Default collection app. collect-metrics
  • Data storage retention time. 10 days
  1. You can create a customized ServiceMonitor YAML snippet to change this default setting. See Enable Prometheus App Monitoring and Metric Collection - Examples.
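
The settings marked with note 1 can be overridden with a customized ServiceMonitor snippet. The following is a minimal sketch only; the monitor name, app label, port name, and metrics path are assumptions that must match your own application Service. See Enable Prometheus App Monitoring and Metric Collection - Examples for the supported procedure.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor            # hypothetical name
spec:
  selector:
    matchLabels:
      app: my-app                 # assumed label; must match the labels on your app Service
  endpoints:
  - port: web                     # named port exposed by the app Service (assumed)
    path: /custom-metrics         # overrides the default /metrics collection endpoint
    interval: 30s                 # overrides the default 60-second scrape interval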


Viewing Prometheus Service Status

The Prometheus Deployments dashboard displays Service Domains where the service is deployed, service status, Prometheus endpoints, and alerts.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard is displayed along with Edit and Manage Services buttons.
  3. Click Prometheus to view the default Deployments page.
    1. Service Domain and Status. Service Domains where the Prometheus service is deployed and status.
      • PROVISIONED. The service has been successfully installed and enabled.
      • PROVISIONING. The service initialization and provisioning state is in progress.
      • UNPROVISIONED. The service has been successfully disabled (uninstalled).
      • FAILED. The service provisioning has failed.
    2. Prometheus Endpoints. Endpoints that the service is scraping, by ID.
    3. Alert Manager. Alert Manager shows alerts associated with the Prometheus endpoints, by ID.
    4. Alerts. Number of Critical (red) and Warning (yellow) alerts.
  4. To see all alerts associated with Prometheus for this project, click Alerts .

Create Prometheus Graphs with Grafana - Example

This example shows how you can set up a Prometheus metrics dashboard with Grafana.

This topic provides examples to help you expose Prometheus endpoints to Grafana, an open-source analytics and monitoring visualization application. You can then view scraped Prometheus metrics graphically.

Define Prometheus as the Data Source for Grafana

The first ConfigMap YAML snippet example uses the environment variable {{.Services.Prometheus.Endpoint}} to define the service endpoint. If this YAML snippet is part of an application template created by an infra admin, a project user can then specify these per-Service Domain variables in their application.

The second snippet provides configuration information for the Grafana server web page. The host name in this example is woodkraft2.ntnxdomain.com


apiVersion: v1
kind: ConfigMap
metadata:
 name: grafana-datasources
data:
 prometheus.yaml: |-
   {
       "apiVersion": 1,
       "datasources": [
           {
              "access":"proxy",
               "editable": true,
               "name": "prometheus",
               "orgId": 1,
               "type": "prometheus",
               "url": "{{.Services.Prometheus.Endpoint}}",
               "version": 1
           }
       ]
   }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini
data:
  grafana.ini: |
    [server]
    domain = woodkraft2.ntnxdomain.com
    root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana
    serve_from_sub_path = true
---

Specify Deployment Information on the Service Domain

This YAML snippet provides a standard deployment specification for Grafana.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest
          ports:
            - name: grafana
              containerPort: 3000
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "1Gi"
              cpu: "500m"
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-storage
            - mountPath: /etc/grafana/provisioning/datasources
              name: grafana-datasources
              readOnly: false
            - name: grafana-ini
              mountPath: "/etc/grafana/grafana.ini"
              subPath: grafana.ini
      volumes:
        - name: grafana-storage
          emptyDir: {}
        - name: grafana-datasources
          configMap:
            defaultMode: 420
            name: grafana-datasources
        - name: grafana-ini
          configMap:
            defaultMode: 420
            name: grafana-ini
---

Define the Grafana Service and Use an Ingress Controller

Define the Grafana Service object to use port 3000 and an Ingress controller to manage access to the service (through woodkraft2.ntnxdomain.com).


apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grafana
  labels:
    app: grafana
spec:
  rules:
    - host: woodkraft2.ntnxdomain.com
      http:
        paths:
        - path: /grafana
          backend:
            serviceName: grafana
            servicePort: 3000


Kubernetes Apps

You can create intelligent applications to run on the Service Domain infrastructure or the cloud where you have pushed collected data. You can implement application YAML files to use as a template, where you can customize the template by passing existing Categories associated with a Service Domain to it.

You need to create a project with at least one user to create an app.

You can undeploy and deploy any applications that are running on Service Domains or in the cloud. See Deploying and Undeploying a Kubernetes Application.

Privileged Kubernetes Apps

Important: The privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.

For Kubernetes apps running as privileged, you might have to specify the Kubernetes namespace where the application is deployed. You can do this by using the {{ .Namespace }} variable that you define in the app YAML template file.

In this example, the ClusterRoleBinding resource specifies the {{ .Namespace }} variable as the namespace where the subject ServiceAccount is deployed. Because all app resources are deployed in the project namespace, specify the ServiceAccount name as well (here, name: my-sa ).

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role
subjects:
  - kind: ServiceAccount
    name: my-sa
    namespace: {{ .Namespace }}
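
The ClusterRoleBinding above references a ClusterRole named my-cluster-role, which must also exist in the app YAML. The rules below are only an illustration (read-only access to pods); define whatever permissions your privileged app actually needs.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-cluster-role
rules:
  - apiGroups: [""]                          # core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]          # example read-only access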

Creating an Application

Create a Kubernetes application that you can associate with a project.

Before you begin

To complete this task, log on to the cloud management console.

About this task

  • If your app requires a service, make sure you enable it in the associated project.
  • Application YAML files can also be a template where you can customize the template by passing existing Categories associated with a Service Domain to it.
  • See also Configure Service Domain Environment Variables to set and use environment variables in your app.
  • Nutanix recommends that you do not reuse application names after creating and then deleting (removing) an application. If you subsequently create an application with the same name as a deleted application, your application might re-use stale data from the deleted application.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps > Create Kubernetes App .
  3. Name your application. Up to 200 alphanumeric characters are allowed.
  4. Description . Provide a relevant description for your application.
  5. Select one or more Service Domains, individually or by category.
    How your Service Domain was added in your project determines what you can select in this step. See Creating a Project.
    1. For Select Individually , click Add Service Domains to select one or more Service Domains.
    2. For Select By Category , choose and apply one or more categories associated with a Service Domain.
      Click the plus sign ( + ) to add more categories.
  6. Click Next .
  7. To use an application YAML file or template, select YAML based configuration , then click Choose File to navigate to a YAML file or template.
    After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
  8. To define an application with a Helm chart, select Upload a Helm Chart . See Helm Chart Support and Requirements.
    1. Helm Chart File . Browse to a Helm Chart package file.
    2. Values Override File (Optional) . You also can optionally browse to a values YAML file to override the values you specified in the Helm Chart package.
    3. View Generated YAML File . After uploading these files, you can view the generated YAML configuration by clicking Show YAML .
  9. Click Create .
    If your app requires a service, make sure you enable it in the associated project.
    The Kubernetes Apps page lists the application you just created as well as any existing applications. Apps start automatically after creation.

Editing an Application

Update an existing Kubernetes application.

Before you begin

To complete this task, log on to the cloud management console.

About this task

  • Application YAML files can also be a template where you can customize the template by passing existing Categories associated with a Service Domain to it.
  • To set and use environment variables in your app, see Configure Service Domain Environment Variables.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then click an application in the list.
    The application dashboard is displayed along with an Edit button.
  3. Click Edit .
  4. Name . Update the application name. Up to 200 alphanumeric characters are allowed.
  5. Description . Provide a relevant description for your application.
  6. You cannot change the application's Project .
  7. Update the associated Service Domains.
    How your Service Domain was added in your project determines what you can select in this step. See Creating a Project.
    1. If Select Individually is selected:
      • Click X to remove a Service Domain in the list.
      • Click Edit to select or deselect one or more Service Domains.
    2. If Select By Category is selected, add or remove one or more categories associated with a Service Domain.
      • Select a category and value.
      • Click X to remove a category.
      • Click the plus sign ( + ) to add more categories and values.
  8. Click Next .
  9. To use an application YAML file or template, select YAML based configuration , then click Choose File to navigate to a YAML file or template.
    1. You can edit the file directly on the Yaml Configuration page.
    2. Click Choose File to navigate to a YAML file or template.
    After uploading the file, you can also edit the file contents. You can choose the contrast levels Normal , Dark , or High Contrast Dark .
  10. To define an application with a Helm chart, select Upload a Helm Chart . See Helm Chart Support and Requirements.
    1. Helm Chart File . Browse to a Helm Chart package file.
    2. Values Override File (Optional) . You also can optionally browse to a YAML file to override the values you specified in the Helm Chart package. Use this option to update your application without having to re-upload a new chart.
    3. View Generated YAML File . After uploading these files, you can view the generated YAML configuration by clicking Show YAML .
  11. Click Update .
    The Kubernetes Apps page lists the application you just updated as well as any existing applications.

Removing an Application

Delete an existing Kubernetes application.

About this task

  • To complete this task, log on to the cloud management console.
  • Nutanix recommends that you do not reuse application names after creating and then deleting (removing) an application. If you subsequently create an application with the same name as a deleted application, your application might re-use stale data from the deleted application.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then select an application in the list.
  3. Click Actions > Remove , then click Delete to confirm.
    The Kubernetes Apps page lists any remaining applications.

Deploying and Undeploying a Kubernetes Application

You can undeploy and deploy any applications that are running on Service Domains or in the cloud. You can choose the Service Domains where you want the app to deploy or undeploy. Select the table or tile view on this page by clicking one of the view icons.

Before you begin

Undeploying a Kubernetes app deletes all config objects directly created for the app, including PersistentVolumeClaim data. Your app can create a PersistentVolumeClaim indirectly through StatefulSets. The following points describe scenarios when data is deleted and when data is preserved after you undeploy an app.

  • If your application specifies an explicit PersistentVolumeClaim object, when the app is undeployed, data stored in PersistentVolumeClaim is deleted. This data is not available if you then deploy the app.
  • If your application specifies a VolumeClaimTemplates object, when the app is undeployed, data stored in PersistentVolumeClaim persists. This data is available for reuse if you later deploy the app. If you plan to redeploy apps, Nutanix recommends using VolumeClaimTemplates to implement StatefulSets with stable storage.
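
As a minimal sketch of the recommended approach (the names, image, and storage size are assumptions), a StatefulSet that requests storage through volumeClaimTemplates keeps its PersistentVolumeClaim data across an undeploy and redeploy:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-stateful-app
spec:
  serviceName: my-stateful-app
  replicas: 1
  selector:
    matchLabels:
      app: my-stateful-app
  template:
    metadata:
      labels:
        app: my-stateful-app
    spec:
      containers:
      - name: my-stateful-app
        image: some.container.registry.com/my-stateful-app:1.0.0   # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /var/lib/my-app          # hypothetical mount path
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi                        # assumed size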

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click Kubernetes Apps , then select an application in the list.
  3. To undeploy a running application, select an application, then click Actions > Undeploy
    1. To undeploy every instance of the app running on all Service Domains, select Undeploy All , then click Undeploy .
    2. To undeploy the app running on specific Service Domains, select Undeploy Selected , select one or more Service Domains, then click Undeploy .
    3. Click Undeploy App to confirm.
  4. To deploy an undeployed application, select an application, then click Actions > Deploy , then click Deploy to confirm.
    1. To deploy every instance of the app running on all Service Domains, select Deploy All , then click Deploy .
    2. To deploy the app running on specific Service Domains, select Deploy Selected , select one or more Service Domains, then click Deploy .

Helm Chart Support and Requirements

Helm Chart Format
When you create a Kubernetes app in the cloud management console, you can upload a Helm chart package in a gzipped TAR file (.TGZ format) that describes your app. For example, it can include:
  • Helm chart definition YAML file
  • App YAML template files that define your application (deployment, service, and so on). See also Privileged Kubernetes Apps
  • Values YAML file where you declare variables to pass to your app templates. For example, you can specify Ingress controller annotations, cloud repository details, and other required settings
Supported Helm Version
Karbon Platform Services supports Helm version 3.
Kubernetes App Support
Karbon Platform Services supports the same Kubernetes resources in Helm charts as in application YAML files. For example, daemonsets, deployment, secrets, services, statefulsets, and so on are supported when defined in a Helm chart.
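
Both the values YAML file packaged in the chart and the optional Values Override File described in Creating an Application use standard Helm values syntax. The keys below are hypothetical and must match what your chart templates reference; this is a sketch of the format, not a required schema.

# values-override.yaml (hypothetical keys)
replicaCount: 2
image:
  repository: some.container.registry.com/myapp   # assumed chart value
  tag: "1.8.0"
service:
  port: 80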

Data Pipelines

The Data Pipelines page enables you to create and view data pipelines, and also see any alerts associated with existing pipelines.

A data pipeline is a path for data that includes:

  • Input . An existing data source or real-time data stream.
  • Transformation . Code block such as a script defined in a Function to process or transform input data.
  • Output . A destination for your data. Publish data to the Service Domain, cloud, or cloud data service (such as AWS Simple Queue Service).

It also enables you to process and transform captured data for further consumption or processing.

To create a data pipeline, you must have already created or defined at least one of the following:

  • Project
  • Category
  • Data source
  • Function
  • Cloud profile. Required for cloud data destinations or Service Domain endpoints
Note the following:

  • When you create, clone, or edit a function, you can define one or more parameters. When you create a data pipeline, you define the values for the parameters when you specify the function in the pipeline.
  • Data pipelines can share functions, but you can specify unique parameter values for the function in each data pipeline.
  • You can also stop and start a data pipeline. See Stopping and Starting a Data Pipeline.

Data Pipeline Visualization

After you create one or more data pipelines, the Data Pipelines > Visualization page shows data pipelines and the relationship among data pipeline components.

You can view data pipelines associated with a Service Domain by clicking the filter icon under each title (Data Sources, Data Pipelines to Service Domain, Data Pipelines on Cloud) and selecting one or more Service Domains in the drop-down list.

Figure. Data Pipeline Visualization. Shows the existing data pipelines and the relationship among data pipeline components.

Creating a Data Pipeline

Create a data pipeline, with a data source or real-time data source as input and infrastructure or external cloud as the output.

Before you begin

You must have already created at least one of each: project, data source, function, and category. Also, a cloud profile is required for cloud data destinations or Service Domain endpoints. See also Naming Guidelines.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Click Create .
  5. Data Pipeline Name . Name your data pipeline. Up to 63 lowercase alphanumeric characters and the dash character (-) are allowed.

Input - Add a Data Source

Procedure

Click Add Data Source , then select Data Source .
  1. Select a data source Category and one related Value .
  2. [Optional] Click Add new to add another Category and Value . Continue adding as many as you have defined.
  3. Click the trashcan to delete a data source category and value.

Input - Add a Real-Time Data Stream

Procedure

Click Add Data Source , then select Real-Time Data Stream .
  1. Select an existing pipeline as a Real-Time Data Stream .

Transformation - Add a Function

Procedure

  1. Click Add Function and select an existing Function.
  2. Define a data Sampling Interval by selecting Enable .
    1. Enter an interval value.
    2. Select an interval: Millisecond , Second , Minute , Hour , or Day .
  3. Enter a value for any parameters defined in the function.
    If no parameters are defined, this field's name is shown as a pair of parentheses: ( ).
  4. [Optional] Click the plus sign ( + ) to add another function.
    This step is useful if you want to chain functions together, feeding transformed data from one function to another, and so on.
  5. You can also create a function here by clicking Upload , which displays the Create Function page.
    1. Name your function. Up to 200 alphanumeric characters are allowed.
    2. Description . Provide a relevant description for the function.
    3. Language . Select the scripting language for the function: golang, python, node.
    4. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
    5. Click Next .
    6. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    7. Click Add Parameter to define and use any parameters for the data transformation in the function.
    8. Click Create .

Output - Add a Destination

Procedure

  1. Click Add Destination to specify where the transformed data is to be output: Publish to Service Domain or Publish to External Cloud .
  2. If you select Destination > Service Domain :
    1. Endpoint Type . Select Kafka , MQTT , Realtime Data Stream , or Data Interface .
    2. Data Interface Name . If shown, name your data interface.
    3. Endpoint Name . If shown, name your endpoint.
  3. If you select Destination > External Cloud and Cloud Type as AWS , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select an AWS region.
  4. If you select Destination > External Cloud and Cloud Type as Azure , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing stream type, such as Blob Storage .
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
  5. If you select Destination > External Cloud and Cloud Type as GCP , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select a GCP region.
  6. Click Create .

Editing a Data Pipeline

Update a data pipeline, including data source, function, and output destination. See also Naming Guidelines.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Select a data pipeline in the list, then click Actions > Edit
    You cannot update the data pipeline name.

Input - Edit a Data Source

Procedure

You can do any of the following:
  1. Select a different Data Source to change the data pipeline Input source (or keep the existing one).
  2. Select a different value for an existing Category.
  3. Click the trashcan to delete a data source category and value.
  4. Click Add new to add another Category and Value . Continue adding as many as you have defined.
  5. Select a different category.

Input - Edit a Real-Time Data Stream

Procedure

Select a different Realtime Data Stream to change the data pipeline Input source.

Transformation - Edit a Function

About this task

You can do any of the following tasks.

Procedure

  1. Select a different Function .
  2. Add or update the Sampling Interval for any new or existing function.
    1. If not selected, create a Sampling Interval by selecting Enable
    2. Enter an interval value.
    3. Select an interval: Millisecond , Second , Minute , Hour , or Day .
  3. Enter a value for any parameters defined in the function.
    If no parameters are defined, this field's name is shown as a pair of parentheses: ( ).
  4. [Optional] Click the plus sign ( + ) to add another function.
    This step is useful if you want to chain functions together, feeding transformed data from one function to another, and so on.
  5. You can also create a function here by clicking Upload , which displays the Add Function page.
    1. Name your function. Up to 200 alphanumeric characters are allowed.
    2. Description . Provide a relevant description for the function.
    3. Language . Select the scripting language for the function: golang, python, node.
    4. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
    5. Click Next .
    6. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    7. Click Add Parameter to define and use any parameters for the data transformation in the function.
    8. Click Create .
  6. Click Create .

Output - Edit a Destination

About this task

You can do any of the following tasks.

Procedure

  1. Select an Infrastructure or External Cloud Destination to specify where the transformed data is to be output.
  2. If you select Destination > Infrastructure :
    1. Endpoint Type . Select MQTT , Realtime Data Stream , Kafka , or Data Interface .
    2. Data Interface Name . If shown, name your data interface.
    3. Endpoint Name . If shown, name your endpoint.
  3. If you select Destination > External Cloud and Cloud Type as AWS , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select an AWS region.
  4. If you select Destination > External Cloud and Cloud Type as GCP , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing endpoint type.
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
    4. Region . Select a GCP region.
  5. If you select Destination > External Cloud and Cloud Type as Azure , complete the following fields:
    1. Cloud Profile . Select your existing profile.
      See the topic Creating Your Cloud Profile in the Karbon Platform Services Administration Guide .
    2. Endpoint Type . Select an existing stream type, such as Blob Storage .
    3. Endpoint Name . Name your endpoint. Up to 200 alphanumeric characters and the dash character (-) are allowed.
  6. Click Update .

Removing a Data Pipeline

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Data Pipelines .
  4. Select a data pipeline, click Actions > Remove , then click Delete again to confirm.
    The Data Pipelines page lists any remaining data pipelines.

Stopping and Starting a Data Pipeline

About this task

  • You can stop and start a data pipeline.
  • You can select the table or tile view on this page by clicking one of the view icons.
    Figure. View Icons. The view icons on the page let you switch between table and tile view.


Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list, then click Functions and Data Pipelines .
  3. To stop an active data pipeline, select a data pipeline, then click Actions > Stop .
    You can also click a data pipeline, then click Stop or Start , depending on the data pipeline state.
    Stop stops any data from being transformed or processed, and terminates any data transfer to your data destination.
  4. To start the data pipeline, select Start (after stopping a data pipeline) from the Actions drop-down menu.

Functions

A function is code used to perform one or more tasks. Script languages include Python, Golang, and Node.js. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like Tensorflow.

An infrastructure administrator or project user can create a function, and later can edit or clone it. You cannot edit a function that is used by an existing data pipeline. In this case, you can clone it to make an editable copy.

  • You can use ML models in your function code. Use the ML Model API to call the model and version.
  • When you create, clone, or edit a function, you can define one or more parameters.
  • When you create a data pipeline, you define the values for the parameters when you specify the function in the pipeline.
  • Data pipelines can share functions, but you can specify unique parameter values for the function in each data pipeline.

Creating a Function

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions > Create .
  4. Name . Name the function. Up to 200 alphanumeric characters are allowed.
  5. Description . Provide a relevant description for your function.
  6. Language . Select the scripting language for the function: golang, python, node.
  7. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  8. Click Next .
  9. Add function code.
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  10. Click Create .

Editing a Function

Edit an existing function. To complete this task, log on to the cloud management console.

About this task

Other than the name and description, you cannot edit a function that is in use by an existing data pipeline. In this case, you can clone a function to duplicate it. See Cloning a Function.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions .
  4. Select a function in the list, then click Edit .
  5. Name . Update the function name. Up to 200 alphanumeric characters are allowed.
  6. Description . Provide a relevant description for your function.
  7. You cannot change the function's Project .
  8. Language . Select the scripting language for the function: golang, python, node.
  9. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  10. Click Next .
  11. If you want to choose a different file or edit the existing function code:
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  12. Click Update .

Cloning a Function

Clone an existing function. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines > Functions .
  4. Select a function in the list, then click Clone .
  5. Name . Update the function name. Up to 200 alphanumeric characters are allowed.
  6. Description . Provide a relevant description for your function.
  7. You cannot change the function's Project .
  8. Language . Select the scripting language for the function: golang, python, node.
  9. Runtime Environment . Select the runtime environment for the function. A default runtime environment is already selected in most cases depending on the Language you have selected.
  10. Click Next .
  11. If you want to choose a different file or edit the existing function code:
    1. Click Choose File to navigate to a file containing your function code.
      After uploading the file, you can edit the file contents. You can also choose the contrast levels Normal , Dark , or High Contrast Dark .
    2. Click Add Parameter to define and use one or more parameters for data transformation in the function. Click the check icon to add each parameter.
  12. Click Next to update a data pipeline with this function, if desired.
  13. Click Create , then click Confirm in response to the data pipeline warning.
    An updated function can cause data pipelines to break (that is, stop collecting data correctly).

AI Inferencing and ML Model Management

You can create machine learning (ML) models to enable AI inferencing for your projects. The ML Model feature provides a common interface for functions (that is, scripts) or applications to use the ML model Tensorflow runtime environment on the Service Domain.

The Karbon Platform Services Release Notes list currently supported ML model types.

An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.

You can add multiple models and model versions to a single ML Model instance that you create. In this scenario, multiple client projects can access any model configured in the single ML model instance.

Before You Begin
  • To allow access by the AI Inferencing API, ensure that the infrastructure admin selects Use GPU for AI Inferencing in the associated project Service Domain Advanced Settings.
  • Ensure that the infrastructure admin has enabled the AI Inferencing service for your project.
ML Models Guidelines and Limitations
  • The maximum ML model zip file size you can upload is 1 GB.
  • Each ML model zip file must contain a binary file and metadata file.
  • You can use ML models in your function code. Use the ML Model API to call the model and version.
ML Model Validation
  • The Karbon Platform Services platform validates any ML model that you upload. It checks the following:
    • The uploaded model is a ZIP file with the correct folder and file structure.
    • The ML model binaries in the ZIP file (for example, TensorFlow binaries) are valid and in the required format.

Adding an ML Model

About this task

To complete this task, log on to the cloud management console.

Before you begin

  • An infrastructure admin can allow access to the AI Inferencing API by selecting Use GPU for AI Inferencing in the associated project Service Domain Advanced Settings.
  • An infrastructure admin can enable the AI Inferencing service for your project through Manage Services as described in Managing Project Services.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing > Add Model .
  3. Name . Name the ML model. Up to 200 alphanumeric characters are allowed.
  4. Description . Provide a relevant description.
  5. Select a specific project to make your ML model available to that project.
  6. Select a Framework Type from the drop-down menu. For example, Tensorflow 2.1.0 .
  7. Click Add Version in the List of Model Versions panel.
    1. Type an ML Model Version number, such as 1, 2, 3 and so on.
    2. Click Choose File and navigate to a model ZIP file stored on your local machine.
      • The maximum zip file size you can upload is 1 GB.
      • The console validates the file as it uploads it.
      • Uploading a file is a background task and might take a few moments depending on file size. Do not close your browser.
    3. Type any relevant Version Information , then click Upload .
  8. [Optional] Click Add new to add another model version to the model.
    A single ML model can consist of multiple model versions.
  9. Click Done .
    The ML Model page displays the added ML model. It might also show the ML model as continuing to upload.

What to do next

Check ML model status in the ML Model page.

Editing an ML Model

About this task

  • To complete this task, log on to the cloud management console.
  • You cannot change the name or framework type associated with an ML model.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing , select a model in the list, and click Edit .
  3. Description . Update an existing description.
  4. Edit or Remove an existing model version from the List of Model Versions.
  5. [Optional] Click Add new to add another model version.
    A single ML model can consist of multiple model versions.
    1. Type an ML Model Version number, such as 1, 2, 3 and so on.
    2. Click Choose File and navigate to a model ZIP file stored on your local machine.
      • The maximum zip file size you can upload is 1 GB.
      • The console validates the file as it uploads it.
      • Uploading a file is a background task and might take a few moments depending on file size. Do not close your browser.
    3. Type any relevant Version Information , then click Upload .
  6. Click Done .
    The ML Model page displays the updated ML model. It might also show the ML model as continuing to upload.

What to do next

Check ML model status in the ML Model page.

Removing an ML Model

How to delete an ML model.

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing and select a model in the list.
  3. Click Remove , then click Delete again to confirm.
    The ML Models page lists any remaining ML models.

What to do next

Check ML model status in the ML Model page.

Viewing ML Model Status

The ML Models page in the Karbon Platform Services management console shows version and activity status for all models.

Procedure

  1. From the home menu, click Projects , then click a project.
  2. Click AI Inferencing to show model information like Version, File Size, and Framework Type.
  3. To see where a model is deployed, click the model name.
    This page shows a list of associated Service Domains.

Runtime Environments

A runtime environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.

Karbon Platform Services includes standard runtime environments including but not limited to the following. These runtimes are read-only and cannot be edited, updated, or deleted by users. They are available to all projects, functions, and associated container registries.

  • Golang
  • NodeJS
  • Python 2
  • Python 3
  • Tensorflow Python
You can add your own custom runtime environment for use by all or specific projects, functions, and container registries.
Note: Custom Golang runtime environments are not supported. Use the provided standard Golang runtime environment in this case.

Creating a Runtime Environment

How to create a user-added runtime environment for use with your project.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Click Create .
  5. Name . Name the runtime environment. Up to 75 alphanumeric characters are allowed.
  6. Description . Provide a relevant description.
  7. Container Registry Profile . Click Add Profile to create a new container registry profile or use an existing profile. Do one of the following sets of steps.
    1. Select Add New .
    2. Name your profile.
    3. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    4. Container Registry Host Address . Provide a public or private registry host address. The host address can be in the format host.domain:port_number/repository_name
      For example, https://index.docker.io/v1/ or registry-1.docker.io/distribution/registry:2.1
    5. Username . Enter the user name credential for the profile.
    6. Password . Enter the password credential for the profile. Click Show to display the password as you type.
    7. Email Address . Add an email address in this field.
    1. Select Use Existing cloud profile and select an existing profile from Cloud Profile .
    2. Name your profile.
    3. Description . Describe the profile. Up to 75 alphanumeric characters are allowed.
    4. Server . Provide a server URL to the container registry in the format used by your cloud provider.
      For example, an Amazon AWS Elastic Container Registry (ECR) URL might be: https://aws_account_id.dkr.ecr.region.amazonaws.com
  8. Click Done .
  9. Container Image Path . Provide a container image URL. For example, https://aws_account_id.dkr.ecr.region.amazonaws.com/image:imagetag
  10. Languages . Select the scripting language for the runtime: golang, python, node.
  11. [Optional] Click Add Dockerfile to choose and upload a Dockerfile for the container image, then click Done .
    A Dockerfile typically includes instructions used to build images automatically or run commands.
  12. Click Create .

Editing a Custom Runtime Environment

How to edit a user-added runtime environment from the cloud management console. You cannot edit the included read-only runtime environments.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Select a runtime environment in the list, then click Edit .
  5. Name . Name the runtime environment. Up to 75 alphanumeric characters are allowed.
  6. Description . Provide a relevant description.
  7. Container Registry Profile . Select an existing container registry profile.
  8. Container Image Path . Provide a container image URL. For example, https://aws_account_id.dkr.ecr.region.amazonaws.com/image:imagetag
  9. Languages . Select the scripting language for the runtime: golang, python, node.
  10. [Optional] Remove or Edit any existing Dockerfile.
    After editing a file, click Done .
  11. Click Add Dockerfile to choose a Dockerfile for the container image.
  12. Click Update .

Removing a Runtime Environment

How to remove a user-added runtime environment from the cloud management console. To complete this task, log on to the cloud management console.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
  3. Click Functions and Data Pipelines , then click the Runtime Environments tab.
  4. Select a runtime environment, click Remove , then click Delete again to confirm.
    The Runtime Environments page lists any remaining runtime environments.

Audit Trail and Log Management

Logging provides a consolidated landing page enabling you to collect, forward, and manage logs from selected Service Domains.

From Logging or System Logs > Logging or the summary page for a specific project, you can:

  • Run Log Collector on all or selected Service Domains (admins only)
  • See created log bundles, which you can then download or delete
  • Create, edit, and delete log forwarding policies to help make collection more granular and then forward those logs to the cloud
  • View alerts
  • View an audit trail of any operations performed by users in the last 30 days

You can also collect logs and stream them to a cloud profile by using the kps command line available from the Nutanix public Github channel https://github.com/nutanix/karbon-platform-services/tree/master/cli. Readme documentation available there describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Audit Trail Dashboard

Access the Audit Trail dashboard to view the most recent operations performed by users.

Karbon Platform Services maintains an audit trail of any operations performed by users in the last 30 days and displays them in an Audit Trail dashboard in the cloud management console. To display the dashboard, log on to the console, open the navigation menu, and click Audit Trail .

Click Filters to display operations by Operation Type, User Name, Resource Name, Resource Type, Time Range, and other filter types so you can narrow your search. Filter Operation Type by CREATE, UPDATE, and DELETE actions.

Running Log Collector - Kubernetes Apps

Log Collector examines the selected project application and collects logs and configuration information useful for troubleshooting issues and finding out details about an app.

Procedure

  1. Click the menu button, then click Projects .
  2. Click a project in the list.
    The project dashboard and navigation sidebar are displayed.
  3. Click Kubernetes Apps , then click an app in the list.
  4. Select the Log Bundles tab, then click Run Log Collector .
    1. To collect log bundles for every Service Domain, choose Select all Service Domains , then click Select Logs .
    2. To collect log bundles for specific Service Domains, choose them, then click Collect Logs .
      The table displays the collection status.
    • Collecting . Depending on how many Service Domains you have chosen, it can take a few minutes to collect the logs.
    • Success . All logs are collected. You can now Download them or Delete them.
    • Failed . Logs could not be collected. Click Details to show error messages relating to this collection attempt.
  5. After downloading the log bundle, you can delete it from the console.

What to do next

  • The downloaded log bundle is a gzipped TAR file. Extract (untar) it to see the collected logs; an example command follows this list.
  • You can also forward logs to the cloud based on your cloud profile. See Creating and Updating a Log Forwarding Policy.
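
The bundle file name depends on your download, so the following extraction command is only a minimal sketch with a placeholder file name:

user@host$ tar -xzf downloaded-log-bundle.tgz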

Creating and Updating a Log Forwarding Policy

Create, edit, and delete log forwarding policies to help make collection more granular and then forward those Service Domain logs to the cloud.

Before you begin

Make sure your infrastructure admin has created a cloud profile first.

Procedure

  1. Go to System Logs > Logging from Dashboard or the summary page for a specific project.
  2. Click Log Forwarding Policy > Create New Policy .
  3. Name . Name your policy.
  4. Select a Service Domain.
    1. Select All . Logs for all Service Domains are forwarded to the cloud destination.
    2. Select Select By Category to choose and apply one or more categories associated with a Service Domain. Click the plus sign ( + ) to add more categories.
    3. Select Select Individually to choose a Service Domain, then click Add Service Domain to select one or more Service Domains.
  5. Select a destination.
    1. Select Profile . Choose an existing cloud profile.
    2. Select Service . Choose a cloud service. For example, choose Cloudwatch to stream logs to Amazon CloudWatch. Other services include Kinesis and Firehose .
    3. Select Region . Enter a valid AWS region name or CloudWatch endpoint fully qualified domain name. For example, us-west-2 or monitoring.us-west-2.amazonaws.com
    4. Select Stream . Enter a log stream name.
    5. Select Groups . (CloudWatch only) Enter a Log group name.
  6. Click Done .
    The dashboard or summary page shows the policy tile.
  7. From the Logging dashboard, you can edit or delete a policy.
    1. Click Edit and change any of the fields, then click Done .
    2. To delete a policy, click Delete , then click Delete again to confirm.

Creating a Log Collector for Log Forwarding - Command Line

Create a log collector for log forwarding by using the kps command line.

Nutanix has released the kps command line on its public Github channel. https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Each sample YAML file defines a log collector. Log collectors can be:

  • Infrastructure-based: collects infrastructure-related (Service Domain) information. For use by infra admins.
  • Project-based: project-related information (applications, data pipelines, and so on). For use by project users and infra admins (with assigned projects).

See the most up-to-date sample YAML files and descriptions at https://github.com/nutanix/karbon-platform-services/tree/master/cli.

Example Usage

Create a log collector defined in a YAML file:

user@host$ kps create -f infra-logcollector-cloudwatch.yaml

infra-logcollector-cloudwatch.yaml

This sample infrastructure log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example, AWS CloudWatch).

To enable AWS CloudWatch log streaming, you must specify awsRegion, cloudwatchStream, and cloudwatchGroup.

kind: logcollector
name: infra-log-name
type: infrastructure
destination: cloudwatch
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""

The fields in this file are as follows.

  • kind ( logcollector ). Specify the resource type.
  • name ( infra-log-name ). Specify the unique log collector name.
  • type ( infrastructure ). Log collector for infrastructure.
  • destination ( cloudwatch ). Cloud destination type.
  • cloudProfile ( cloud-profile-name ). Specify an existing Karbon Platform Services cloud profile.
  • awsRegion (for example, us-west-2 or monitoring.us-west-2.amazonaws.com). Valid AWS region name or CloudWatch endpoint fully qualified domain name.
  • cloudwatchGroup ( cloudwatch-group-name ). Log group name.
  • cloudwatchStream ( cloudwatch-stream-name ). Log stream name.
  • filterSourceCode . Specify the log conversion code.

project-logcollector.yaml

This sample project log collector collects logs for a specific tenant, then forwards them to a specific cloud profile (for example, AWS CloudWatch).

kind: logcollector
name: project-log-name
type: project
project: project-name
destination: cloud-destination type
cloudProfile: cloud-profile-name
awsRegion: us-west-2
cloudwatchGroup: cloudwatch-group-name
cloudwatchStream: cloudwatch-stream-name
filterSourceCode: ""

The fields in this file are as follows.

  • kind ( logcollector ). Specify the resource type.
  • name ( project-log-name ). Specify the unique log collector name.
  • type ( project ). Log collector for a specific project.
  • project ( project-name ). Specify the project name.
  • destination ( cloud-destination type ). Cloud destination type, such as CloudWatch.
  • cloudProfile ( cloud-profile-name ). Specify an existing Karbon Platform Services cloud profile.
  • awsRegion (for example, us-west-2 or monitoring.us-west-2.amazonaws.com). Valid AWS region name or CloudWatch endpoint fully qualified domain name.
  • cloudwatchGroup ( cloudwatch-group-name ). Log group name.
  • cloudwatchStream ( cloudwatch-stream-name ). Log stream name.
  • filterSourceCode . Specify the log conversion code.
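
As with the infrastructure example, you can create this project log collector with the kps command line. The YAML file name below is an assumption; use the name of the file where you saved the definition:

user@host$ kps create -f project-logcollector.yaml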

Stream Real-Time Application and Pipeline Logs

Real-Time Log Monitoring, built into Karbon Platform Services, lets you view application and data pipeline log messages securely in real time.

Note: Infrastructure administrators can stream logs if they have been added to a project.

Viewing the most recent log messages as they occur helps you see and troubleshoot application or data pipeline operations. Messages stream securely over an encrypted channel and are viewable only by authenticated clients (such as an existing user logged on to the Karbon Platform Services cloud platform).

The cloud management console shows the most recent log messages, up to 2 MB.

Displaying Real-Time Logs

View the most recent real-time logs for applications and data pipelines.

About this task

To complete this task, log on to the cloud management console.

Procedure

  1. Go to the Deployments page for an application or data pipeline.
    1. From the home menu, click Projects , then click a project.
    2. Click Kubernetes Apps or Functions and Data Pipelines .
    3. Click an application or data pipeline name in the table, then click Deployments .
      A table shows each Service Domain deploying the application or data pipeline.
  2. Select a Service Domain, then click View Real-time Logs .
    The window displays streaming log messages and shows the most recent 2 MB of log messages from a container associated with your data pipeline function or application, from the Service Domain.

What to do next

Real-Time Log Monitoring Console describes what you see in the terminal-style display.

Real-Time Log Monitoring Console

The Karbon Platform Services real-time log monitoring console is a terminal-style display that streams log entries as they occur.

After you do the steps in Displaying Real-Time Logs, a terminal-style window is displayed in one or more tabs. Each tab is streaming log messages and shows the most recent 2 MB of log messages from a container associated with your data pipeline function or application.

Your application or data pipeline function generates the log messages. That is, the console shows log messages that you have written into your application or function.

Figure. Real-Time Log Monitoring Console Click to enlarge One or more tabs with a terminal style display shows messages generated by your application or function.

No Logs Message

If your Karbon Platform Services Service Domain is connected and your application or function is not logging anything, the console might show a No Logs message. In this case, the message means that the application or function is idle and not generating any log messages.

Errors

You might see one or more error messages in the following cases. As a result, Real-Time Log Monitoring cannot retrieve any logs.

  • If your application, function, or other resource fails
  • Network connectivity fails or is highly intermittent between the Karbon Platform Services Service Domain or your browser and karbon.nutanix.com

Tips for Real-Time Log Monitoring for Applications

  • The first tab shows the first container associated with the application running on the Karbon Platform Services Service Domain. The console shows one tab for each container associated with the application, including a tab for each container replicated in the same application.
  • Each tab displays the application and container name as app_name : container_id and might be truncated. To see the full name on the tab, hover over it.
  • Reordering tabs is not available. Unlike your usual tabbed browser experience, you cannot reorder or move the tabs.

Tips for Real-Time Log Monitoring for Functions in Data Pipelines

  • The first tab shows the first function deployed in the data pipeline running on the Service Domain. The console shows one tab for each function deployed in the data pipeline.
  • Reordering tabs is not available. Unlike your usual tabbed browser experience, you cannot reorder or move the tabs. For data pipelines, the displayed tab order is the order of the functions (that is, scripts of transformations) in the data pipeline.
  • Duplicate function names. If you use the same function more than once in the data pipeline, you will see duplicate tabs. That is, one tab for each function instance.
  • Each tab displays the data pipeline and function name as data_pipeline_name : function_name and might be truncated. To see the full name on the tab, hover over it.

Managing API Keys

API key management simplifies authentication when you use the Karbon Platform Services API by enabling you to manage your keys from the Karbon Platform Services management console. This topic also describes API key guidelines.

As a user (infrastructure or project), you can manage up to two API keys through the Karbon Platform Services management console. After logging on to the management console, click your user name in the management console, then click Manage API Keys to create, disable, or delete these keys.

Number of API Keys Per User and Expiration
  • Each user can create and use two API keys.
  • Keys do not have an expiration date.
Deleting and Reusing API Keys
  • You can delete an API key at any time.
  • You cannot use or reuse a key after deleting it.
Creating and Securing the API Keys
  • When you create a key, the Manage API Keys dialog box displays the key.
  • When you create a key, make a copy of the key and secure it. You cannot see the key later. It is not recoverable at any time.
  • Do not share the key with anyone, including other users.

Read more about the Karbon Platform Services API at nutanix.dev. The For Karbon Platform Services Developers section describes related information and links to resources for Karbon Platform Services developers.

Using API Keys With HTTPS API Requests

Example API request using an API key.

After you create an API key, use it with your Karbon Platform Services API HTTPS Authorization requests. In the request, specify an Authorization header including Bearer and the key you generated and copied from the Karbon Platform Services management console.

For example, here is a Node JS code snippet that sets the Authorization header:

var http = require("https");

var options = {
  "method": "GET",
  "hostname": "karbon.nutanix.com",
  "port": null,
  "path": "/v1.0/applications",
  "headers": {
    "authorization": "Bearer API_key"
  }
};

// Send the request and print the response body.
var req = http.request(options, function (res) {
  var chunks = [];
  res.on("data", function (chunk) { chunks.push(chunk); });
  res.on("end", function () { console.log(Buffer.concat(chunks).toString()); });
});
req.end();
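
Any HTTPS client can send the same request. As a rough equivalent of the snippet above using curl (replace API_key with the key you copied from the management console):

user@host$ curl -H "Authorization: Bearer API_key" https://karbon.nutanix.com/v1.0/applications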

Creating API Keys

Create one or more API keys through the Karbon Platform Services management console.

Procedure

  1. After logging on to the management console, click your user name, then click Manage API Keys .
    Figure. Manage API Keys Click to enlarge Click your profile name, then click Manage API keys.

  2. Click Create API Key .
    Manage API Keys shows that the key is created. Keys do not have an expiration date.
  3. Click Copy to Clipboard .
    • Make a copy of the API key and secure it. It is not recoverable at any time.
    • Click View to see the key value. You cannot see the key later.
    • Copy to Clipboard is a mandatory step. The Close button is inactive until you click Copy to Clipboard .
  4. To create another key, click Create API Key and Copy to Clipboard .
    Make a copy of the key and secure it. It is not recoverable at any time. Click View to see the key value. You cannot see the key later.
    Note that Status for each key is Active .
    Figure. Key Creation Click to enlarge The API keys show as Active.

  5. Click Close .

Disabling, Enabling, or Deleting API Keys

Disable, enable, or delete API keys through the Karbon Platform Services management console.

Procedure

  1. After logging on to the management console, click your user name, then click Manage API Keys .
    Figure. Manage API Keys Click to enlarge Click your profile name, then click Manage API keys.

    Manage API Keys shows information about each API key: Create Time, Last Accessed, Status (Active or Disabled).
    Depending on the current state of the API key, you can disable, enable, or delete it.
    Figure. API Keys Click to enlarge The dialog shows API Key status as active or disabled.

  2. Disable an enabled API key.
    1. Click Disable .
      Note that Status for the API key is now Disabled .
    2. Click Close .
  3. Enable a disabled API key.
    1. Click Enable .
      Note that Status for the API key is now Active .
    2. Click Close .
  4. Delete an API Key.
    1. Click Delete , then click Delete again to confirm.
    2. Click Close .
      You cannot use or reuse a key after deleting it. Any client authenticating with this now-deleted API key will need a new key to authenticate again.

Alerts

The Alerts page and the Alerts Dashboard panel show any alerts triggered by Karbon Platform Services depending on your role.

  • Infrastructure Administrator . All alerts associated with Projects, Apps & Data, and Infrastructure.
  • Project User . All alerts associated with Projects and Apps & Data.

To see alert details:

  • On the Alerts page, click the alert Description to see details about that alert.
  • From the Alerts Dashboard panel, click View Details , then click the alert Description to see details.

Click Filters to sort the alerts by:

  • Severity . Critical or Warning.
  • Time Range . Select All, Last 1 Hour, Last 1 Day, or Last 1 Week.

An Alert link is available on each Apps & Data and Infrastructure page.

Figure. Sorting and Filtering Alerts Click to enlarge Filters flyout menu to sort alerts by severity

Naming Guidelines

Data Pipelines and Functions
  • When you create, clone, or edit a function, you can define one or more parameters. When you create a data pipeline, you define values for the parameters when you specify the function in the pipeline.
  • Data pipelines can share functions, but you can specify unique parameter values for the function in each data pipeline.
Service Domain, Data Source, and Data Pipeline Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed. For example, my.service-domain is a valid Service Domain name.
  • Maximum length of 63 characters
Data Source Topic and Field Naming
Topic and field naming must be unique across the same data source types. You are allowed to duplicate names across different data source types. For example, an MQTT topic name and RTSP protocol stream name can share /temperature/frontroom.
  • An MQTT topic name must be unique when creating an MQTT data source. You cannot duplicate or reuse an MQTT topic name for multiple MQTT data sources.
  • An RTSP protocol stream name must be unique when creating an RTSP data source. You cannot duplicate or reuse an RTSP protocol stream name for multiple RTSP data sources.
  • A GigE Vision data source name must be unique when creating a GigE Vision data source. You cannot duplicate or reuse a GigE Vision data source name for multiple GigE Vision data sources.
Container Registry Profile Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed.
  • Maximum length of 200 characters
All Other Resource Naming
  • Starts and ends with a lowercase alphanumeric character
  • Dash (-) and dot (.) characters are allowed. For example, my.service-domain is a valid Service Domain name
  • Maximum length of 200 characters

For Karbon Platform Services Developers

Information and links to resources for Karbon Platform Services developers.

This section contains information about Karbon Platform Services development.

API Reference
Go to https://www.nutanix.dev/api-reference/ for API references, code samples, blogs, and more for Karbon Platform Services and other Nutanix products.
Karbon Platform Services Public Github Repository

The Karbon Platform Services public Github repository https://github.com/nutanix/karbon-platform-services includes sample application YAML files, instructions describing external client access to services, Karbon Platform Services kps CLI samples, and so on.

Karbon Platform Services Command Line

Nutanix has released the kps command line on its public Github repository. https://github.com/nutanix/karbon-platform-services/tree/master/cli describes how to install the command line and includes sample YAML files for use with applications, data pipelines, data sources, and functions.

Ingress Controller Support - Command Line

Karbon Platform Services supports the Traefik open source router as the default Kubernetes Ingress controller and NGINX (ingress-nginx) as a Kubernetes Ingress controller. For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/ingress.

Kafka Data Service Support - Command Line

Kafka is available as a data service through your Service Domain. Clients can manage, publish, and subscribe to topics using the native Kafka protocol. Data pipelines can use Kafka as a destination. Applications can also use a Kafka client of choice to access the Kafka data service.

For full details, see https://github.com/nutanix/karbon-platform-services/tree/master/services/kafka. See also Kafka as a Service.
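
For illustration only, a client that speaks the native Kafka protocol could publish test messages to a topic with the standard Apache Kafka console tools. The broker address and topic name below are placeholders, not values supplied by this guide; use the endpoint and topic exposed by your Kafka service:

user@host$ kafka-console-producer.sh --bootstrap-server kafka-broker.example.com:9092 --topic sensor-data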

Free Karbon Platform Services Trial
Sign up for a free Karbon Platform Services Trial at https://www.nutanix.com/products/iot/try.

Set Privileged Mode For Applications

Enable a container application to run with elevated privileges.

Important: Privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.
Note: To enable this feature, contact Nutanix Support so they can set your profile accordingly. In your profile, this feature is disabled by default. After privileged mode is enabled in your profile, you can configure a container application to run with elevated privileges.
Caution: Nutanix recommends that you use this feature sparingly and only when necessary. To keep Karbon Platform Services secure by design, as a general rule, Nutanix also recommends that you run your applications at user level.

For information about installing the kps command line, see For Karbon Platform Services Developers.

Karbon Platform Services enables you to develop an application which requires elevated privileges to run successfully. By using the kps command line, you can set your Service Domain to enable an application running in a container to run in privileged mode.

Setting Privileged Mode

Configure your Service Domain to enable a container application to run with elevated privileges.

Before you begin

Important: Privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.
  • Ensure that you have created an infrastructure administrator or project user with associated resources (Service Domain, project, applications, and so on).
  • To enable this feature, contact Nutanix Support so they can set your profile accordingly. In your profile, this feature is disabled by default. After privileged mode is enabled in your profile, you can configure a container application to run with elevated privileges.
Caution: Nutanix recommends that you use this feature sparingly and only when necessary. To keep Karbon Platform Services secure by design, as a general rule, Nutanix also recommends that you run your applications at user level.

Procedure

  1. Install the kps command line as described at GitHub: https://github.com/nutanix/karbon-platform-services/tree/master/cli
  2. From your terminal window, create a Karbon Platform Services context to associate with an existing Karbon Platform Services user.
    user@host$  kps config create-context context_name --email user_email_address --password password
    1. context_name . A context name to associate with the specified user_email_address and related Karbon Platform Services resources.
    2. user_email_address . Email address of an existing Karbon Platform Services user. This email address can be a My Nutanix account address or local user address.
    3. password . Password for the Karbon Platform Services user.
  3. Verify that the context is created.
    user@host$ kps config get-contexts
    The terminal displays CURRENT CONTEXT as the context_name . It also displays the NAME, URL, TENANT-ID, and Karbon Platform Services email and password.
  4. Get the Service Domain names (displayed in YAML format) where the container application needs to run with elevated privileges.
    user@host$ kps get svcdomain -o yaml
  5. Set privileged mode for a specific Service Domain svc_domain_name .
    user@host$ kps update svcdomain svc_domain_name --set-privileged
    Successfully updated Service Domain: svc_domain_name
  6. Verify privileged mode is set to true .
    user@host$ kps get svcdomain svc_domain_name -o yaml
    kind: edge
    name: svc_domain_name
    
    connected: true
    .
    .
    .
    profile: 
       privileged: true 
       enableSSH: true 
    effectiveProfile: 
       privileged: true
       enableSSH: true
    effectiveProfile with privileged set to true indicates that Nutanix Support has enabled this feature. If the setting is false , contact Nutanix Support to enable this feature. In this example, Nutanix has also enabled SSH access to this Service Domain (see Secure Shell (SSH) Access to Service Domains in the Karbon Platform Services Administration Guide ).

What to do next

See Using Privileged Mode with an Application (Example).

Using Privileged Mode with an Application (Example)

Important: Privileged mode is deprecated. Nutanix plans to remove this feature in an upcoming release.

After enabling privileged mode as described in Setting Privileged Mode, elevate the privilege of the application itself. This sample enables USB device access for an application running in a container on an elevated Service Domain.

YAML Snippet, Sample Privileged Mode USB Device Access Application

Add a tag similar to the following in the Deployment section in your application YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
    sherlock.nutanix.com/privileged: "true"

Full YAML File, Sample Privileged Mode USB Device Access Application


apiVersion: v1
kind: ConfigMap
metadata:
  name: usb-scripts
data:
  entrypoint.sh: |-
    apk add python3
    apk add libusb
    pip3 install pyusb
    echo Read from USB keyboard
    python3 read-usb-keyboard.py
  read-usb-keyboard.py: |-
    import usb.core
    import usb.util
    import time

    USB_IF      = 0     # Interface
    USB_TIMEOUT = 10000 # Timeout in MS

    USB_VENDOR  = 0x627
    USB_PRODUCT = 0x1

    # Find keyboard
    dev = usb.core.find(idVendor=USB_VENDOR, idProduct=USB_PRODUCT)
    endpoint = dev[0][(0,0)][0]
    try:
      dev.detach_kernel_driver(USB_IF)
    except Exception as err:
      print(err)
    usb.util.claim_interface(dev, USB_IF)
    while True:
      try:
        control = dev.read(endpoint.bEndpointAddress, endpoint.wMaxPacketSize, USB_TIMEOUT)
        print(control)
      except Exception as err:
        print(err)
      time.sleep(0.01)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usb
  annotations:
        sherlock.nutanix.com/privileged: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usb
  template:
    metadata:
      labels:
        app: usb
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: alpine
        image: alpine
        volumeMounts:
        - name: scripts
          mountPath: /scripts
        command:
        - sh
        - -c 
        - cd /scripts && ./entrypoint.sh
      volumes:
      - name: scripts
        configMap:
          name: usb-scripts
          defaultMode: 0766
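
Assuming the YAML above is saved locally (the file name here is hypothetical) and that the kps CLI in your release accepts application YAML as it does for the other resource samples in the repository, you might create the application from the command line:

user@host$ kps create -f usb-privileged-app.yaml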

Configure Service Domain Environment Variables

Define environment variables for an individual Service Domain. After defining them, any Kubernetes app that specifies that Service Domain can access them as part of a container spec in the app YAML.

As an infrastructure administrator, you can set environment variables and associated values for each Service Domain, which are available for use in Kubernetes apps. For example:

  • You can provide environment variables and values by using a Helm chart you upload when you create an app. See Creating an Application and Helm Chart Support and Requirements.
  • You can also set specific environment variables per Service Domain by updating a Service Domain in the cloud management console or by using the kps update svcdomain command line.
  • See also Privileged Kubernetes Apps.

As a project user, you can then specify these per-Service Domain variables set by the infra admin in your app. If you do not define the variable in your app YAML file but pass it as a variable to your app, Karbon Platform Services can inject the value set for the Service Domain.

Setting and Clearing Service Domain Environment Variables - Command Line

How to set environment variables for a Service Domain.

Before you begin

You must be an infrastructure admin to set variables and values.

Procedure

  1. Install the kps command line as described at GitHub: https://github.com/nutanix/karbon-platform-services/tree/master/cli
  2. From your terminal window, create a Karbon Platform Services context to associate with an existing infra admin user.
    user@host$  kps config create-context context_name --email user_email_address --password password
    1. context_name . A context name to associate with the specified user_email_address and related Karbon Platform Services resources.
    2. user_email_address . Email address of an existing Karbon Platform Services user. This email address can be a My Nutanix account address or local user address.
    3. password . Password for the Karbon Platform Services user.
  3. Verify that the context is created.
    user@host$ kps config get-contexts
    The terminal displays CURRENT CONTEXT as the context_name . It also displays the NAME, URL, TENANT-ID, and Karbon Platform Services email and password.
  4. Get the Service Domain names (displayed in YAML format) to find the Service Domain where you want to set the environment variables.
    user@host$ kps get svcdomain -o yaml
  5. For a Service Domain named my-svc-domain , for example, set the Service Domain environment variable. In this example, set a secret variable named SD_PASSWORD with a value of passwd1234 .
    user@host$ kps update svcdomain my-svc-domain --set-env '{"SD_PASSWORD":"passwd1234"}'
  6. Verify the changes.
    user@host$ kps get svcdomain my-svc-domain -o yaml
    kind: edge
    name: my-svc-domain
    connected: true
    .
    .
    .
    env: '{"SD_PASSWORD": "passwd1234"}'
    The Service Domain is updated. Any infra admin or project user can now deploy an app to this Service Domain and refer to the secret by using the variable $(SD_PASSWORD) .
  7. You can continue to add environment variables by using the kps update svcdomain my-svc-domain --set-env '{" variable_name ": " variable_value "}' command.
  8. You can also clear one or all variables for Service Domain svc_domain_name .
    1. To clear (unset) all environment variables:
      user@host$ kps update svcdomain svc_domain_name --unset-env
    2. To clear (unset) a specific environment variable:
      user@host$ kps update svcdomain svc_domain_name --unset-env '{"variable_name":"variable_value"}'
  9. For an app to pick up updated variables, restart it. See also Deploying and Undeploying a Kubernetes Application.

What to do next

Using Service Domain Environment Variables - Example

Using Service Domain Environment Variables - Example

Example: how to use existing environment variables for a Service Domain in application YAML.

About this task

  • You must be an infrastructure admin to set variables and values. See Setting and Clearing Service Domain Environment Variables - Command Line.
  • In this example, an infra admin has created these environment variables: a Kafka endpoint that requires a secret (authentication) to authorize the Kafka broker.
  • As a project user, you can then specify these per-Service Domain variables set by the infra admin in your app. If you do not define the variable in your app YAML file but pass it as a variable to your app, Karbon Platform Services can inject the value set for the Service Domain.

Procedure

  1. In your app YAML, add a container snippet similar to the following.
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-deployment
      labels:
        app: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: some.container.registry.com/myapp:1.7.9
            ports:
            - containerPort: 80
            env:
            - name: KAFKA_ENDPOINT
              value: some.kafka.endpoint
            - name: KAFKA_KEY
              value: placeholder
            command:
            - sh
            - -c
            - "exec node index.js $(KAFKA_KEY)"
    
  2. Use this app YAML snippet when you create a Kubernetes app in Karbon Platform Services as described in Kubernetes Apps.

Karbon Platform Services Terminology

Category

Logically grouped Service Domain, data sources, and other items. Applying a category to an entity applies any values and attributes associated with the category to the entity.

Cloud Connector

Built-in Karbon Platform Services platform feature to publish data to a public cloud like Amazon Web Services or Google Cloud Platform. Requires a customer-owned secured public cloud account and configuration in the Karbon Platform Services management console.

Cloud Profile

Cloud provider service account (Amazon Web Services, Google Cloud Platform, and so on) where acquired data is transmitted for further processing.

Container Registry Profile

Credentials and location of the Docker container registry hosted on a cloud provider service account. Can also be an existing cloud profile.

Data Pipeline

Path for data that includes input, processing, and output blocks. Enables you to process and transform captured data for further consumption or processing.

  • Input. An existing data stream or data source, identified according to a Category
  • Transformation. Code block such as a script defined in a Function to process or transform input data.
  • Output. A destination for data. Publish data to the cloud or cloud service (such as AWS Simple Queue Service) at the edge.
Data Service

Data service such as Kafka as a Service or Real-Time Stream Processing as a Service.

Data Source

A collection of sensors, gateways, or other input devices to associate with a node or Service Domain (previously known as an edge). Enables you to manage and monitor sensor integration and connectivity.

A Service Domain minimally consists of a node (also known as an edge device) and a data source. Any location (hospital, parking lot, retail store, oil rig, factory floor, and so on) where sensors or other input devices are installed and collecting data. Typical sensors measure (temperature, pressure, audio, and so on) or stream (for example, an IP-connected video camera).

Functions

Code used to perform one or more tasks. A script can be as simple as text processing code or it could be advanced code implementing artificial intelligence, using popular machine learning frameworks like Tensorflow.

Infrastructure Administrator

User who creates and administers most aspects of Karbon Platform Services. The infrastructure administrator provisions physical resources, Service Domains, data sources and public cloud profiles. This administrator allocates these resources by creating projects and assigning project users to them.

Project

A collection of infrastructure (Service Domain, data source, project users) plus code and data (Kubernetes apps, data pipelines, functions, run-time environments), created by the infrastructure administrator for use by project users.

Project User

User who views and uses projects created by the infrastructure administrator. This user can view everything that an infrastructure administrator assigns to a project. Project users can utilize physical resources, as well as deploy and monitor data pipelines and applications. This user has project-specific CRUD permissions: the project user can create, read, update, and delete assigned applications, scripts, data pipelines, and other project users.

Real-time Data Stream
  1. Data Pipeline output endpoint type with a Service Domain as an existing destination.
  2. An existing data pipeline real-time data stream that is used as the input to another data pipeline.
Run-time Environment

A run-time environment is a command execution environment to run applications written in a particular language or associated with a Docker registry or file.

Service Domain

Intelligent Platform as a Service (PaaS) providing the Karbon Platform Services Service Domain infrastructure (consisting of a full software stack combined with a hardware device). It enables customers to deploy intelligent applications (powered by AI/artificial intelligence) to process and transform data ingested by sensors. This data can be published selectively to public clouds.

Karbon Platform Services Management Console

Browser-based console where you can manage the Karbon Platform Services platform and related infrastructure, depending on your role (infrastructure administrator or project user).

Karbon Platform Services

Software as a Service (SaaS)/Platform as a Service (PaaS) based management platform and cloud IoT services. Includes the Karbon Platform Services management console.

Version

License Manager Guide

Last updated: 2022-11-29

Welcome to License Manager

The Nutanix corporate web site includes up-to-date information about AOS software editions .

License Manager provides Licensing as a Service (LaaS) by integrating the Nutanix Support portal Licensing page with licensing management and agent software residing on Prism Element and Prism Central clusters. Unlike previous license schemes and work flows that were dependent on specific Nutanix software releases, License Manager is an independent software service residing in your cluster software. You can update it independently from Nutanix software such as AOS, Prism Central, Nutanix Cluster Check, and so on.

License Manager provides these features and benefits.

  • Simplified upgrade path. Upgradeable through Life Cycle Manager (LCM), which enables Nutanix to regularly introduce licensing features, functions, and fixes. Upgrades through LCM help ensure that your cluster is running the latest licensing agent logic.
  • Streamlined web console interface for Prism Element and Prism Central. After licensing unlicensed clusters and adding new software products through the Nutanix Support Portal, the web console provides a single 1-click control plane for most work flows. The web console only displays your licensed products and their current status.
  • Simplified 1-click work flow. Invisible integration with the Nutanix Support Portal helps simplify the most common licensing work flows and use cases.
  • Consistent experience. License Manager works for all licensable products that are available for Prism Element and Prism Central.

Licensing Management Options

License Manager provides more than one way to manage your licenses, depending on your preference and cluster deployment. Except for dark-site clusters, where AOS and Prism Central clusters are not connected to the Internet, these options require that your cluster is connected to the Internet.

Manage Licenses with License Manager and 1-Click Licensing

This feature is not available to dark site clusters, which are not connected to the Internet.

See Enable 1-Click Licensing and Manage Licenses with 1-Click Licensing.

Nutanix recommends that you configure and enable 1-click licensing. 1-click licensing simplifies license and add-on management by integrating the licensing work flow into a single interface in the web console. Once you enable this feature, you can perform most licensing tasks from the web console. It is disabled by default.

Depending on the product license you purchase, you apply it through the Prism Element or Prism Central web console. See Prism Element Cluster Licensing or Prism Central License Categories.

Manage Licenses with License Manager Without Enabling 1-Click Licensing (3-Step Licensing)

This feature is not available to dark site clusters, which are not connected to the Internet.

See Manage Licenses with Update License (3-Step Licensing).

After you license your cluster, the web console Licensing page allows you to manage your license tier by upgrading, downgrading, or otherwise updating a license. If you have not enabled 1-click licensing and want to use 3-step licensing, the Licensing page includes an Update License button.

Manage Licenses For Dark-Site Clusters
For legacy licenses (that is, licenses other than Nutanix cloud platform package licenses), see Manage Licenses for Dark Site Clusters (Legacy License Key).

For cloud platform package licenses, see Manage Licenses for Dark Site Clusters (Cloud Platform License Key).

Use these procedures if your dark site cluster is not connected to the Internet (that is, your cluster is deployed at a dark site). To enter dark site cluster information at the Nutanix Support Portal, these procedures require you to use a web browser from a machine connected to the Internet. If you do not have Internet access, you cannot use these procedures.

Nutanix Support Portal Licenses Page

This topic assumes that you already have user name and password credentials for the Nutanix Support portal.

After you log on to the Nutanix Support portal at https://portal.nutanix.com, click the Licenses link on the portal home page or the hamburger menu available from any page. Licenses provides access to these licensing landing pages:

  • Summary . Displays a license summary page with panes (widgets) for each type of license that you have purchased. Each pane specifies the tier type, usage percentage (how many you have used out of your total availability), and the metric for each type of license. Click the Manage Licenses button (top of page) to get started with new licenses or administer existing licenses.
  • License Inventory . Includes the following two license category pages.
    • Active Licenses . Displays a table of active licenses for your account (default view for License Inventory ). Active licenses are purchased licenses that are usable currently. Entry information in the table varies by tab and includes license ID, tier, class, expiration date, and other information. If you are using tags, the portal displays tag names on the page, which you can use to Filter licenses. To download a comma-separated values file containing the details from the table, click Download all as CSV . You can also select one or more specific entries to download only the selected cluster license details. The table includes the following tabs that filter the list by status:
      • All : Lists all active licenses in your inventory.
      • Available : Lists licenses that are currently available to be provisioned to a cluster.
      • Applied : Lists licenses that are provisioned to an active cluster.
      • Upgrade : Lists purchased upgrade licenses. An upgrade license enables you to activate one or more features from a higher licensing tier without having to purchase completely new licenses. That is, you upgrade your existing unexpired lower tier license (for example, AOS Pro) with an upgrade license (AOS Ultimate). For each upgrade license, you must have an existing lower tier license to activate the upgrade license. As with future licenses, you cannot apply an upgrade license to a cluster until the Start Date has passed. See Upgrade Licenses.
      • Reserved (Nutanix Cloud Clusters only): Lists the licenses that you have partially or fully reserved for Nutanix Cloud Clusters. You can download the licensing details as a CSV file and manage the license reservations. You can also specify the capacity allocation for Nutanix Cloud Clusters (see Reserve Licenses for Nutanix Cloud Clusters).
      • Expiring : Lists licenses that will be expiring soon.
      • Future : Lists licenses that start at a future date and cannot be applied to clusters until the start date is in the past.
    • Inactive Licenses . Displays a table of inactive licenses for your account. Inactive licenses are purchased licenses that have either expired or were decommissioned. Entry information includes license ID, expiration date, days since expiry, quantity, tier, class, PO number, and tag.
  • Clusters . Includes one or more of the following category pages depending on your account type. To download a comma-separated values file containing the details from any of the pages, click Download all as CSV . Click the Cluster UUID link to see cluster license details.
    • Licensed Clusters . Displays a table of licensed clusters including the cluster name, cluster UUID, license tier, and license metric.
    • Cloud Clusters . Displays a table of licensed Nutanix Cloud Clusters including the cluster name, cluster UUID, billing mode, and status.
    • License with file . Displays a table of clusters licensed with a license summary file (LSF) including the cluster name, cluster UUID, license tier, and license metric.
    • License with key . Displays a table of (usually dark site) clusters licensed with a license key including the cluster name, cluster UUID, license key, and license tags.
    • Cluster Expirations . Displays a table that lists when a license will expire. Entries include the expiry date, days to expiration, cluster name, and cluster UUID. To filter the table, select the duration from the pull-down list in the View all clusters expiring in the next XX Days field.

Displaying License Features, Details, and Status

About this task

The most current information about your licenses is available from the Prism Element (PE) or Prism Central (PC) web console. It is also available at the Nutanix Support Portal License page. You can view information about license levels, expiration dates, and any free license inventory (that is, unassigned available licenses).

From a PE Controller VM (CVM) or PC VM, you can also display license details associated with your dark site license key.

Procedure

  1. In the web console, click the gear icon, and select Licensing .
  2. Click the license tile to show the feature list for that license type. For example:
    Figure. License Features Click to enlarge List of features for your license

    Scroll down to see other features, then click Back to return to the licensing page.
  3. To show a list of all licenses and their details, click license details .
    Figure. License Details Click to enlarge List of features for your license

    Scroll down to see all licenses, then click Back to return to the licensing page.

Showing License Details Associated with Your Dark Site License Key

About this task

Use the command line license_key_util script to see license key information.

Procedure

  1. Log on to the dark-site PE CVM or PC VM with SSH as the nutanix user.
  2. Use the show option to display the licenses associated with the license key.
    nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key
    Licenses Allowed:
    AOS
    Add-Ons:
    Software_Encryption
    File
    To display help for the command, type ~/ncc/bin/license_key_util help.

Upgrading License Manager

License Manager is independent software and is therefore updated independently from Nutanix software such as AOS, Prism Central, Nutanix Cluster Check, and so on.

When upgrades are available, you can upgrade License Manager through Life Cycle Manager (LCM). LCM enables Nutanix to regularly introduce licensing features, functions, and fixes. Upgrades through LCM help ensure that your cluster is running the latest licensing agent logic.

Nutanix has also designed License Manager so that cluster node restarts are not required.

The Life Cycle Manager Guide describes how to perform an inventory and, if a new version is available, how to update a component like License Manager.

What Version Of License Manager Do I Have?

Log on to the Prism Element or Prism Central web console and do one of the following.

  1. Click the gear icon, then select Licensing in the Settings panel. The License Manager version is displayed below the gear icon. For example, LM.2021.7.2.
    Figure. License Manager Version Click to enlarge License Manager Version

  2. From the Home menu, click LCM > Inventory . Select View By > Component . The list shows the version. For example, LM.2021.7.2.

About Licenses

Products That Do Not Require a License

Not all Nutanix software products require a license. Nutanix provides these products and their features without requiring any licensing action on your part:

  • Nutanix AHV
  • Karbon (enabled through Prism Central)
  • Prism Starter
  • Framework and utility software such as Life Cycle Manager (LCM), X-Ray, and Move
  • Foundation

Prism Element License Categories

See these Nutanix corporate web sites for the latest information about software licensing and the latest available platforms.

  • Nutanix Software Editions and AOS Software Licensing Models
  • Nutanix Hardware Platforms and Dynamic Specsheet

Nutanix generally categorizes licenses as follows.

Software-only
  • Platforms are qualified by Nutanix (NX Series) or by approved third-party vendors, such as HPE, Cisco, Dell, and others.
  • No license is embedded or delivered as a default, as you purchase hardware and licenses separately.
  • You can purchase AOS Starter, Pro, or Ultimate, and Add-On licenses.
  • Your purchased license metric is based on capacity based licensing (CBL), remote office/back office (ROBO), or virtual desktop infrastructure (VDI).
  • Licenses are transferable.
Third-party OEM
  • Includes OEM platforms qualified by Dell (Nutanix on Dell XC Series appliances and XC Core nodes), Lenovo (Lenovo HX Series Certified Nodes), and other vendors.
  • No license is embedded or delivered as a default, as you purchase hardware and licenses separately.
  • You can purchase AOS Starter, Pro, or Ultimate, and Add-On licenses.
  • Your purchased license metric is based on capacity based licensing (CBL), remote office/back office (ROBO), or virtual desktop infrastructure (VDI).
  • Licenses are transferable.

Prism Element Cluster Licensing

Licenses you can apply through the Prism Element web console include:

AOS Starter, Pro, and Ultimate Licenses

See Prism Element License Categories.

Software-only and third-party OEM platforms require you to download and install a Starter license file that you have purchased.

For all platforms: the AOS Pro and Ultimate license levels require you to install this license on your cluster. When you upgrade a license or add nodes or clusters to your environment, you must install the license.

If you enable 1-click licensing as described in Enable 1-Click Licensing, you can apply the license through the Prism Element web console without needing to log on to the support portal.

If you do not want to enable this feature, you can manage licenses as described in Manage Licenses with Update License (3-Step Licensing). This licensing workflow is also appropriate for dark-site (non-Internet connected or restricted connection) deployments.

Note: Legacy-licensed life of device Nutanix NX and OEM AOS Appliance platforms include an embedded Starter license as part of your purchase. It does not have to be downloaded and installed or otherwise applied to your cluster.
AOS Remote and Branch Office (ROBO) and Virtual Desktop Infrastructure per User (VDI) Licenses

The Nutanix corporate web site includes the latest information about these license models.

AOS Capacity-Based Licenses

Capacity-based licensing is the Nutanix licensing model where you purchase and apply licenses based on cluster attributes. Cluster attributes include the number of raw CPU cores and total raw Flash drive capacity in tebibytes (TiBs). See AOS Capacity-Based Licensing.

Add-Ons

You can add individual features known as add-ons to your existing license feature set. When Nutanix makes add-ons available, you can add them to your existing license, depending on the license level and add-ons available for that license.

See Add-On Licenses.

Prism Central License Categories

As the control console for managing multiple clusters, Prism (also known as Prism Central) consists of three license tiers. If you enable 1-click licensing as described in Enable 1-Click Licensing, you can apply licenses through the Prism web console without needing to log on to the support portal.

If you do not want to enable this feature, you can manage licenses as described in Manage Licenses with Update License (3-Step Licensing). This licensing workflow is also appropriate for dark-site (non-Internet connected or restricted connection) deployments.

Licenses that are available or that you can apply through the Prism web console include:

Prism Starter

Default free Prism license, which enables you to register and manage multiple Prism Element clusters, upgrade Prism with 1-click through Life Cycle Manager (LCM), and monitor and troubleshoot managed clusters. You do not have to explicitly apply the Prism Starter license tier, which also never expires.

Prism Pro

Includes all Prism Starter features plus customizable dashboards, capacity planning and analysis tools, advanced search capabilities, low-code/no-code automation, and reporting.

Prism Ultimate

Prism Ultimate adds application discovery and monitoring, budgeting/chargeback and cost metering for resources, and a SQL Server monitoring content pack. Every Prism Central deployment includes a 90-day trial version of this license tier.

Add-Ons

You can add individual features known as add-ons to your existing license feature set. When Nutanix makes add-ons available, you can add them to your existing license, depending on the license level and add-ons available for that license.

See Add-On Licenses.

Cluster-based Licensing

Prism Central cluster-based licensing allows you to choose the level of data to collect from an individual Prism Element cluster managed by your Prism Central deployment. It also lets you choose the related features you can implement for a cluster depending on the applied Prism Central license tier. You can collect data even if metering types (capacity [cores] and nodes) are different for each node in a Prism Element cluster. See Prism Central Cluster-Based Licensing for more details.

AOS Capacity-Based Licensing

AOS capacity-based licensing is the Nutanix licensing model where you purchase and apply licenses based on cluster attributes. Cluster attributes include the number of raw CPU cores and raw total Flash drive capacity in tebibytes (TiBs). This licensing model helps ensure a consistent licensing experience across different platforms running Nutanix software.

Each license stores the currently licensed capacity (CPU cores/Flash TiBs). If the capacity of the cluster increases, the web console informs you that additional licensing is required.
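
For illustration only, the following Python sketch shows the general bookkeeping behind capacity-based licensing: it compares a cluster's raw CPU cores and raw Flash capacity (in TiB) against the capacity recorded in the applied licenses and reports any shortfall. The class, field, and function names are hypothetical and are not part of any Nutanix API.

  # Illustrative sketch only; names are hypothetical, not a Nutanix API.
  from dataclasses import dataclass

  @dataclass
  class LicensedCapacity:
      cpu_cores: int     # raw CPU cores covered by the applied licenses
      flash_tib: float   # raw Flash capacity (TiB) covered by the applied licenses

  @dataclass
  class ClusterCapacity:
      cpu_cores: int     # raw CPU cores present in the cluster
      flash_tib: float   # raw Flash capacity (TiB) present in the cluster

  def additional_licensing_needed(cluster: ClusterCapacity,
                                  licensed: LicensedCapacity) -> dict:
      """Return the shortfall, if any, between cluster capacity and licensed capacity."""
      return {
          "cpu_cores": max(0, cluster.cpu_cores - licensed.cpu_cores),
          "flash_tib": max(0.0, cluster.flash_tib - licensed.flash_tib),
      }

  # Example: a cluster that has grown beyond its licensed capacity.
  shortfall = additional_licensing_needed(
      ClusterCapacity(cpu_cores=128, flash_tib=92.0),
      LicensedCapacity(cpu_cores=96, flash_tib=80.0),
  )
  print(shortfall)  # {'cpu_cores': 32, 'flash_tib': 12.0}

In this sketch, any nonzero value indicates additional cores or Flash TiB that would need to be licensed, which mirrors the web console warning described above.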

Upgrade Licenses

An upgrade license enables you to upgrade your existing unexpired lower license tier to a higher license tier without having to purchase completely new stand-alone licenses. For example, you can apply an Ultimate upgrade license to your existing lower tier AOS Pro license, which then allows you to activate and use the available Ultimate features.

For each upgrade license, you must have an existing unexpired lower tier license to activate the upgrade license. As with future licenses, you cannot apply an upgrade license to a cluster until the Start Date has passed.

View Upgrade Licenses at the Nutanix Support Portal

When you log on to the Nutanix Support portal and go to Licenses > License Inventory > Upgrade Licenses , the table shows all purchased Upgrade Licenses. See also Nutanix Support Portal Licenses Page.
Figure. Nutanix Support Portal Upgrade Licenses Page Click to enlarge A web page listing upgrade license details
Click a License Id link to see the properties of the upgrade license.
Figure. Example Upgrade License Properties Click to enlarge A popup window listing license properties
  • License ID . Upgrade license ID.
  • Upgraded From License . License ID of the existing unexpired lower tier license.
  • Applied qty / Available qty . Number of licenses that you have applied to clusters and the available number of licenses.
  • Product . Software product type such as Acropolis (AOS), Prism Central, and so on.
  • Tier . The upgraded license tier. Depending on your base lower tier, Pro or Ultimate.
  • Meter . How the license usage is metered. In this example, a capacity-based license (CBL) using vCPU cores as the metric.
  • Expires . Upgrade license expiration date. Your base license most likely has a different expiration date, expiring sooner than the upgrade license. The upgrade license typically extends your license term.
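
For illustration only, the following Python sketch models an upgrade license entry with the properties listed above and the two rules described earlier: the base lower tier license must be unexpired, and the upgrade license Start Date must have passed. The class, fields, and date values are hypothetical and do not represent a Nutanix data model or API.

  # Illustrative sketch only; field names mirror the portal columns described above.
  from dataclasses import dataclass
  from datetime import date

  @dataclass
  class UpgradeLicense:
      license_id: str             # Upgrade license ID
      upgraded_from_license: str  # License ID of the existing lower tier license
      applied_qty: int            # Licenses applied to clusters
      available_qty: int          # Licenses still available
      product: str                # For example, "AOS" or "Prism Central"
      tier: str                   # Upgraded tier, for example "Ultimate"
      meter: str                  # For example, "CBL (CPU cores)"
      expires: date               # Upgrade license expiration date

      def can_apply(self, base_license_expires: date, start_date: date, today: date) -> bool:
          """An upgrade license needs an unexpired base license and a Start Date in the past."""
          return base_license_expires >= today and start_date <= today

  lic = UpgradeLicense("LIC-UPG-123", "LIC-PRO-001", applied_qty=1, available_qty=3,
                       product="AOS", tier="Ultimate", meter="CBL (CPU cores)",
                       expires=date(2024, 3, 31))
  print(lic.can_apply(base_license_expires=date(2023, 9, 30),
                      start_date=date(2023, 1, 1),
                      today=date(2023, 2, 15)))  # True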

Nutanix Calm Licensing

For more information about how to enable Calm in Prism Central, see Enabling Calm in the Prism Central Guide and Calm Administration and Operations Guide.

The Nutanix Calm license for Prism Central enables you to manage the number of VMs that are provisioned or managed by Nutanix Calm. Calm licenses are required only for VMs managed by Calm, running in either the Nutanix Enterprise Cloud or public clouds.

The most current status information about your Calm licenses is available from the Prism Central web console. It is also available at the Nutanix Support Portal.

Once Calm is enabled, Nutanix provides a free 60-day trial period to use Calm. It might take up to 30 minutes for the console to show that Calm is enabled and that your trial period has started.

Approximately 30 minutes after you enable Nutanix Calm, the Calm licensing card and licensing details show the trial expiration date. In Use status is displayed as Yes . See also License Warnings in the Web Console.

How The Nutanix Calm License VM Count is Calculated

The Nutanix Calm license VM count is a concurrent VM management limit and is linked to the application life cycle, from blueprint launch to application deletion. Consider the following:

  1. You launch a Nutanix Marketplace blueprint as an application named Example that includes three VMs. These three VMs are counted against your Calm license (that is, three VMs under Calm management).
  2. You later scale Example with two additional VMs. Now, five VMs are under Calm management.
  3. You launch another blueprint as an application named Example2 with four new VMs. Total current number of VMs now under Calm management is nine (total from Example and Example2 ).
  4. You delete Example . Total current number of VMs now under Calm management is four (from the existing active Example2 deployment).

Any VM you have created and are managing independently of Nutanix Calm is not part of the Calm license count. For example, a Windows guest OS VM that you created through the Prism Element web console or with other tools (like a cloud-init script) is not part of a Calm blueprint and is therefore not counted.

However, if you import an existing VM into an existing Calm blueprint, that VM counts toward the Calm license until you delete the application deployment. Even if you stop the VM while the application deployment is active, the VM is still considered under Calm management and remains part of the license count.
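
For illustration only, the following Python sketch models the concurrent VM count described in the walkthrough above: VMs count while their application deployment exists (including VMs you import into a blueprint, even when stopped), and they drop out of the count when the application is deleted. The class and method names are hypothetical and are not a Nutanix API.

  # Illustrative sketch only; class and method names are hypothetical.
  class CalmLicenseCounter:
      def __init__(self):
          self.apps = {}  # application name -> number of VMs under Calm management

      def launch(self, app_name, vm_count):
          """Launching a blueprint puts its VMs under Calm management."""
          self.apps[app_name] = vm_count

      def scale(self, app_name, extra_vms):
          """Scaling an application adds VMs to the count."""
          self.apps[app_name] += extra_vms

      def import_vm(self, app_name, vm_count=1):
          """A VM imported into a Calm blueprint also counts, even if it is stopped."""
          self.apps[app_name] += vm_count

      def delete(self, app_name):
          """Deleting the application releases its VMs from the count."""
          self.apps.pop(app_name, None)

      @property
      def vms_under_management(self):
          return sum(self.apps.values())

  counter = CalmLicenseCounter()
  counter.launch("Example", 3)   # 3 VMs under Calm management
  counter.scale("Example", 2)    # 5 VMs under Calm management
  counter.launch("Example2", 4)  # 9 VMs under Calm management
  counter.delete("Example")      # 4 VMs under Calm management
  print(counter.vms_under_management)  # 4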

Endpoints Associated with a Runbook

For license usage, Calm also counts each endpoint that you associate with a runbook as one VM under Calm management. This license usage count type is effective as of the Nutanix Calm 3.1 release.

Add-On Licenses

Individual products known as add-ons can be added to your existing license feature set. For more information, see Nutanix Platform Software Options.

Before You License Your Cluster

Requirements and considerations for licensing. Consider the following before you attempt to manage your licenses.

Create a Cluster

Before attempting to install a license, ensure that you have created a cluster and logged into the Prism Element or Prism Central web console at least once. You must install a license after creating a cluster for which you purchased AOS Pro or AOS Ultimate licenses. If you are using Nutanix Cloud Clusters, you can reserve licenses for those clusters (see Reserve Licenses for Nutanix Cloud Clusters).

Before You Destroy a Cluster, Reclaim Licenses By Unlicensing Your Cluster

In general, before destroying a cluster with AOS licenses (Starter / Pro / Ultimate) for software-only and third-party hardware platforms, you must reclaim your licenses by unlicensing your cluster. Unlicensing a cluster returns your purchased licenses to your inventory.

You do not need to reclaim licenses in the following cases.

  • When you use the dark site license key method to apply a key to your cluster, there is no requirement to reclaim licenses. Each license key is associated with a unique cluster UUID, so there is nothing to reclaim.
  • You do not need to reclaim legacy-licensed life of device AOS Starter licenses for Nutanix and OEM AOS Appliance platforms. These licenses are embedded and are automatically applied whenever you create a cluster. You do need to reclaim AOS Pro and Ultimate licenses for these platforms.

Can I Mix License Tiers in a Cluster?

If a cluster includes nodes with different license tiers (for example, AOS Pro and AOS Ultimate), the cluster and each node in the cluster defaults to the feature set enabled by the lowest license tier. For example, if two nodes in the cluster have AOS Ultimate licenses and two nodes in the same cluster have AOS Pro licenses, all nodes effectively have AOS Pro licenses and access to that feature set only.

Attempts to access AOS Ultimate features in this case result in a license noncompliance warning in the web console.
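
As a simple illustration of this rule (not a Nutanix API), the following Python sketch derives a cluster's effective license tier as the lowest tier present on any node.

  # Illustrative sketch only; the ordering mirrors AOS Starter < Pro < Ultimate.
  TIER_RANK = {"Starter": 0, "Pro": 1, "Ultimate": 2}

  def effective_cluster_tier(node_tiers):
      """A mixed-tier cluster falls back to the lowest tier found on any node."""
      return min(node_tiers, key=lambda tier: TIER_RANK[tier])

  # Two Ultimate nodes and two Pro nodes yield an effective Pro feature set.
  print(effective_cluster_tier(["Ultimate", "Ultimate", "Pro", "Pro"]))  # Pro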

License a Cluster and Add-On

As 1-click licensing is disabled by default, use these Update License procedures to license a cluster that is connected to the Internet. These procedures also apply if a cluster is unlicensed (newly deployed or previously unlicensed by you).

Figure. Update License Click to enlarge Update License Button

Licensing a Cluster (Internet Connected)

Use this procedure to license your cluster. These procedures also apply if a cluster is unlicensed (newly deployed or previously unlicensed by you). After you complete this procedure, you can enable 1-click licensing.

Before you begin

For this procedure, keep two browser windows open:

  • One browser window for the Prism Element (PE) or Prism Central (PC) web console
  • One browser on an Internet-connected machine for the Nutanix Support Portal

Procedure

  1. Log on to your PE or PC web console, click the gear icon, then select Licensing in the Settings panel.
  2. Click Update License , then click Download to save a cluster summary file to your local machine.
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. First Time Licensing Page Click to enlarge This picture shows the license task page in the web console.

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  4. Click License Clusters .
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
  5. Click Add Licenses in the Acropolis tile (for AOS clusters) or Prism tile (for PC clusters).
    1. Select a License Level such as Pro or Ultimate from the drop-down menu.
    2. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    3. Click Save .
      The Acropolis tile now shows the license level and other available license tiles. For example, your available add-ons like Nutanix Files are shown with an Add Licenses link, meaning you can apply that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Data Encryption add-on feature.
  6. [Optional] Before continuing, you can also license any add-ons as part of this step by clicking Add Licenses in the add-on tile.
    Depending on the add-on, you simply license it with the same steps as the Acropolis work flow above (for example, Data Encryption). For add-ons that are based on capacity like Nutanix Files, you might have to specify the disk capacity to allocate to Files. You can rebalance this add-on in the web console later if you need to make any changes.

    If you choose not to license any add-ons at this time, you can license them later. See Licensing An Add-On (Internet Connected).

  7. Click Next , review the Summary page, then click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.
  8. Back at the PE or PC web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
    The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

What to do next

After you complete this procedure, you can now enable 1-click licensing. See Enable 1-Click Licensing.

Licensing An Add-On (Internet Connected)

Use this procedure after you have licensed your cluster and you did not license add-on features at the same time, or if you later purchased add-on features after you initially licensed your cluster. This procedure describes how to do this on a cluster connected to the Internet.

About this task

  • If you did not license any add-ons as part of Licensing a Cluster (Internet Connected), use this procedure.
  • The Nutanix corporate web site includes up-to-date information about add-ons like Nutanix Files, Calm, and so on.

Before you begin

  • Ensure that you have an active My Nutanix account with valid credentials.
  • Make sure you have licensed your cluster first. See Licensing a Cluster (Internet Connected) or Licensing a Cluster (Dark Site).

Procedure

  1. Log on to your Prism Element (PE) or Prism Central (PC) web console, click the gear icon, then select Licensing in the Settings panel.
  2. Download the cluster summary file.
    1. 1-click licensing enabled: Click License with Portal , then click Download to save a cluster summary file to your local machine.
    2. 1-click licensing not enabled or disabled: Click Update License , then click Download to save a cluster summary file to your local machine.
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. 3-Step Licensing Page Click to enlarge This picture shows the 3-Step license task page in the web console.

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  4. Click License Clusters .
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    Available add-ons in your inventory like Nutanix Files are shown with an Add Licenses link, meaning you can apply that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Nutanix Calm add-on in your inventory.
  5. Click Add Licenses in the add-on tile. For example, the Data Encryption tile.
    1. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    2. Click Save .
      In this example, the Data Encryption tile now shows two links: Edit License , where you can make any changes, and Unselect License , which returns the add-on to an unlicensed state.
  6. Continue licensing any add-ons in your inventory by clicking Add Licenses in the add-on tile.
    Depending on the add-on, you simply license it with the same steps as the Data Encryption step above. Unrestricted add-ons do not have the Cluster Deployment Location? selection.

    For add-ons that are based on capacity or other metrics, such as Nutanix Files, specify the cluster disk capacity to allocate to Files. You can rebalance this add-on in the web console later if you need to make any changes, as described in Using Rebalance to Adjust Your Licenses.

  7. Click Next , review the Summary page, then click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.
  8. Back at the PE or PC web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
    The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

    Below each license tile is the 1-Click Licensing options menu, where you can select more licensing actions: Rebalance , Extend , or Unlicense . See 1-Click Licensing Actions.

Enable 1-Click Licensing

This feature is not available to dark site clusters, which are not connected to the Internet. To enable 1-click licensing, you must create an API key and download an SSL key from your My Nutanix dashboard. 1-click licensing simplifies licensing by integrating the licensing work flow into a single interface in the web console. As this feature is disabled by default, you need to enable and configure it first.

Note: Your network must allow outbound traffic to portal.nutanix.com:443 to use this feature.
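
If you want to verify that this outbound access is open before enabling the feature, a quick check such as the following Python sketch (run from any machine on the same network path; it is not a Nutanix utility) attempts a TCP connection to portal.nutanix.com on port 443.

  # Illustrative connectivity check only; not a Nutanix utility.
  import socket

  def can_reach(host="portal.nutanix.com", port=443, timeout=5):
      """Return True if a TCP connection to host:port succeeds within the timeout."""
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  print(can_reach())  # True if outbound traffic to portal.nutanix.com:443 is allowed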

If your cluster is unlicensed, make sure you have licensed it first, then enable 1-click licensing. See Licensing a Cluster (Internet Connected).

1-click licensing simplifies licensing by integrating the licensing work flow into a single control plane in the Prism Element (PE) and Prism Central (PC) web consoles. Once you enable 1-click licensing, you can perform most licensing tasks from the web console Licensing settings panel without needing to explicitly log on to the Nutanix Support Portal.

1-click license management requires you to create an API key and download an SSL key from your My Nutanix dashboard to secure communications between the Nutanix Support Portal and the PE or PC web console.

With the API and SSL keys associated with your My Nutanix account, the web console can communicate with the Nutanix Support Portal to detect any changes or updates to your cluster license status.

  • The API key is initially bound to the user that creates the key on the Nutanix Support Portal. All subsequent licensing operations are done on behalf of the user who created the key and any operations written to an audit log include information identifying this user. That is, licensing task X was performed by user Y (assuming user Y created the key).
  • The API key is also bound to the Prism user that logs on to the web console and then registers the key. For example, if user admin registers the key, any licensing task performed by admin is performed on behalf of user Y (who created the key on the Support Portal). A Prism user (such as admin) must have access to the API key to use this feature. You can use Role Mapping and/or Lightweight Directory Access Protocol to manage key access or provide an audit trail if more than one admin user is going to manage licensing.

After enabling 1-click licensing, you can also disable it.

Enabling 1-Click Licensing

This feature is not available to dark site clusters, which are not connected to the Internet. To enable 1-click licensing, first create a Licensing API key and download an SSL public key. After you do this, register both through the Prism Element or Prism Central web console. You might need to turn off any pop-up blockers in your browser to display dialog boxes.

Before you begin

  • Ensure that you have an active My Nutanix account with valid credentials.
  • Make sure you have licensed your cluster first. See Licensing a Cluster (Internet Connected) or Licensing a Cluster (Dark Site).

Procedure

  1. Create an API key and download the SSL key as described in Creating an API Key.
    Make sure you select the Licensing scope when creating the API key. If you are currently logged on to the Nutanix Support Portal, you can access the API Key page by selecting API Keys from your profile. See also API Key Management.
  2. When you create an API key, make sure you give it a unique name.
  3. Register the keys through the Prism Element or Prism Central web console.
    1. Log on to the web console, click the gear icon, then click Licensing .
    2. Click Enable 1-Click Licensing .
    3. Paste the API key in the API KEY field in the dialog box.
    4. Add the SSL public key by clicking Choose File and browsing to the public key file that you downloaded when you created the key.
    5. Click Save .
    Figure. Enable 1-click licensing Click to enlarge Enable 1-click licensing dialog

    Messages are displayed showing that 1-click licensing is enabled. You can also check related task messages on the Tasks page in the web console.

    The Licensing page now shows two buttons: Disable 1-Click Licensing , which indicates 1-click licensing is enabled, and License With Portal , which lets you upgrade license tiers and add-ons by using the manual Nutanix Support Portal 3-step licensing work flow.

    Below each license tile is the 1-Click Licensing options menu, where you can select more licensing actions: Rebalance , Extend , or Unlicense . See 1-Click Licensing Actions.

    After enabling 1-click licensing, you can also disable it .

Disabling 1-Click Licensing

This feature is not available to dark site clusters, which are not connected to the Internet. This procedure disables 1-click licensing through the web console. For security reasons, you cannot reuse your previously created API key after disabling 1-click licensing.

About this task

You might need to disable the 1-click licensing connection associated with the API and public keys. If you disable the connection as described here, you can enable it again by obtaining a new API key as described in Creating an API Key.

Procedure

  1. Log on to the web console, click the gear icon, then click Licensing .
  2. Click Disable 1-Click Licensing .
  3. Click Disable , then click Yes to confirm.
    A status message and dialog box are displayed.
  4. To close the dialog box, click Cancel .
    The Licensing page now shows two buttons: Enable 1-Click Licensing , which indicates 1-click licensing is disabled, and License With Portal , which lets you upgrade license tiers and license add-ons by using the manual Nutanix Support Portal 3-step licensing work flow.

Manage Licenses with 1-Click Licensing

After you license your cluster, 1-click licensing helps simplify licensing by integrating the licensing work flow into a single interface in the web console. As this feature is disabled by default, enable and configure it first.

This feature is not available to dark site clusters, which are not connected to the Internet.

Once you configure this feature, you can perform most tasks from the Prism web console without needing to explicitly log on to the Nutanix Support Portal.

When you open Licensing from the Prism Element or Prism Central web console for a licensed cluster, each license tile includes a drop-down menu so you can manage your licenses without leaving the web console. 1-click licensing communicates with the Nutanix Support Portal to detect any changes or updates to your cluster license status.

If you want to change your license tier by upgrading or downgrading your license, use the procedures in Upgrading or Downgrading (Changing Your License Tier).

Before You Begin
Before you begin, make sure your cluster and any add-ons are licensed.
  • See Before You License Your Cluster, License a Cluster and Add-On, and Enable 1-Click Licensing.
  • Your network must allow outbound traffic to portal.nutanix.com:443 to use this feature.

1-Click Licensing Actions

On the Licensing page in the Prism Element or Prism Central web console for a licensed cluster, each license tile includes a drop-down menu so you can manage your licenses directly from the web console.

Rebalance

If you have made changes to your cluster, choose Rebalance to help ensure your available licenses (including licensed add-ons) are applied correctly. Use Rebalance if you:

  • Have added a node and have an available license in your account.
  • Have removed one or more nodes from your cluster. Choose Rebalance to reclaim the now-unused licenses and return them to your license inventory.
  • Have moved nodes from one cluster to another.
  • Have reduced the capacity of your cluster (number of nodes, Flash storage capacity, number of licenses, and so on).

Extend

Choose Extend to extend the term of current expiring term-based licenses if you have purchased one or more extensions.

If your license has expired, you have to license the cluster as if it were unlicensed. See License a Cluster and Add-On.

Unlicense

Choose Unlicense to unlicense a cluster (including licensed add-ons) in one click. This action removes the licenses from a cluster and returns them to your license inventory. This action is sometimes referred to as reclaiming licenses.

  • Before destroying a cluster, you must unlicense your cluster to reclaim and return your licenses to your inventory.
  • You do not need to reclaim legacy-licensed life of device AOS Starter licenses for Nutanix and OEM AOS Appliance platforms. These licenses are embedded and are automatically applied whenever you create a cluster. You do need to reclaim AOS Pro and Ultimate licenses for these platforms.
  • You do need to reclaim AOS licenses (Starter / Pro / Ultimate) for software-only and third-party hardware platforms.

Using Rebalance to Adjust Your Licenses

Before you begin

If you have destroyed the cluster and did not reclaim the existing licenses by unlicensing the cluster first, contact Nutanix Support to help reclaim the licenses. For more information about how to destroy a cluster, see Destroying a Cluster in the Acropolis Advanced Administration Guide.

About this task

  • 1-Click Licensing Actions describes when to use Rebalance .
  • If License Manager does not detect any changes to your cluster after you click 1-Click Licensing > Rebalance , any subsequent dialog action buttons are grayed out or unavailable.

Procedure

  1. Log on to your Prism Element or Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  2. To rebalance Acropolis or Prism Central licenses, choose 1-Click Licensing > Rebalance from the Acropolis or Prism Central tile.
    1. If you have changed the physical capacity of your cluster (for example, added or removed nodes or changed Flash storage capacity), the rebalance task begins.
    2. If your license type is based on the number of users (such as a VDI license) or VMs (such as a ROBO license), you can adjust the number of licenses allocated. Increase or decrease the number in the dialog box and click 1-Click Licensing > Rebalance .
    License Manager contacts the Nutanix Support portal and rebalances your cluster. During this operation, web console Licensing is unavailable until the operation is completed.
  3. To rebalance add-on licenses, choose 1-Click Licensing > Rebalance from the add-on tile. For example, Files for AOS.
    1. For add-ons that are based on capacity or other metrics, such as Nutanix Files, specify the capacity to allocate. For example, to change the cluster disk capacity allocated to Files, increase or decrease the number in the dialog box and click 1-Click Licensing > Rebalance .
    License Manager contacts the Nutanix Support portal and rebalances your cluster. During this operation, web console Licensing is unavailable until the operation is completed.

Using Extend to Update Your License Term

About this task

Choose Extend to extend the term of current expiring term-based licenses if you have purchased one or more extensions. 1-Click Licensing Actions describes Extend and other actions.

If your licenses have expired, you have to license the cluster as described in License a Cluster and Add-On.

Procedure

  1. Log on to your Prism Element or Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  2. To update AOS licenses, choose 1-Click Licensing > Extend from the Acropolis tile.
  3. To update add-on licenses, choose 1-Click Licensing > Extend from the add-on tile. For example, Files for AOS.
    License Manager contacts the Nutanix Support portal. During this operation, web console Licensing is unavailable until the operation is completed. Note that the Valid Until date text under the extended tile changes to the new expiration date.

Using Unlicense to Reclaim and Return Licenses to Inventory

About this task

Choose Unlicense to unlicense your cluster (sometimes referred to as reclaiming licenses). 1-Click Licensing Actions describes when to choose Unlicense and other actions.

Perform this task for each cluster that you want to unlicense. If you unlicense Prism Central (PC), the default Prism Starter license type is applied as a result. Registered clusters other than the PC cluster remain licensed.

Procedure

  1. Log on to your Prism Element or PC web console, click the gear icon, then select Licensing in the Settings panel.
  2. To reclaim AOS or PC licenses and any add-on licenses (such as Nutanix Files, Data Encryption, and so on), choose 1-Click Licensing > Unlicense from the Acropolis or PC tile.
  3. To reclaim add-on licenses only, choose 1-Click Licensing > Unlicense from the add-on tile. For example, Files for AOS.
    License Manager contacts the Nutanix Support portal. During this operation, web console Licensing is unavailable until the operation is completed. Note that the tile text changes to No License or Starter , depending on your platform.

Applying An Upgrade License

Use this procedure when you have purchased an upgrade license. An upgrade license enables you to upgrade your existing unexpired lower license tier to a higher license tier without having to purchase completely new stand-alone licenses.

Before you begin

This procedure uses the 3-Step Licensing method. You cannot apply this license type automatically with 1-click licensing.

Procedure

  1. Log on to your Prism Element (PE) or Prism Central (PC) web console, click the gear icon, then select Licensing in the Settings panel.
  2. To save a cluster summary file to your local machine, do one of the following steps.
    1. If you have enabled 1-click licensing, click License with Portal , then click Download .
    2. If you have not enabled 1-click licensing, click Update License , then click Download .
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. 3-Step Licensing Page Click to enlarge This picture shows the 3-step license task page in the web console.

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    In the license tile, a message is displayed that you have upgrade licenses available. For example, Ultimate Upgrade licenses are available for use on this cluster .
  4. Click Modify Selection in the Acropolis (for AOS clusters) or Prism tile (for PC clusters).
    1. Select a License Level such as Pro or Ultimate from the License Tier drop-down menu.
    2. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    3. Click Save .
      The tile now shows the license level and other available license tiles. For example, your available add-ons like Nutanix Files are shown with an Add Licenses link, meaning you can apply that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Data Encryption add-on feature.
  5. Click Next and review the Summary page.
    A message is displayed that lists the upgrade licenses that you can apply to a cluster, along with a table showing license ID, tier, and so on. The Status column shows Adding for the upgrade license and No Change for any licenses that you are not upgrading.
  6. Click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close returns to the portal Licenses page with a summary of your purchased licenses, including how many you have used.
  7. Back at the PE or PC web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
    The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Upgrading or Downgrading (Changing Your License Tier)

Use License with Portal or Update License in the web console to change your license tier if you have purchased completely new stand-alone licenses. For example, to upgrade from AOS Pro to AOS Ultimate or downgrade Prism Ultimate to Prism Pro. This procedure does not apply if you purchased an upgrade license, which enables you to upgrade your existing unexpired lower license tier to a higher license tier.

Before you begin

For this procedure, keep two browser windows open:

  • One browser window for the Prism Element (PE) or Prism Central (PC) web console
  • One browser on an Internet-connected machine for the Nutanix Support Portal

Procedure

  1. Log on to your PE or PC web console, click the gear icon, then select Licensing in the Settings panel.
  2. To save a cluster summary file to your local machine, do one of the following steps.
    1. If you have enabled 1-click licensing, click License with Portal , then click Download .
    2. If you have not enabled 1-click licensing, click Update License , then click Download .
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. 3-Step Licensing Page Click to enlarge This picture shows the 3-step license task page in the web console.

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  4. If you have previously tagged your licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    The default tag is All Licenses . You can also leave this field blank. To read about tags, see Creating and Adding a Tag to Your Licenses.
  5. Click Edit License in the Acropolis (for AOS clusters) or Prism tile (for PC clusters).
    1. Select a License Level such as Pro or Ultimate from the drop-down menu.
    2. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    3. Click Save .
      The tile now shows the license level and other available license tiles. For example, your available add-ons like Nutanix Files are shown with an Add Licenses link, meaning you can apply that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Data Encryption add-on feature.
  6. Click Next , review the Summary page, then click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.
  7. Back at the PE or PC web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
    The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Manage Licenses with Update License (3-Step Licensing)

After you license your cluster and you do not plan to enable 1-click licensing, use Update License in the web console to change your license tier. For example, to upgrade from AOS Pro to AOS Ultimate or downgrade Prism Ultimate to Prism Pro.

After you license your cluster, the web console Licensing page allows you to manage your license tier by upgrading, downgrading, or otherwise updating a license. If you have not enabled 1-click licensing and want to use 3-step licensing, the Licensing page includes an Update License button.

3-Step Licensing refers to the manual licensing work flow. At a high level, it consists of the following steps, which the procedures in this section describe in more detail:

  1. For clusters connected to the Internet: In the Prism Element (PE) or Prism Central (PC) web console, download a cluster summary file, then upload it to the Nutanix Support portal.

    For dark-sites where a cluster is not connected to the Internet: In the PE or PC web console, copy the dark-site cluster summary information and then enter it at the Nutanix Support Portal Licensing page. See Manage Licenses for Dark Site Clusters (3-Step Licensing).

  2. Download a license file from the Nutanix Support portal.
  3. In the PE or PC web console, apply the license file to the Internet-connected or dark-site cluster.
Before You Begin
Before you begin, make sure your cluster and any add-ons are licensed.
  • See Before You License Your Cluster and License a Cluster and Add-On
  • Your network must allow outbound traffic to portal.nutanix.com:443 to use this feature.

Licensing Actions by Using the Update License Button

If you did not enable 1-click licensing, when you open Licensing from the Prism Element or Prism Central web console for a licensed cluster, you can use the Update License button to manage licenses.

Rebalance Your Licenses After a Cluster Change

If you have made changes to your cluster, download and apply a new license summary file (LSF) to help ensure your available licenses (including licensed add-ons) are applied correctly. Rebalance your cluster if you:

  • Have added a node and have an available license in your account.
  • Have removed one or more nodes from your cluster. Reclaim the now-unused licenses and return them to your license inventory.
  • Have moved nodes from one cluster to another.
  • Have reduced the capacity of your cluster (number of nodes, Flash storage capacity, number of licenses, and so on).

Extend the Term of Any Current Expiring Licenses

If you have purchased one or more license extensions, download and apply a new LSF to extend the term of current expiring term-based licenses.

Apply an Upgrade License

If you have purchased an upgrade license, apply it to upgrade your existing unexpired lower license tier to a higher license tier without having to purchase completely new stand-alone licenses.

Unlicense Your Cluster to Return Licenses to Your Inventory (Also Known as Reclaiming)

Unlicense a cluster (including licensed add-ons) to remove the licenses from the cluster and return them to your license inventory. This action is sometimes referred to as reclaiming licenses. You also download and apply a new LSF in this case.

  • Before destroying a cluster, you must unlicense your cluster to reclaim and return your licenses to your inventory.
  • You do not need to reclaim legacy-licensed life of device AOS Starter licenses for Nutanix and OEM AOS Appliance platforms. These licenses are embedded and are automatically applied whenever you create a cluster. You do need to reclaim AOS Pro and Ultimate licenses for these platforms.
  • You do need to reclaim AOS licenses (Starter / Pro / Ultimate) for software-only and third-party hardware platforms.

Rebalancing Licenses After a Cluster Capacity Change (Update License)

Before you begin

Use this procedure if you have not enabled 1-click licensing. Licensing Actions by Using the Update License Button describes when you should rebalance your cluster license.

Procedure

  1. On an Internet-connected machine, log on to your Prism Element (PE) or Prism Central (PC) web console, click the gear icon, then select Licensing in the Settings panel.
  2. To save a cluster summary file to your local machine, click Update License , then click Download .
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. 3-Step Licensing Page Click to enlarge This picture shows the 3-step license task page in the web console.

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    The Select Licenses page shows product license tiles. You might see this message in a license tile: Your licenses have been updated to meet your capacity change .
  4. Click Next , review the Summary page, then click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.
  5. Back at the PE or PC web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
    The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Reclaim Licenses by Unlicensing (Update License)

Before you begin

Use this procedure if you have not enabled 1-click licensing. See Licensing Actions by Using the Update License Button.

Procedure

  1. On an Internet-connected machine, log on to your Prism Element (PE) or Prism Central (PC) web console, click the gear icon, then select Licensing in the Settings panel.
  2. To save a cluster summary file to your local machine, click Update License , then click Download .
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. 3-Step Licensing Page Click to enlarge This picture shows the 3-step license task page in the web console.

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    The Select Licenses page shows product license tiles.
  4. In the Acropolis tile (for AOS clusters) or Prism tile (for PC clusters), click Unlicense , then click Unlicense again.
    This action removes cluster and add-on licenses.
  5. Click Next , review the Summary page, then click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.
  6. Back at the PE or PC web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
    The Licensing panel displays all license details for the cluster, like license state (for example, No License or Starter).

Extending Your License Term (Update License)

If you have purchased one or more license extensions, download and apply a new license summary file to extend the term of current expiring term-based licenses. This procedure includes a step for expired licenses.

About this task

Use this procedure if you have not enabled 1-click licensing. See Licensing Actions by Using the Update License Button.

Procedure

  1. On an Internet-connected machine, log on to your Prism Element (PE) or Prism Central (PC) web console, click the gear icon, then select Licensing in the Settings panel.
  2. To save a cluster summary file to your local machine, click Update License , then click Download .
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. 3-Step Licensing Page Click to enlarge This picture shows the 3-step license task page in the web console.

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    The Select Licenses page shows product license tiles.
  4. If your license has expired, complete the following substeps. Otherwise, skip this step.
    1. Click Add Licenses in the Acropolis tile (for AOS clusters) or Prism tile (for PC clusters).
    2. Select a License Level such as Pro or Ultimate from the drop-down menu.
    3. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    4. Click Save .
      The Acropolis tile now shows the license level and other available license tiles. For example, your available add-ons like Nutanix Files are shown with an Add Licenses link, meaning you can apply that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Data Encryption add-on feature.
    5. [Optional] Before continuing, you can also license any add-ons as part of this step by clicking Add Licenses or Edit License in the add-on tile.
      Depending on the add-on, you simply license it with the same steps as the Acropolis work flow above (for example, Data Encryption). For add-ons that are based on capacity like Nutanix Files, you might have to specify the disk capacity to allocate to Files. You can rebalance this add-on in the web console later if you need to make any changes.
  5. Click Next , review the Summary page, then click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.
  6. Back at the PE or PC web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
    The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Applying An Upgrade License

Use this procedure when you have purchased an upgrade license. An upgrade license enables you to upgrade your existing unexpired lower license tier to a higher license tier without having to purchase completely new stand-alone licenses.

Before you begin

This procedure uses the 3-Step Licensing method. You cannot apply this license type automatically with 1-click licensing.

Procedure

  1. Log on to your Prism Element (PE) or Prism Central (PC) web console, click the gear icon, then select Licensing in the Settings panel.
  2. To save a cluster summary file to your local machine, do one of the following steps.
    1. If you have enabled 1-click licensing, click License with Portal , then click Download .
    2. If you have not enabled 1-click licensing, click Update License , then click Download .
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. 3-Step Licensing Page Click to enlarge This picture shows the 3-step license task page in the web console.

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    In the license tile, a message is displayed that you have upgrade licenses available. For example, Ultimate Upgrade licenses are available for use on this cluster .
  4. Click Modify Selection in the Acropolis (for AOS clusters) or Prism tile (for PC clusters).
    1. Select a License Level such as Pro or Ultimate from the License Tier drop-down menu.
    2. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    3. Click Save .
      The tile now shows the license level and other available license tiles. For example, your available add-ons like Nutanix Files are shown with an Add Licenses link, meaning you can apply that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Data Encryption add-on feature.
  5. Click Next and review the Summary page.
    A message is displayed that lists the upgrade licenses that you can apply to a cluster, along with a table showing license ID, tier, and so on. The Status column shows Adding for the upgrade license and No Change for any licenses that you are not upgrading.
  6. Click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close returns to the portal Licenses page with a summary of your purchased licenses, including how many you have used.
  7. Back at the PE or PC web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
    The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Upgrading or Downgrading (Changing Your License Tier)

Use License with Portal or Update License in the web console to change your license tier if you have purchased completely new stand-alone licenses. For example, you can upgrade from AOS Pro to AOS Ultimate or downgrade from Prism Ultimate to Prism Pro. This procedure does not apply if you purchased an upgrade license, which enables you to upgrade your existing unexpired lower license tier to a higher license tier.

Before you begin

For this procedure, keep two browser windows open:

  • One browser window for the Prism Element (PE) or Prism Central (PC) web console
  • One browser on an Internet-connected machine for the Nutanix Support Portal

Procedure

  1. Log on to your PE or PC web console, click the gear icon, then select Licensing in the Settings panel.
  2. To save a cluster summary file to your local machine, do one of the following steps.
    1. If you have enabled 1-click licensing, click License with Portal , then click Download .
    2. If you have not enabled 1-click licensing, click Update License , then click Download .
    The cluster summary file is saved to your browser download folder or any folder that you choose.
    Figure. 3-Step Licensing Page. This figure shows the 3-step license task page in the web console.

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
    1. If you have previously tagged your licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
      The default tag is All Licenses . You can also leave this field blank. To read about tags, see Creating and Adding a Tag to Your Licenses.
  4. Click Edit License in the Acropolis (for AOS clusters) or Prism tile (for PC clusters).
    1. Select a License Level such as Pro or Ultimate from the drop-down menu.
    2. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    3. Click Save .
      The tile now shows the license level and other available license tiles. For example, your available add-ons like Nutanix Files are shown with an Add Licenses link, meaning you can apply that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Data Encryption add-on feature.
  5. Click Next , review the Summary page, then click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.
  6. Back at the PE or PC web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
    The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

License a Cluster and Add-On (Dark Site Legacy License Key)

Use this procedure to license your cluster if your cluster is not connected to the Internet (that is, a dark site). These procedures also apply if a cluster is unlicensed (newly deployed or previously unlicensed by you). This procedure requires you to collect only the PC or cluster UUID that you later enter at the Nutanix Support portal.
Note: To license a dark site cluster with a license key, first contact your account team (sales engineer) or Nutanix Support to enable this feature.

Use the procedure described in Licensing a Cluster (Dark Site Legacy License Key) if:

  • You have not purchased Nutanix cloud platform packages.
  • Your cluster is running a version of Nutanix Cluster Check (NCC) that supports license keys. See the NCC Release Notes version that introduces license key support.
  • Your cluster is not licensed and also not connected to the Internet (that is, a dark site).
  • You are applying a license key that you generate from the Nutanix Support portal.

If you purchased Nutanix cloud platform packages, see License a Cluster and Add-On (Dark Site Cloud Platform License Key).

Licensing a Cluster (Dark Site Legacy License Key)

Use this procedure to generate and apply a legacy license key to a cluster that is not connected to the Internet (that is, a dark site).

Before you begin

For this procedure, you must access:
  • The Prism Element (PE) or Prism Central (PC) dark-site web console
  • The Nutanix Support Portal on an Internet-connected machine

About this task

These steps apply to dark site clusters, where you are using the Nutanix Cluster Check utility script license_key_util to apply a legacy license key you obtain from the Nutanix Support portal. These procedures also apply if the dark site cluster is unlicensed (newly deployed or previously unlicensed by you) and you need to generate and apply a legacy license key.

Procedure

  1. Log on to your dark-site PE or PC web console and get the cluster UUID.
    1. Click the gear icon, then select Cluster Details in the Settings panel.
    2. Copy the Cluster UUID.
  2. On an Internet-connected machine, log on to the Nutanix Support portal and click Licenses > Summary > Manage Licenses .
  3. From License Options , select Generate Legacy License Keys .
  4. Depending on your cluster type, select Prism Element or Prism Central and do the following:
    1. Provide a unique Name for your cluster. The name cannot include spaces.
    2. Enter the unique Cluster UUID .
    3. Click the plus sign.
    If you want to license more than one cluster and have the unique cluster UUID for each cluster, repeat these steps.
  5. Click Next .
    The Select Licenses page is displayed. Here, you can:
    • Create a license tag to group your licenses or select an existing tag
    • Select all licensing required for your cluster, like AOS, PC, and any add-ons.
  6. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    The default tag is All Licenses . You can also leave this field blank.
  7. Click Add Licenses in the Acropolis tile (for AOS clusters) or Prism tile (for PC clusters).
    1. Select a License Level such as Pro or Ultimate from the drop-down menu.
    2. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    3. Click Save .
      The Acropolis tile now shows the license level and other available license tiles. For example, your available add-ons like Nutanix Files are shown with an Add Licenses link, meaning you can add that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Data Encryption add-on feature.
  8. Before continuing, you can also license any add-ons as part of this step by clicking Add Licenses in the add-on tile.
    Depending on the add-on, you simply license it with the same steps as the Acropolis work flow above (for example, Data Encryption). For add-ons that are based on capacity like Nutanix Files, you might have to specify the disk capacity to allocate to Files. You can rebalance this add-on in the web console later if you need to make any changes.
  9. Click Next , review the Summary page, then click Generate License Keys .
    The portal begins generating keys.
  10. After the portal generates the keys, you can print the resulting page for a copy of the license keys or click Download as CSV to download a comma-separated file that lists each key and the associated cluster UUID.
    For example, a key might look like this: ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL . A quick format-check sketch follows this procedure.
  11. Click Done .
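
If you copy keys by hand between machines, a quick format check can catch a truncated or mangled key before you try to apply it. The following is a minimal sketch only; it assumes keys follow the pattern of the example above (seven hyphen-separated groups of five characters), which may not hold for every key the portal issues, so treat the regular expression as illustrative.

  #!/usr/bin/env bash
  # Sketch only: sanity-check a pasted license key against the pattern of the
  # example key shown in this procedure. Adjust the pattern if your keys differ.
  key="ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL"
  if printf '%s' "$key" | grep -Eq '^([A-Z0-9]{5}-){6}[A-Z0-9]{5}$'; then
    echo "Key format looks plausible."
  else
    echo "Key format looks wrong; re-copy it from the portal page or the CSV file." >&2
  fi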

Applying Licenses with the License Key

About this task

Apply the licenses with the license key to the dark site cluster with the license_key_util script. Apply one license key per cluster. The key is associated with the cluster UUID and is unique to that cluster, as described in this procedure.

When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the AOS or Prism Central (PC) license tier, license class, purchased add-ons, and so on.

Procedure

  1. Log on to the dark-site PE Controller VM or PC Controller VM with SSH as the Nutanix user.
  2. To apply the licenses to the cluster, use the utility script license_key_util with the apply and cluster options. For example:
    nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
    Licenses Applied:
    AOS
    Add-Ons:
    Software_Encryption
    File
    Note: You must apply the PC license key first, and then you can apply the Prism Element (PE) license key. If you have multiple PE license keys, you must apply them one at a time. A scripted sketch of this ordering follows this procedure.
  3. Use the show option to display the licenses associated with the license key.
    nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
    Licenses Allowed:
    AOS
    Add-Ons:
    Software_Encryption
    File
    You can also log on to the web console and display license details as described in Displaying License Features, Details, and Status.
    Help is available from the command by typing ~/ncc/bin/license_key_util help
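
If the portal generated keys for Prism Central and several Prism Element clusters, the apply commands can be scripted instead of typed one at a time. The sketch below is one possible approach, not a Nutanix-provided tool: it uses only the apply option shown in this procedure and the PC-first ordering from the note in step 2. It also assumes the CSV you downloaded has a header row followed by cluster_uuid,key columns, and that every key can be applied from the current CVM session; verify both assumptions against your file and environment first.

  #!/usr/bin/env bash
  # Sketch only: apply the PC license key first, then each remaining key from the
  # portal CSV. Assumptions (verify first): the CSV has a header row and
  # cluster_uuid,key columns, and all keys can be applied from this session.
  set -e
  pc_uuid="$1"     # UUID of the Prism Central instance
  pc_key="$2"      # license key generated for that PC UUID
  csv_file="$3"    # file saved with Download as CSV at the portal

  ~/ncc/bin/license_key_util apply key="$pc_key" cluster="$pc_uuid"

  tail -n +2 "$csv_file" | while IFS=, read -r cluster_uuid key; do
    if [ "$cluster_uuid" != "$pc_uuid" ]; then   # the PC key was already applied above
      ~/ncc/bin/license_key_util apply key="$key" cluster="$cluster_uuid"
    fi
  done

For example, after reviewing the CSV contents, you might save this as apply_keys.sh and run it with the PC UUID, the PC key, and the CSV file name as arguments.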

Licensing An Add-On (Dark Site Legacy License Key)

Use this procedure after you have licensed your cluster and you did not license add-on features at the same time, or if you later purchased add-on features after you initially licensed your cluster.

Before you begin

For this procedure, you must access:
  • The Prism Element (PE) or Prism Central (PC) dark-site web console
  • The Nutanix Support Portal on an Internet-connected machine

About this task

These steps apply to clusters not connected to the Internet (that is, a dark site), where you are using the Nutanix Cluster Check utility script license_key_util to apply a legacy license key you obtain from the Nutanix Support portal.

Procedure

  1. Log on to your dark-site PE or PC web console and get the cluster UUID.
    1. Click the gear icon, then select Cluster Details in the Settings panel.
    2. Copy the Cluster UUID.
  2. On an Internet-connected machine, log on to the Nutanix Support portal and click Licenses > Summary > Manage Licenses .
  3. From License Options , select Generate Legacy License Keys .
  4. Depending on your cluster type, select Prism Element or Prism Central and do the following:
    1. Provide a unique Name for your cluster. The name cannot include spaces.
    2. Enter the unique Cluster UUID .
    3. Click the plus sign.
    If you want to license more than one cluster and have the unique cluster UUID for each cluster, repeat these steps.
  5. Click Next .
    The Select Licenses page is displayed. Here, you can:
    • Create a license tag to group your licenses or select an existing tag
    • Select all licensing required for your cluster, like AOS, PC, and any add-ons.
  6. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
  7. Click Add Licenses in the add-on tile. For example, the Data Encryption tile.
    1. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    2. Click Save .
      The example Data Encryption tile now shows two links: Edit License , where you can make any changes, and Unselect License , which returns the add-on to an unlicensed state.
  8. Continue licensing any add-ons in your inventory by clicking Select License in the add-on tile.
    Depending on the add-on, you simply license it with the same steps as the Data Encryption step above. Unrestricted add-ons will not have the Cluster Deployment Location? selection.

    For add-ons that are based on capacity or other metrics like Nutanix Files, for example, specify the cluster disk capacity to allocate to Files. You can rebalance this add-on in the web console later if you need to make any changes.

  9. Click Next , review the Summary page, then click Generate License Keys .
    The portal begins generating keys.
  10. After the portal generates the keys, you can print the resulting page for a copy of the license keys or click Download as CSV to download a comma-separated file with each key and associated cluster UUID.
    For example, a key might look like this: ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL
  11. Click Done .

Applying Licenses with the License Key

About this task

Apply the licenses with the license key to the dark site cluster with the license_key_util script. Apply one license key per cluster. The key is associated with the cluster UUID and is unique to that cluster, as described in this procedure.

When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the AOS or Prism Central (PC) license tier, license class, purchased add-ons, and so on.

Procedure

  1. Log on to the dark-site PE Controller VM or PC Controller VM with SSH as the Nutanix user.
  2. To apply the licenses to the cluster, use the utility script license_key_util with the apply and cluster options. For example:
    nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
    Licenses Applied:
    AOS
    Add-Ons:
    Software_Encryption
    File
  3. Use the show option to display the licenses associated with the license key.
    nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
    Licenses Allowed:
    AOS
    Add-Ons:
    Software_Encryption
    File
    You can also log on to the web console and display license details as described in Displaying License Features, Details, and Status.
    Help is available from the command by typing ~/ncc/bin/license_key_util help
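
Because the show option only displays the licenses associated with a key, one cautious pattern is to preview what a key carries and confirm interactively before applying it. This is a sketch only, composed from the apply and show options documented above; replace the placeholder values with your own key and cluster UUID.

  #!/usr/bin/env bash
  # Sketch only: preview the licenses a key carries, then apply after confirmation.
  key="license_key"       # replace with the key generated at the portal
  uuid="cluster_uuid"     # replace with the UUID of this cluster

  ~/ncc/bin/license_key_util show key="$key" cluster="$uuid"
  read -r -p "Apply these licenses to cluster $uuid? [y/N] " answer
  if [ "$answer" = "y" ]; then
    ~/ncc/bin/license_key_util apply key="$key" cluster="$uuid"
  fi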

Manage Licenses for Dark Site Clusters (Legacy License Key)

Use these procedures if your cluster is not connected to the Internet (that is, your cluster is deployed at a dark site) and you plan to apply a legacy license key. To enter dark site cluster information at the Nutanix Support Portal and generate a legacy license key, use a web browser from a machine with an Internet connection.

To adhere to regulatory, security, or compliance policies at the customer site, dark-site AOS and Prism Central clusters are not connected to the Internet. 1-click licensing and License with Portal licensing actions are not available in this case.

Before You Begin
  • To license a dark site cluster with a license key, first contact your account team (sales engineer) or Nutanix Support to enable this feature.
  • Before attempting to install a license, ensure that you have created a cluster and logged into the Prism Element or Prism Central web console at least once.
  • You must install a license after creating a cluster for which you purchased AOS Pro or AOS Ultimate licenses. After creating the cluster, license your cluster, including any add-ons licensed. See Licensing a Cluster (Dark Site Legacy License Key) and Licensing An Add-On (Dark Site Legacy License Key).
  • A key point to remember is that each license key is associated with a unique cluster UUID. The generated key cannot be used with a different cluster (that is, with a different cluster UUID).

Dark Site Legacy License Key Considerations

The following considerations and related actions or work flows apply when you license your dark site Prism Element or Prism Central cluster with a license key. A key point to remember is that each license key is associated with a unique cluster UUID. The generated key cannot be used with a different cluster (that is, with a different cluster UUID).

When Do I Need to Generate a License Key?

You need to generate a license key at the Nutanix Support portal and apply it to your cluster for the following scenarios.

  • Licensing a cluster, including an unlicensed cluster (newly deployed or previously unlicensed by you).
  • Purchasing an add-on after previously licensing your cluster. For example, you applied a license key to your cluster that is running the AOS Pro tier. Six months later, you purchase the Nutanix Files add-on. You must license your cluster again with a new key to reflect the add-on.
  • Upgrading or downgrading your license tier.
  • Extending a license term. If you have purchased one or more license extensions, generate and apply a new key to extend the term of current expiring term-based licenses.

Do I Need to Rebalance a License Key After a Cluster Change?

No. Each license key is associated with a unique cluster UUID. If you have made changes to your cluster, you do not need to rebalance licenses across your cluster. A cluster rebalance or cluster change is defined as one or more of the following scenarios:

  • You add one or more nodes to your cluster.
  • You remove one or more nodes from your cluster.
  • You move one or more nodes from one cluster to another.
  • You reduce capacity in your cluster (number of nodes, Flash storage capacity, number of licenses, and so on).

Can I Reclaim Licenses and Return Them to My Inventory?

When you use the dark site license key method to apply a key to your cluster, there is no requirement to reclaim licenses. Each license key is associated with a unique cluster UUID, so there is nothing to reclaim.

Before I Destroy a Cluster, Do I Need to Reclaim the License Key?

No. Each license key is associated with a unique cluster UUID, so there is nothing to reclaim. When you create a new cluster, however, you do need to generate and apply a new license key.

I Previously Licensed My Clusters with a License Summary File. Can I Use License Keys Instead?

Yes. If your cluster is running the latest version of NCC and AOS or Prism Central versions compatible with this NCC version, you can switch to using a license key when any cluster attribute changes. That switch to a key includes clusters where you have upgraded NCC and AOS/Prism Central to versions that support license keys. Nutanix recommends the following if you want to use license keys:

  • See the NCC Release Notes version that introduces license key support.
  • Do not switch to using a license key unless your cluster attributes change.
  • If your cluster capacity or other cluster attribute changes (upgrade the license tier, extend the license term, or purchase additional add-ons), you can then license these changes by applying a license key.

Extending Your License Term (Dark Site Legacy License Key)

If you have purchased one or more license extensions to extend the term of current expiring term-based legacy licenses, use this procedure to update your cluster license.

Before you begin

For this procedure, you must access:
  • The Prism Element (PE) or Prism Central (PC) dark-site web console
  • The Nutanix Support Portal on an Internet-connected machine

About this task

These steps apply to clusters not connected to the Internet (that is, a dark site), where you are using the Nutanix Cluster Check utility script license_key_util to apply a legacy license key you obtain from the Nutanix Support portal. This procedure includes a step for expired licenses.

Procedure

  1. Log on to your dark-site PE or PC web console and get the cluster UUID.
    1. Click the gear icon, then select Cluster Details in the Settings panel.
    2. Copy the Cluster UUID.
  2. On an Internet-connected machine, log on to the Nutanix Support portal and click Licenses > Summary > Manage Licenses .
  3. From License Options , select Generate Legacy License Keys .
  4. Depending on your cluster type, select Prism Element or Prism Central and do the following:
    1. Provide a unique Name for your cluster. The name cannot include spaces.
    2. Enter the unique Cluster UUID .
    3. Click the plus sign.
    If you want to license more than one cluster and have the unique cluster UUID for each cluster, repeat these steps.
  5. Click Next .
  6. If you have previously tagged your licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    The default tag is All Licenses . You can also leave this field blank. To read about tags, see Creating and Adding a Tag to Your Licenses.
  7. If your license has expired, do these steps. Otherwise, skip this step.
    1. Click Add Licenses in the Acropolis tile (for AOS clusters) or Prism tile (for PC clusters).
    2. Select a License Level such as Pro or Ultimate from the drop-down menu.
    3. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    4. Click Save .
      The Acropolis tile now shows the license level and other available license tiles. For example, your available add-ons like Nutanix Files are shown with an Add Licenses link, meaning you can add that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Data Encryption add-on feature.
    5. [Optional] Before continuing, you can also license any add-ons as part of this step by clicking Add Licenses or Edit License in the add-on tile.
      Depending on the add-on, you simply license it with the same steps as the Acropolis work flow above (for example, Data Encryption). For add-ons that are based on capacity like Nutanix Files, you might have to specify the disk capacity to allocate to Files. You can rebalance this add-on in the web console later if you need to make any changes.
  8. Click Next , review the Summary page, then click Generate License Keys .
    The portal begins generating keys.
  9. After the portal generates the keys, you can print the resulting page for a copy of the license keys or click Download as CSV to download a comma-separated file with each key and associated cluster UUID.
    For example, a key might look like this: ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL
  10. Click Done .

Applying Licenses with the License Key

About this task

Apply the licenses with the license key to the dark site cluster with the license_key_util script. Apply one license key per cluster. The key is associated with the cluster UUID and is unique to that cluster, as described in this procedure.

When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the AOS or Prism Central (PC) license tier, license class, purchased add-ons, and so on.

Procedure

  1. Log on to the dark-site PE Controller VM or PC Controller VM with SSH as the Nutanix user.
  2. To apply the licenses to the cluster, use the utility script license_key_util with the apply and cluster options. For example:
    nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
    Licenses Applied:
    AOS
    Add-Ons:
    Software_Encryption
    File
  3. Use the show option to display the licenses associated with the license key.
    nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
    Licenses Allowed:
    AOS
    Add-Ons:
    Software_Encryption
    File
    You can also log on to the web console and display license details as described in Displaying License Features, Details, and Status.
    Help is available from the command by typing ~/ncc/bin/license_key_util help

Applying An Upgrade License (Dark Site Legacy License Key)

Use these procedures if your cluster is not connected to the Internet (that is, your cluster is deployed at a dark site) and you have purchased an upgrade license.

Before you begin

For this procedure, you must access:
  • The Prism Element (PE) or Prism Central (PC) dark-site web console
  • The Nutanix Support Portal on an Internet-connected machine

About this task

An upgrade license enables you to upgrade your existing unexpired lower license tier to a higher license tier without having to purchase completely new stand-alone licenses. These steps apply to dark site clusters, where you are using the Nutanix Cluster Check utility script license_key_util to apply a legacy license key you obtain from the Nutanix Support portal.

Procedure

  1. Log on to your dark-site PE or PC web console and get the cluster UUID.
    1. Click the gear icon, then select Cluster Details in the Settings panel.
    2. Copy the Cluster UUID.
  2. On an Internet-connected machine, log on to the Nutanix Support portal and click Licenses > Summary > Manage Licenses .
  3. From License Options , select Generate Legacy License Keys .
  4. Depending on your cluster type, select Prism Element or Prism Central and do the following:
    1. Provide a unique Name for your cluster. The name cannot include spaces.
    2. Enter the unique Cluster UUID .
    3. Click the plus sign.
    If you want to license more than one cluster and have the unique cluster UUID for each cluster, repeat these steps.
  5. Click Next .
  6. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    The default tag is All Licenses . You can also leave this field blank.
  7. Click Next , review the Summary page, then click Generate License Keys .
    The portal begins generating keys.
  8. After the portal generates the keys, you can print the resulting page for a copy of the license keys or click Download as CSV to download a comma-separated file with each key and associated cluster UUID.
    For example, a key might look like this: ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL
  9. Click Done .

Applying Licenses with the License Key

About this task

Apply the licenses with the license key to the dark site cluster with the license_key_util script. Apply one license key per cluster. The key is associated with the cluster UUID and is unique to that cluster, as described in this procedure.

When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the AOS or Prism Central (PC) license tier, license class, purchased add-ons, and so on.

Procedure

  1. Log on to the dark-site PE Controller VM or PC Controller VM with SSH as the Nutanix user.
  2. To apply the licenses to the cluster, use the utility script license_key_util with the apply and cluster options. For example:
    nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
    Licenses Applied:
    AOS
    Add-Ons:
    Software_Encryption
    File
  3. Use the show option to display the licenses associated with the license key.
    nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
    Licenses Allowed:
    AOS
    Add-Ons:
    Software_Encryption
    File
    You can also log on to the web console and display license details as described in Displaying License Features, Details, and Status.
    Help is available from the command by typing ~/ncc/bin/license_key_util help

Upgrading or Downgrading Your License Tier (Dark Site Legacy License Key)

Use this procedure to change your license tier.

Before you begin

For this procedure, you must access:
  • The Prism Element (PE) or Prism Central (PC) dark-site web console
  • The Nutanix Support Portal on an Internet-connected machine

About this task

Use this procedure to change your license tier, for example to upgrade from AOS Pro to AOS Ultimate or to downgrade from Prism Ultimate to Prism Pro. This procedure does not apply if you purchased an upgrade license, which enables you to upgrade your existing unexpired lower license tier to a higher license tier. These steps apply to clusters not connected to the Internet (that is, a dark site), where you are using the Nutanix Cluster Check utility script license_key_util to apply a legacy license key you obtain from the Nutanix Support portal.

Procedure

  1. Log on to your dark-site PE or PC web console and get the cluster UUID.
    1. Click the gear icon, then select Cluster Details in the Settings panel.
    2. Copy the Cluster UUID.
  2. On an Internet-connected machine, log on to the Nutanix Support portal and click Licenses > Summary > Manage Licenses .
  3. From License Options , select Generate Legacy License Keys .
  4. Depending on your cluster type, select Prism Element or Prism Central and do the following:
    1. Provide a unique Name for your cluster. The name cannot include spaces.
    2. Enter the unique Cluster UUID .
    3. Click the plus sign.
    If you want to license more than one cluster and have the unique cluster UUID for each cluster, repeat these steps.
  5. Click Next .
    The Select Licenses page is displayed. Here, you can:
    • Create a license tag to group your licenses or select an existing tag
    • Edit your cluster licenses, like AOS, PC, and any add-ons.
  6. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    The default tag is All Licenses . You can also leave this field blank.
  7. Click Edit License in the Acropolis (for AOS clusters) or Prism tile (for PC clusters).
    1. Select a License Level such as Pro or Ultimate from the drop-down menu.
    2. For Cluster Deployment Location? , select your location or None of the Above .
      In most cases, select None of the Above . The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, select it.
    3. Click Save .
      The tile now shows the license level and other available license tiles. For example, your available add-ons like Nutanix Files are shown with an Add Licenses link, meaning you can add that license now. Add-ons not associated with your account appear grayed out or unavailable. For example, you might not have the Data Encryption add-on feature.
  8. Click Next , review the Summary page, then click Generate License Keys .
    The portal begins generating keys.
  9. After the portal generates the keys, you can print the resulting page for a copy of the license keys or click Download as CSV to download a comma-separated file with each key and associated cluster UUID.
    For example, a key might look like this: ABCDE-FGH2K-L3OPQ-RS4UV-WX5ZA-BC7EF-GH9KL
  10. Click Done .

Applying Licenses with the License Key

About this task

Apply the licenses with the license key to the dark site cluster with the license_key_util script. Apply one license key per cluster. The key is associated with the cluster UUID and is unique to that cluster, as described in this procedure.

When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the AOS or Prism Central (PC) license tier, license class, purchased add-ons, and so on.

Procedure

  1. Log on to the dark-site PE Controller VM or PC Controller VM with SSH as the Nutanix user.
  2. To apply the licenses to the cluster, use the utility script license_key_util with the apply and cluster options. For example:
    nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
    Licenses Applied:
    AOS
    Add-Ons:
    Software_Encryption
    File
  3. Use the show option to display the licenses associated with the license key.
    nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
    Licenses Allowed:
    AOS
    Add-Ons:
    Software_Encryption
    File
    You can also log on to the web console and display license details as described in Displaying License Features, Details, and Status.
    Help is available from the command by typing ~/ncc/bin/license_key_util help

License a Cluster and Add-On (Dark Site Cloud Platform License Key)

Use this procedure to apply Nutanix cloud platform package licenses to a dark site cluster (that is, a cluster that is not connected to the Internet). This procedure also applies if a cluster is unlicensed (newly deployed or previously unlicensed by you). For information about the cloud platform packages, see Nutanix Cloud Platform Software Options.
Note:
  • To license a dark site cluster with a license key, first contact your account team (sales engineer) or Nutanix Support to enable this feature.
  • After purchasing Nutanix cloud platform package licenses, you cannot use the Prism Element (PE) web console to apply your licenses. Use the Prism Central (PC) web console instead.

To apply licenses for your cloud platform packages, you must use the procedure described in Licensing a Cluster (Dark Site Cloud Platform License Key) to generate and apply a cloud platform license key. This procedure requires you to collect the PC UUID and the UUID of each cluster connected to the PC that you want to license. You enter each UUID at the Nutanix Support portal. To use this procedure, see the following requirements.

  • Your cluster is running the minimum versions of AOS 6.1.1, Nutanix Cluster Check 4.6.2, and pc.2022.4.
  • Your cluster is not licensed and also not connected to the Internet.
  • You are applying a license key that you generate from the Nutanix Support portal.

In summary, the procedure is as follows.

  1. Log on to your dark site PC web console and collect information about your cluster (see the command-line sketch after this list).
  2. Log on to the Nutanix Support portal and use Manage Licenses to generate Nutanix cloud platform license keys.
  3. Once Nutanix generates the license keys, save them.
  4. Apply the license keys to the dark site cluster with the license_key_util script.
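
If you prefer the command line, the cluster UUID can often also be read from a CVM instead of the Cluster Details panel. The command below is a hedged example: ncli is the standard Nutanix CLI on CVMs, but the exact field names in its output can vary between releases, so always verify the value against Cluster Details before entering it at the portal.

  nutanix@cvm$ ncli cluster info | grep -i uuid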

Licensing a Cluster (Dark Site Cloud Platform License Key)

Use this procedure to generate and apply a cloud platform license key to a cluster that is not connected to the internet (that is, a dark site).

Before you begin

For this procedure, you must access:
  • The Prism Central (PC) dark-site web console
  • The Nutanix Support Portal on an Internet-connected machine

About this task

These steps apply to dark site clusters, where you are using the Nutanix Cluster Check utility script license_key_util to apply a cloud platform license key you obtain from the Nutanix Support portal. These procedures also apply if the dark site cluster is unlicensed (newly deployed or previously unlicensed by you) and you need to generate and apply a cloud platform license key.

Procedure

  1. Log on to your dark-site PC web console and get the PC UUID and the UUID of each cluster connected to the PC that you want to license.
    1. Click the gear icon, then select Cluster Details in the Settings panel.
    2. Copy the PC UUID and the UUID of each cluster connected to the PC that you want to license. You will enter this information in the Nutanix Support Portal.
  2. On an Internet-connected machine, log on to the Nutanix Support portal and click Licenses > Summary > Manage Licenses .
  3. Click License Options > Generate Nutanix Cloud Platform License Keys .
  4. Select the check boxes and then click Confirm to verify that your clusters meet the minimum versions of pc.2022.4, AOS 6.1.1, and NCC 4.6.2.
  5. Enter the UUID of your PC and then click Confirm .
    Entering a Name can help you identify your PC more easily.
  6. Enter the Cluster UUID for each of your clusters.
    Click + to add a cluster and - to remove a cluster. Entering a Name can help you identify your clusters more easily.
  7. Click Next .
    The Select Licenses page appears. The page displays tiles for each licensing option.
  8. In the tile of a license that you want to apply, click Apply to Cluster .
  9. In the Select Clusters page, select the clusters to apply the license to.
    To filter the cluster list, enter a partial cluster name in Type to filter .
  10. After you select the clusters, click one of the following to apply cloud platform licenses.
    • To apply only the selected license to the selected clusters, click Save and Skip Selecting Add-on .
    • To apply the selected license and select add-on licenses to apply, click Save and Select Add-on . A dialog box appears. Select the add-on licenses to apply from the list and then click Save .
    The Select Licenses page appears again. The tiles now show the number of clusters licensed with each license tier. In a tile, you can click:
    • Edit/Add Cluster to add or remove licenses.
    • Unlicense to remove all licenses.
  11. Repeat steps eight through ten for any other licenses that you want to apply.
  12. After applying all desired licenses, click Next from the Select Licenses page.
  13. Review the Review and Finish summary page.
  14. Click Confirm .
    Clicking Confirm generates and displays the license keys for each of your clusters.
  15. Save the license keys.
    You can save the license keys by printing them or downloading a comma-separated file that lists each key and the associated cluster UUID.
  16. Click Done .

Applying Licenses with the License Key

About this task

Apply the licenses with the license key to the dark site cluster with the license_key_util script. Apply one license key per cluster. The key is associated with the cluster UUID and is unique to that cluster.

When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the NCI or NCM license tier, license class, purchased add-ons, and so on.

Procedure

  1. Log on to the dark-site PC Controller VM with SSH as the Nutanix user.
  2. To apply the licenses to the cluster, use the utility script license_key_util with the apply and cluster options. For example:
    nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
    Licenses Applied:
    NCI Pro CORES
    NCM Pro CORES
    Add-Ons:
    NCI Security CORES
    Nutanix Database Service as Addon CORES
    NCI Nutanix Kubernetes Engine CORES
  3. Use the show option to display the licenses associated with the license key.
    nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
    Licenses Allowed:
    NCI Pro CORES
    NCM Pro CORES
    Add-Ons:
    NCI Security CORES
    Nutanix Database Service as Addon CORES
    NCI Nutanix Kubernetes Engine CORES
    You can also log on to the web console and display license details as described in Displaying License Features, Details, and Status.
    Help is available from the command by typing ~/ncc/bin/license_key_util help
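
If a particular add-on matters to you, you can check for it in the show output before relying on that feature. The snippet below is a sketch only: the add-on name string (NCI Security) is taken from the sample output above and may be formatted differently in your release, and the placeholder values must be replaced with your own key and cluster UUID.

  #!/usr/bin/env bash
  # Sketch only: confirm that a specific add-on appears in the show output.
  key="license_key"       # replace with the key generated at the portal
  uuid="cluster_uuid"     # replace with the UUID of this cluster
  if ~/ncc/bin/license_key_util show key="$key" cluster="$uuid" | grep -q "NCI Security"; then
    echo "NCI Security add-on is included in this key."
  else
    echo "NCI Security add-on not found; review your selections at the portal." >&2
  fi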

Licensing an Add-On (Dark Site Cloud Platform License Key)

Use this procedure after you have licensed your cluster and you did not license add-on features at the same time, or if you later purchased add-on features after you initially licensed your cluster.

Before you begin

For this procedure, you must access:
  • The Prism Central (PC) dark-site web console
  • The Nutanix Support Portal on an Internet-connected machine

About this task

These steps apply to clusters not connected to the Internet (that is, a dark site), where you are using the Nutanix Cluster Check utility script license_key_util to apply a cloud platform license key you obtain from the Nutanix Support portal.

Procedure

  1. Log on to your dark-site PC web console and get the PC UUID and the UUID of each cluster connected to the PC that you want to license.
    1. Click the gear icon, then select Cluster Details in the Settings panel.
    2. Copy the PC UUID and the UUID of each cluster connected to the PC that you want to license. You will enter this information in the Nutanix Support Portal.
  2. On an Internet-connected machine, log on to the Nutanix Support portal and click Licenses > Summary > Manage Licenses .
  3. From License Options , select Generate Nutanix Cloud Platform License Keys .
  4. Select the check boxes and then click Confirm to verify that your clusters meet the minimum versions of pc.2022.4, AOS 6.1.1, and NCC 4.6.2.
  5. Enter the UUID of your PC and then click Confirm .
    Entering a Name can help you identify your PC more easily.
  6. Enter the Cluster UUID for each of your clusters.
    Click + to add a cluster and - to remove a cluster. Entering a Name can help you identify your clusters more easily.
  7. Click Next .
    The Select Licenses page appears. The page displays tiles for each licensing option. You can edit your cluster licenses and any add-ons.
  8. In the tile of a license to which you want to apply add-on licenses, click either Apply to Cluster or Edit/Add Cluster .
  9. If applicable, in the Select Clusters page, select the clusters to apply the license to.
    To filter the cluster list, enter a partial cluster name in Type to filter .
  10. Click Save and Select Add-on , select the add-on licenses to apply from the list in the dialog box, then click Save .
    The Select Licenses page appears again. The tiles now show the number of clusters licensed with each license tier. In a tile, you can click:
    • Edit/Add Cluster to add or remove licenses.
    • Unlicense to remove all licenses.
  11. Repeat steps eight through ten for any other licenses that you want to apply.
  12. After applying all desired licenses, click Next from the Select Licenses page.
  13. Review the Review and Finish summary page.
  14. Click Next .
    Clicking Next generates and displays the license keys for each of your clusters.
  15. Save the license keys.
    You can save the license keys by printing them or downloading a comma-separated file that lists each key and the associated cluster UUID.
  16. Click Done .

Applying Licenses with the License Key

About this task

Apply the licenses with the license key to the dark site cluster with the license_key_util script. Apply one license key per cluster. The key is associated with the cluster UUID and is unique to that cluster.

When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the NCI or NCM license tier, license class, purchased add-ons, and so on.

Procedure

  1. Log on to the dark-site PC Controller VM with SSH as the Nutanix user.
  2. To apply the licenses to the cluster, use the utility script license_key_util with the apply and cluster options. For example:
    nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
    Licenses Applied:
    NCI Pro CORES
    NCM Pro CORES
    Add-Ons:
    NCI Security CORES
    Nutanix Database Service as Addon CORES
    NCI Nutanix Kubernetes Engine CORES
  3. Use the show option to display the licenses associated with the license key.
    nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
    Licenses Allowed:
    NCI Pro CORES
    NCM Pro CORES
    Add-Ons:
    NCI Security CORES
    Nutanix Database Service as Addon CORES
    NCI Nutanix Kubernetes Engine CORES
    You can also log on to the web console and display license details as described in Displaying License Features, Details, and Status.
    Help is available from the command by typing ~/ncc/bin/license_key_util help

Manage Licenses for Dark Site Clusters (Cloud Platform License Key)

Use these procedures if your cluster is not connected to the Internet (that is, your cluster is deployed at a dark site) and you plan to apply a cloud platform license key. To enter dark site cluster information at the Nutanix Support Portal and generate a cloud platform license key, use a web browser from a machine with an Internet connection.

To adhere to regulatory, security, or compliance policies at the customer site, dark-site AOS and Prism Central clusters are not connected to the Internet. The License with Portal licensing action is not available in this case.

Before You Begin
  • To license a dark site cluster with a license key, first contact your account team (sales engineer) or Nutanix Support to enable this feature.
  • Before attempting to install a license, ensure that you have created a cluster and logged into the Prism Central web console at least once.
  • You must install a license after creating a cluster for which you purchased NCI Pro or NCI Ultimate licenses. After creating the cluster, license your cluster, including any add-ons licensed. See Licensing a Cluster (Dark Site Cloud Platform License Key) and Licensing an Add-On (Dark Site Cloud Platform License Key).
  • A key point to remember is that each license key is associated with a unique cluster UUID. The generated key cannot be used with a different cluster (that is, with a different cluster UUID).

Dark Site Cloud Platform License Key Considerations

The following considerations and related actions or work flows apply when you license your dark site Prism Element or Prism Central cluster with a license key. A key point to remember is that each license key is associated with a unique cluster UUID. The generated key cannot be used with a different cluster (that is, with a different cluster UUID).

When Do I Need to Generate a License Key?

You must generate a license key at the Nutanix Support portal and apply it to your cluster for the following scenarios.

  • Licensing a cluster, including an unlicensed cluster (newly deployed or previously unlicensed by you).
  • Purchasing an add-on after previously licensing your cluster. For example, you applied a license key to your cluster that is running the NCI Pro tier. Six months later, you purchase the NDB add-on. You must license your cluster again with a new key to reflect the add-on.
  • Upgrading or downgrading your license tier.
  • Extending a license term. If you have purchased one or more license extensions, generate and apply a new key to extend the term of current expiring term-based licenses.

Do I Need to Rebalance a License Key After a Cluster Change?

No. Each license key is associated with a unique cluster UUID. If you have made changes to your cluster, you do not need to rebalance licenses across your cluster. A cluster rebalance or cluster change is defined as one or more of the following scenarios:

  • You add one or more nodes to your cluster.
  • You remove one or more nodes from your cluster.
  • You move one or more nodes from one cluster to another.
  • You reduce capacity in your cluster (number of nodes, Flash storage capacity, number of licenses, and so on).

Can I Reclaim Licenses and Return Them to My Inventory?

When you use the dark site license key method to apply a key to your cluster, there is no requirement to reclaim licenses. Each license key is associated with a unique cluster UUID, so there is nothing to reclaim.

Before I Destroy a Cluster, Do I Need to Reclaim the License Key?

No. Each license key is associated with a unique cluster UUID, so there is nothing to reclaim. When you create a new cluster, however, you do need to generate and apply a new license key.

I Previously Licensed My Clusters with a License Summary File. Can I Use License Keys Instead?

Yes. If your cluster is running the latest version of Nutanix Cluster Check (NCC) and AOS or Prism Central versions compatible with this NCC version, you can switch to using a license key when any cluster attribute changes. That switch to a key includes clusters where you have upgraded NCC and AOS/Prism Central to versions that support license keys. Nutanix recommends the following if you want to use license keys:

  • See the NCC Release Notes version that introduces license key support.
  • Do not switch to using a license key unless your cluster attributes change.
  • If your cluster capacity or other cluster attribute changes (upgrade the license tier, extend the license term, or purchase additional add-ons), you can then license these changes by applying a license key.

Extending Your License Term (Dark Site Cloud Platform License Key)

If you have purchased one or more license extensions to extend the term of current expiring term-based cloud platform licenses, use this procedure to update your cluster license.

Before you begin

For this procedure, you must access:
  • The Prism Central (PC) dark-site web console
  • The Nutanix Support Portal on an Internet-connected machine

About this task

These steps apply to clusters not connected to the Internet (that is, a dark site), where you are using the Nutanix Cluster Check utility script license_key_util to apply a cloud platform license key you obtain from the Nutanix Support portal. This procedure includes a step for expired licenses.

Procedure

  1. Log on to your dark-site PC web console and get the PC UUID and the UUID of each cluster connected to the PC that you want to license.
    1. Click the gear icon, then select Cluster Details in the Settings panel.
    2. Copy the PC UUID and the UUID of each cluster connected to the PC that you want to license. You will enter this information in the Nutanix Support Portal.
  2. On an Internet-connected machine, log on to the Nutanix Support portal and click Licenses > Summary > Manage Licenses .
  3. From License Options , select Generate Nutanix Cloud Platform License Keys .
  4. Select the check boxes and then click Confirm to verify that your clusters meet the minimum versions of pc.2022.4, AOS 6.1.1, and NCC 4.6.2.
  5. Enter the UUID of your PC and then click Confirm .
    Entering a Name can help you identify your PC more easily.
  6. Enter the Cluster UUID for each of your clusters.
    Click + to add a cluster and - to remove a cluster. Entering a Name can help you identify your clusters more easily.
  7. Click Next .
    The Select Licenses page appears. The page displays tiles for each licensing option.
  8. If your license has expired, do these steps. Otherwise, skip this step.
    1. In the tile of a license that you want to apply, click Apply to Cluster .
    2. In the Select Clusters page, select the clusters to apply the license to.
      To filter the cluster list, enter a partial cluster name in Type to filter .
    3. After you select the clusters, click one of the following to apply cloud platform licenses.
      • To apply only the selected license to the selected clusters, click Save and Skip Selecting Add-on .
      • To apply the selected license and select add-on licenses to apply, click Save and Select Add-on . A dialog box appears. Select the add-on licenses to apply from the list and then click Save .
      The Select Licenses page appears again. The tiles now show the number of clusters licensed with each license tier. In a tile, you can click Edit/Add Cluster to add or remove licenses. You can click Unlicense to remove all licenses.
    4. Repeat substeps 1 through 3 for any other licenses that you want to apply.
  9. After applying all desired licenses, click Next from the Select Licenses page.
  10. Review the Review and Finish summary page.
  11. Click Next .
    Clicking Next generates and displays the license keys for each of your clusters.
  12. Save the license keys.
    You can save the license keys by printing them or downloading a comma-separated file that lists each key and the associated cluster UUID.
  13. Click Done .

Applying Licenses with the License Key

About this task

Apply the licenses with the license key to the dark site cluster with the license_key_util script. Apply one license key per cluster. The key is associated with the cluster UUID and is unique to that cluster.

When you apply the license key to the cluster, the license key enables the license tiers and add-ons that you select in this procedure. After you apply the key to your cluster, your license details show the NCI or NCM license tier, license class, purchased add-ons, and so on.

Procedure

  1. Log on to the dark-site PC Controller VM with SSH as the Nutanix user.
  2. To apply the licenses to the cluster, use the utility script license_key_util with the apply and cluster options. For example:
    nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
    Licenses Applied:
    NCI Pro CORES
    NCM Pro CORES
    Add-Ons:
    NCI Security CORES
    Nutanix Database Service as Addon CORES
    NCI Nutanix Kubernetes Engine CORES
  3. Use the show option to display the licenses associated with the license key.
    nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
    Licenses Allowed:
    NCI Pro CORES
    NCM Pro CORES
    Add-Ons:
    NCI Security CORES
    Nutanix Database Service as Addon CORES
    NCI Nutanix Kubernetes Engine CORES
    You can also log on to the web console and display license details as described in Displaying License Features, Details, and Status.
    Help is available from the command by typing ~/ncc/bin/license_key_util help

Upgrading or Downgrading Your License Tier (Dark Site Cloud Platform License Key)

Use this procedure to change your license tier.

Before you begin

For this procedure, you must access:
  • The Prism Central (PC) dark-site web console
  • The Nutanix Support Portal on an Internet-connected machine

About this task

Use this procedure to change your license tier, for example to upgrade from NCI Pro to NCI Ultimate or to downgrade from NCM Pro to NCM Starter. This procedure does not apply if you purchased an upgrade license, which enables you to upgrade your existing unexpired lower license tier to a higher license tier. These steps apply to clusters not connected to the Internet (that is, a dark site), where you are using the Nutanix Cluster Check utility script license_key_util to apply a cloud platform license key you obtain from the Nutanix Support portal.

Procedure

  1. Log on to your dark-site PC web console and get the PC UUID and the UUID of each cluster connected to the PC that you want to license.
    1. Click the gear icon, then select Cluster Details in the Settings panel.
    2. Copy the PC UUID and the UUID of each cluster connected to the PC that you want to license. You will enter this information in the Nutanix Support Portal.
  2. On an Internet-connected machine, log on to the Nutanix Support portal and click Licenses > Summary > Manage Licenses .
  3. From License Options , select Generate Nutanix Cloud Platform License Keys .
  4. Select the check boxes and then click Confirm to verify that your clusters meet the minimum versions of pc.2022.4, AOS 6.1.1, and NCC 4.6.2.
  5. Enter the UUID of your PC and then click Confirm .
    Entering a Name can help you identify your PC more easily.
  6. Enter the Cluster UUID for each of your clusters.
    Click + to add a cluster and - to remove a cluster. Entering a Name can help you identify your clusters more easily.
  7. Click Next
    The Select Licenses page appears. The page displays tiles for each licensing option. You can edit your cluster licenses and any add-ons.
  8. In the tile of your current license tier, click Unlicense .
    All of your current licenses are removed from your clusters.
  9. In the tile of the license tier that you want to apply, click Apply to Cluster .
    1. In the Select Clusters page, select the clusters to apply the license to.
      To filter the cluster list, enter a partial cluster name in Type to filter .
    2. After you select the clusters, click one of the following to apply cloud platform licenses.
      • To apply only the selected license to the selected clusters, click Save and Skip Selecting Add-on .
      • To apply the selected license and select add-on licenses to apply, click Save and Select Add-on . A dialog box appears. Select the add-on licenses to apply from the list and then click Save .
      The Select Licenses page appears again. The tiles now show the number of clusters licensed with each license tier. In a tile, you can click Edit/Add Cluster to add or remove licenses. You can click Unlicense to remove all licenses.
  10. After applying all desired licenses, click Next from the Select Licenses page.
  11. Review the Review and Finish summary page.
  12. Click Next .
    Clicking Next generates and displays the license keys for each of your clusters.
  13. Save the license keys.
    You can save the license keys by printing them or downloading a comma-separated file that lists each key and the associated cluster UUID.
  14. Click Done

Applying Licenses with the License Key

About this task

Apply the license key to the dark-site cluster with the license_key_util script. Apply one license key per cluster. Each key is associated with the cluster UUID and is unique to that cluster.

When you apply the license key to the cluster, it enables the license tiers and add-ons that you selected when you generated the key. After you apply the key, your license details show the NCI or NCM license tier, license class, purchased add-ons, and so on.

Procedure

  1. Log on to the dark-site PC Controller VM with SSH as the Nutanix user.
  2. To apply the licenses to the cluster, use the utility script license_key_util with the apply and cluster options. For example:
    nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key cluster=cluster_uuid
    Licenses Applied:
    NCI Pro CORES
    NCM Pro CORES
    Add-Ons:
    NCI Security CORES
    Nutanix Database Service as Addon CORES
    NCI Nutanix Kubernetes Engine CORES
  3. Use the show option to display the licenses associated with the license key.
    nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key cluster=cluster_uuid
    Licenses Applied:
    NCI Pro CORES
    NCM Pro CORES
    Add-Ons:
    NCI Security CORES
    Nutanix Database Service as Addon CORES
    NCI Nutanix Kubernetes Engine CORES
    You can also log on to the web console and display license details as described in Displaying License Features, Details, and Status.
    Help is available by typing ~/ncc/bin/license_key_util help .

Reserve Licenses for Nutanix Cloud Clusters

Note: This feature applies to Nutanix Cloud Clusters only.

With a bring-your-own-license (BYOL) experience, you can leverage your existing on-prem licenses for Nutanix Cloud Clusters. You can reserve your licenses, partially or entirely, for Nutanix Cloud Clusters and specify the capacity allocation for cloud deployments. The licenses reserved for Nutanix Cloud Clusters are automatically applied to Cloud Clusters to cover their configuration and usage.

You can unreserve the licenses when you do not need them for Nutanix Cloud Clusters and add them back to the on-prem licenses pool. You can use the unreserved capacity for your on-prem clusters.

You can make better use of your licenses and control your expenses by tracking reserved license consumption and adjusting capacity usage as needed. The reserved licenses are consumed first, and when no more reserved licenses are available, your chosen Pay As You Go or Cloud Commit payment plans are used.

Note:
  • You can reserve the licenses only if the products are metered and you are using Pay As You Go or Cloud Commit plans. The supported licenses are AOS PRO, AOS ULT, VDI ULT, and Files licenses.
  • Nutanix recommends that you unreserve the licenses and return them to the pool before your subscription is canceled or expires, or when you terminate your cluster.
  • You can expand the license capacity up to the maximum unreserved capacity in the licenses pool and shrink the license capacity down to (but not below) the currently used capacity, as in the example that follows.
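  • For example (hypothetical numbers): if your licenses pool contains 500 cores with 200 cores reserved for Nutanix Cloud Clusters, and your Cloud Clusters currently consume 150 of those reserved cores, you can expand the reservation by up to the 300 unreserved cores or shrink it to no fewer than the 150 cores currently in use.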

Reserving Licenses

About this task

To reserve licenses for Nutanix Cloud Clusters, do the following:

Procedure

  1. Log on to the Nutanix Support portal at https://portal.nutanix.com and then click the Licenses link on the portal home page.
  2. Under Licenses on the left pane, click Active Licenses and then click the Available tab in the Available Licenses page.
    Figure. Available Licenses Page

  3. Select the licenses that you want to reserve for Nutanix Cloud Clusters and then select Update reservation for Nutanix Cloud Clusters (NC2) from the Actions pull-down menu.

    The Update reservation for Nutanix Cloud Clusters (NC2) option becomes available only after you select at least one license for reservation.

  4. On the Manage Reservation for Nutanix Cloud Clusters (NC2) page, click the hamburger icon available in the row of the license you want to reserve, and then click Edit .
    Figure. Manage Reservations Page

  5. Enter the number of licenses that you want to reserve in the Reserved for AWS column for the license.

    The available licenses appear in the Total Available to Reserve column.

  6. Click Save to save the license reservations.

Modifying License Reservations

About this task

To update the existing license reservations for Nutanix Cloud Clusters, do the following:

Procedure

  1. Log on to the Nutanix Support portal at https://portal.nutanix.com and then click the Licenses link on the portal home page.
  2. Under Licenses on the left pane, click Active Licenses and then click the Reserved tab in the Available Licenses page.
    Figure. Portal Reserved Licenses Page

  3. Select the licenses for which you want to update the license reservation and then select Update reservation for Nutanix Cloud Clusters (NC2) from the Actions pull-down menu.

    The Update reservation for Nutanix Cloud Clusters (NC2) button becomes available only after you select at least one license for reservation.

  4. On the Manage Reservation for Nutanix Cloud Clusters (NC2) page, click the hamburger icon available in the row of the license you want to reserve, and then click Edit .
    Figure. Manage Reservations Page

  5. Change the number of licenses that you want to reserve in the Reserved for AWS column for the license.

    The available licenses appear in the Total Available to Reserve column.

  6. Click Save to save the license reservations.

Prism Central Cluster-Based Licensing

Cluster-based licensing allows you to choose the Prism Central (PC) features you can use and the level of data to collect from a Prism Element (PE) cluster managed by your PC deployment.

With cluster-based licensing, you decide which PC license tier features to use to manage a specific cluster. For example, you might need features provided by a Prism Ultimate license to manage PE cluster A, while you only need Prism Pro features to manage PE cluster B. You can also designate a cluster (PE Cluster C) as unlicensed (no cluster-based license is applied). For example, you can leave a non-production or development cluster as unlicensed.

All nodes in a cluster must be licensed with a cluster-based license or unlicensed (that is, no cluster-based license is applied). For cluster-based licensing, all nodes in each cluster must be at the same cluster-based licensing tier. You cannot mix license tiers in the same cluster (for example, unlicensed and Prism Ultimate licensed nodes cannot reside in the same cluster).

Only Prism Starter features are available in an unlicensed cluster. If you try to access a Prism Pro or Prism Ultimate feature for an unlicensed cluster (for example, Capacity Runway), PC displays a Feature Disabled type message. Any unlicensed cluster data is filtered out from any reporting.

Cluster-based licensing also lets you select different metering types (capacity [cores] or nodes, depending on the cluster licensing) for each node in a PE cluster. Also, if your PC deployment includes Nutanix Calm or Flow, you can choose Calm Core or Flow Core licensing as the metering type.

See the Prism Central release notes or supplement where PC Cluster-based licensing is introduced. Also read Cluster-based Licensing Requirements and Limitations.

Figure. Prism Central Cluster-Based Licensing Example: Prism Central managing Prism Pro, Prism Ultimate, and unlicensed clusters

Cluster-based Licensing Requirements and Limitations

  • Cluster-based licensing is not available for dark site clusters or deployments where clusters are not connected to the Internet.
  • To use Prism Central (PC) cluster-based licensing, Prism Element (PE) AOS clusters registered with PC must be licensed with an AOS Starter, AOS Pro, or AOS Ultimate license.
  • If you have not implemented cluster-based licensing for your managed clusters, you have access to features provided by your existing PC license tier for all clusters registered to PC as usual.
  • When using PC cluster-based licensing, a PE cluster is considered unlicensed if no cluster-based license is applied. Only PC Starter features are available to manage a cluster without a cluster-based license applied.
  • If you have not implemented cluster-based licensing for your managed clusters when it becomes available in your PC version, then the next time you update your license from PC (for example, by applying a new license or consuming unused existing licenses), the Licensing page at the Nutanix Support portal presents the cluster-based licensing workflow tasks. That is, you must then use cluster-based licensing for eligible registered clusters.

Manage Cluster-Based Licenses

The Prism Central (PC) web console Licensing page allows you to apply your cluster-based licenses with the Apply licenses button and by using 3-Step Licensing. 3-Step Licensing is a mostly manual licensing procedure. After the licenses are applied, manage them with the Update License button.

Before you begin, make sure AOS clusters registered with and managed by PC are licensed with AOS licenses.

  1. In the PC web console, download a cluster summary file, then upload it to the Nutanix Support portal. Your network must allow outbound traffic to portal.nutanix.com:443 (a quick connectivity check follows this list).

  2. Download a license file from the Nutanix Support portal.
  3. In the PC web console, apply the license file.
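
If you are not sure whether the machine you use for the upload can reach the portal, the following is one way to verify outbound HTTPS connectivity. This is a sketch that assumes the curl tool is available on that machine; any HTTP status code in the output means the connection succeeded, while a timeout or connection error suggests the traffic is blocked.

    $ curl -sS -o /dev/null -w "%{http_code}\n" --connect-timeout 10 https://portal.nutanix.com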

Applying Cluster-based Licenses

About this task

For clusters connected to the Internet, if your clusters are unlicensed (that is, cluster-based licenses are not applied), use these procedures to apply cluster-based licenses from Prism Central. This procedure also applies if you later add newly-registered Prism Element clusters to be managed by Prism Central.

Procedure

  1. Log on to your Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  2. Click Apply licenses , then click Download to save a cluster summary file to your local machine.
    The cluster summary file is saved to your browser download folder or any folder you choose.
    Figure. Apply Licenses

    Figure. Prism Central Licensing Page

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  4. Click License Clusters .
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    The Select Licenses page is displayed. It shows tiles for each Prism Central license tier (Ultimate, Pro, Starter) and add-on tiles like Flow, Calm, Objects, and Files. All clusters are automatically assigned with free Prism Starter licenses.
    Figure. Select Licenses Page

  5. Depending on the license tier you want to apply, click Select Licenses in the Ultimate or Pro tile.
    The Add Clusters dialog box is displayed. It shows a list of clusters registered to your Prism Central instance.
  6. Select the clusters to apply the cluster-based licenses, then click Save .
    To display and select a subset of the total clusters displayed, enter a partial cluster name in the Type to filter field.
    Figure. Select Licenses Page 2

    After you click Save , the Select Licenses page is displayed again. The tiles show the number of clusters licensed with each license tier.
  7. Continue applying cluster-based licenses by clicking Select Licenses in the Ultimate or Pro tile.
    1. Select the clusters to license.
    2. Click Save .
  8. [Optional] Before continuing, you can also license any add-ons as part of this step by clicking Add Licenses in the specific add-on tile.
    • Licensing Calm (Cluster-based Licensing)
    • Licensing Objects (Cluster-based Licensing)
    If you choose not to license any add-ons at this time, you can license them later.
  9. Click Next , review the Review and Finish summary page, then click Confirm and Close .
    • At the Review and Finish summary page, click the down arrow to the right of each licensed cluster to show the associated license number (like LIC-08220516). If you need to make changes, click Back to return to the Select Licenses page.
    • Clicking Confirm automatically downloads the license summary file, which you need in the next step.
    • Clicking Close shows the portal Licenses page with a summary of your licenses, including how many you have used.

Applying the License File

Procedure

Back at the Prism Central web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Licensing Calm (Cluster-based Licensing)

Procedure

  1. If you have already uploaded the cluster summary file and are displaying the Select Licenses page at the Nutanix Support portal, start at the Selecting the License Type step. Otherwise, do all the steps in this procedure.
  2. Log on to your Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  3. Click Apply licenses , then click Download to save a cluster summary file to your local machine.
    The cluster summary file is saved to your browser download folder or any folder you choose.
    Figure. Prism Central Licensing Page

  4. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  5. Click License Clusters .
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    The Select Licenses page is displayed. It shows tiles for each Prism Central license tier (Ultimate, Pro, Starter). You might need to scroll down to see the add-on tiles like Flow, Calm, Objects, and Files. All clusters are automatically assigned with free Prism Central Starter licenses.

Selecting the License Type

Procedure

  1. At the Calm tile, click Select Licenses to display the Select Calm Licenses dialog box.
  2. To apply one or more VM pack licenses, click Add VM Pack , enter the number of VM packs to apply, and click Done .
  3. To apply core-based licenses to clusters managed by Prism Central, do these steps.
    1. Click Add Clusters .
    2. Select the clusters to apply the core-based licenses, then click Save .
      To display and select a subset of the total clusters displayed, enter a partial cluster name in the Type to filter field.
    After you click Save , the Select Licenses page is displayed again. The tiles show the number of clusters licensed.
  4. Click Next , review the Review and Finish summary page, then click Confirm and Close .
    • At the Review and Finish summary page, click the down arrow to the right of each licensed cluster to show the associated license number (like LIC-08220516). If you need to make changes, click Back to return to the Select Licenses page.
    • Clicking Confirm automatically downloads the license summary file, which you need in the next step.
    • Clicking Close shows the portal Licenses page with a summary of your licenses, including how many you have used.

Applying the License File

Procedure

Back at the Prism Central web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Licensing Flow (Cluster-based Licensing)

Procedure

  1. If you have already uploaded the cluster summary file and are displaying the Select Licenses page at the Nutanix Support portal, start at the Selecting the License Type step. Otherwise, do all the steps in this procedure.
  2. Log on to your Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  3. Click Apply licenses , then click Download to save a cluster summary file to your local machine.
    The cluster summary file is saved to your browser download folder or any folder you choose.
    Figure. Prism Central Licensing Page

  4. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  5. Click License Clusters .
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    The Select Licenses page is displayed. It shows tiles for each Prism Central license tier (Ultimate, Pro, Starter). You might need to scroll down to see the add-on tiles like Flow, Calm, Objects, and Files. All clusters are automatically assigned with free Prism Central Starter licenses.

Selecting the License Type

Procedure

  1. At the Flow tile, click Select Licenses to display the Select Flow Licenses dialog box.
  2. Click Add Clusters .
  3. Select the clusters to apply the Flow license, then click Save .
    To display and select a subset of the total clusters displayed, enter a partial cluster name in the Type to filter field.
    After you click Save , the Select Licenses page is displayed again. The tiles show the number of clusters licensed.
  4. Click Next , review the Review and Finish summary page, then click Confirm and Close .
    • At the Review and Finish summary page, click the down arrow to the right of each licensed cluster to show the associated license number (like LIC-08220516). If you need to make changes, click Back to return to the Select Licenses page.
    • Clicking Confirm automatically downloads the license summary file, which you need in the next step.
    • Clicking Close shows the portal Licenses page with a summary of your licenses, including how many you have used.

Applying the License File

Procedure

Back at the Prism Central web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Licensing Objects (Cluster-based Licensing)

About this task

Nutanix provides the following two types of Objects licenses.
  • Objects For AOS : This license allows you to deploy Objects on clusters with AOS licenses.
  • Objects Dedicated : This license allows you to deploy an Objects-only cluster without AOS licenses.

Procedure

  1. If you have already uploaded the cluster summary file and are currently displaying the Nutanix Support portal Select Licenses page, start at the Selecting the License Type step. Otherwise, do all the steps in this procedure.
  2. Log on to your Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  3. Click Apply licenses , then click Download to save a cluster summary file to your local machine.
    The cluster summary file is saved to your browser download folder or any folder you choose.
    Figure. Prism Central Licensing Page

  4. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  5. Click License Clusters .
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    The Select Licenses page is displayed. It shows tiles for each Prism Central license tier (Ultimate, Pro, Starter). You might need to scroll down to see the add-on tiles like Flow, Calm, Objects, and Files. All clusters are automatically assigned with free Prism Central Starter licenses.

Selecting the License Type

Procedure

  1. To license Objects for AOS, in the For AOS tile, click Select Licenses to display the Select Objects for AOS Licenses dialog box.
    1. Add the number of tebibytes (TiBs) to license for the object store.
    2. Click Save.
  2. To license Objects Dedicated, in the Dedicated tile, click Select Licenses to display the Select Objects for Dedicated dialog box.
    1. Add the number of tebibytes (TiBs) to license for the object store.
    2. Select an add-on license.
      • Objects Encryption . After selecting this add-on, click Yes or No depending on where your cluster is deployed. The list includes countries where certain technology license restrictions apply. If your cluster deployment is in one of those locations, click Yes . Otherwise, click No .
      • Objects Replication . For more information about replication, see Objects User Guide . Select this add-on if you require this option.
    3. Click Save.
  3. Click Next , review the Review and Finish summary page, then click Confirm and Close .
    • At the Review and Finish summary page, click the down arrow to the right of each licensed cluster to show the associated license number (like LIC-08220516). If you need to make changes, click Back to return to the Select Licenses page.
    • Clicking Confirm automatically downloads the license summary file, which you need in the next step.
    • Clicking Close shows the portal Licenses page with a summary of your licenses, including how many you have used.

Applying the License File

Procedure

Back at the Prism Central web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Changing Your Cluster-based License Tier

At the Prism Central web console Licensing page, use Update License to change the cluster-based license tier of one or more licensed clusters.

Procedure

  1. Log on to your Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  2. Click Update License , then click Download to save a cluster summary file to your local machine.
    The cluster summary file is saved to your browser download folder or any folder you choose.
    Figure. 3-Step Licensing Page

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  4. Click License Clusters .
    1. If you have previously tagged your licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
      The default tag is All Licenses . You can also leave this field blank. To read about tags, see Creating and Adding a Tag to Your Licenses.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    The Select Licenses page is displayed. It shows tiles for each Prism Central license tier (Ultimate, Pro, Starter) and add-on tiles like Flow, Calm, Objects, and Files. All clusters are automatically assigned with free Prism Central Starter licenses.
  5. Click Edit Selection in the Prism Pro or Prism Ultimate tile.
  6. Select the clusters whose tier you want to change.
    To display and select a subset of the total clusters displayed, enter a partial cluster name in the Type to filter field.
  7. Change the license tier.
    1. From the Actions drop-down, select Change Tier .
    2. From Select Tier , select Prism Pro or Prism Ultimate to change the license tier.
    3. Click Save , then click Save again.
  8. Click Next , review the Review and Finish summary page, then click Confirm and Close .
    • At the Review and Finish summary page, click the down arrow to the right of each licensed cluster to show the associated license number (like LIC-08220516). If you need to make changes, click Back to return to the Select Licenses page.
    • Clicking Confirm automatically downloads the license summary file, which you need in the next step.
    • Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.

Applying the License File

Procedure

Back at the Prism Central web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Changing Your Cluster-based License Metering Type

At the Nutanix Support portal, use the Advanced Licensing action to change the AOS cluster license metering type to node or core licensing.

Procedure

  1. Log on to your Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  2. Click Update License , then click Download to save a cluster summary file to your local machine.
    The cluster summary file is saved to your browser download folder or any folder you choose.
    Figure. 3-Step Licensing Page

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  4. Click License Clusters .
    1. If you have previously tagged your licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
      The default tag is All Licenses . You can also leave this field blank. To read about tags, see Creating and Adding a Tag to Your Licenses.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    The Select Licenses page is displayed. It shows tiles for each Prism Central license tier (Ultimate, Pro, Starter) and add-on tiles like Flow, Calm, Objects, and Files. All clusters are automatically assigned with free Prism Starter licenses.
  5. Click Edit Selection in the tile for the item where you want to change the metering type.
  6. Select the clusters whose metering type you want to change.
    To display and select a subset of the total clusters displayed, enter a partial cluster name in the Type to filter field.
  7. Change the metering license type.
    1. From the Actions drop-down, select Advanced Licensing to display the Advanced Licensing dialog box.
    2. Select I want to customize the license meters , then select node or core license.
    3. Optionally, you can filter the licenses by License Tag or license Expiration Date .
    4. Select the license from the list of available licenses.
    5. Click Save , then click OK .
    Figure. Advanced Licensing

  8. Click Next , review the Review and Finish summary page, then click Confirm and Close .
    • At the Review and Finish summary page, click the down arrow to the right of each licensed cluster to show the associated license number (like LIC-08220516). If you need to make changes, click Back to return to the Select Licenses page.
    • Clicking Confirm automatically downloads the license summary file, which you need in the next step.
    • Clicking Close shows the portal Licenses page with a summary of your licenses, including how many you have used.

Applying the License File

Procedure

Back at the Prism Central web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Rebalancing Cluster-based Licenses After a Cluster Capacity Change

Procedure

  1. On an Internet-connected machine, log on to your Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  2. To save a cluster summary file to your local machine, click Update License , then click Download .
    The cluster summary file is saved to your browser download folder or any folder you choose.
    Figure. 3-Step Licensing Page

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
    1. If you have previously tagged your licenses as described in Creating and Adding a Tag to Your Licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    The Select Licenses page shows product license tiles. You might see this message in a license tile: Your licenses have been updated to meet your capacity change .
  4. To adjust the automatic licensing changes, click Edit Selection on each updated tile and make your changes. To accept the automatic changes, skip this step.
  5. Click Next , review the Summary page, then click Confirm and Close .
    Clicking Confirm automatically downloads the license summary file, which you need in the next step. Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.

Applying the License File

Procedure

Back at the Prism Central web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Unlicensing Clusters (Cluster-based Licenses)

Before you begin

Use this procedure to remove a cluster-based license from your cluster. After you complete these steps, your cluster is considered unlicensed (that is, a cluster-based license is not applied). Only Prism Starter features are available in an unlicensed cluster.

Procedure

  1. Log on to your Prism Central web console, click the gear icon, then select Licensing in the Settings panel.
  2. Click Update License , then click Download to save a cluster summary file to your local machine.
    The cluster summary file is saved to your browser download folder or any folder you choose.
    Figure. 3-Step Licensing Page

  3. To open the Nutanix Support Portal license page and upload the cluster summary file, click Licenses page , then click Manage Licenses at the portal.
  4. Click License Clusters .
    1. If you have previously tagged your licenses, select the tag name or All Licenses in the Choose License Tags drop-down menu.
      The default tag is All Licenses . You can also leave this field blank. To read about tags, see Creating and Adding a Tag to Your Licenses.
    2. Click Upload File , browse to the cluster summary file you downloaded and select it. Then click Upload File .
    The Select Licenses page is displayed. It shows tiles for each Prism Central license tier (Ultimate, Pro, Starter) and add-on tiles like Flow, Calm, Objects, and Files. All clusters are automatically assigned with free Prism Central Starter licenses.
  5. Click Edit Selection in the Prism Pro or Prism Ultimate tile.
  6. Select the clusters you want to unlicense.
    To display and select a subset of the total clusters displayed, enter a partial cluster name in the Type to filter field.
  7. Unlicense the clusters.
    1. From the Actions drop-down, select Unlicense .
    2. Click Save , then click Save again.
  8. Click Next , review the Review and Finish summary page, then click Confirm and Close .
    • At the Review and Finish summary page, click the down arrow to the right of each licensed cluster to show the associated license number (like LIC-08220516). If you need to make changes, click Back to return to the Select Licenses page.
    • Clicking Confirm automatically downloads the license summary file, which you need in the next step.
    • Clicking Close shows the portal Licenses page with a summary of your purchased licenses, including how many you have used.

Applying the License File

Procedure

Back at the Prism Central web console Licensing settings panel, click Upload to upload the license summary file you just downloaded, and click Apply .
The Licensing panel displays all license details for the cluster, like license tier, licensed add-ons, and expiration dates.

Nutanix Cloud Platform Package Licensing

Prism Central supports licensing for a set of cloud platform packages that deliver broad solutions to customers with simple and comprehensive bundles. The packages are:

  • Nutanix Cloud Infrastructure (NCI)
  • Nutanix Cloud Manager (NCM)
  • Nutanix Unified Storage (NUS)
  • Nutanix Database Service (NDB)
  • Nutanix End User Computing

You can add licenses or convert existing licenses to the new packages. For more information about the cloud platform packages, see Nutanix Cloud Platform Software Options.

Cloud platform licensing follows the same workflow as cluster-based licensing (see Prism Central Cluster-Based Licensing). This chapter provides supplemental information about licensing your clusters for the cloud platform packages.

Licensing Views

You can view licensing information from the Nutanix Support Portal for your entire account, Prism Central (PC) for all clusters registered to that PC instance, and Prism Element for that cluster. However, only the Support Portal and PC are used to apply or convert cloud platform licenses.

Support Portal Licensing View

The Licensing view on the Support Portal is extended to include cloud platform packages. Select Licenses from the collapse ("hamburger") menu to display the Licensing view. The Summary page includes widgets for any purchased licenses including cloud platform packages. The cloud platform package names also appear in the relevant fields in other licensing pages, for example the License Tier column in the Licensed Clusters page.

Figure. Support Portal Licenses Summary Page

Prism Central Licensing View

The Licensing view in PC is extended to include cloud platform packages. Select Licensing in the Settings panel to display the Licensing view. The View All Licenses tab includes sections for any applied licenses including cloud platform packages.

Figure. Prism Central Applied Licenses Page

Clicking View license details displays the details page, which also now includes applied cloud platform package license information.

Figure. Prism Central Applied License Details Page

The View all clusters tab and cluster license details pages are also extended to include cloud platform packages.

Figure. Prism Central Cluster Licenses Page

Figure. Prism Central Cluster License Details Page

Applying Cloud Platform Licenses

Use this procedure to apply cloud platform licenses to your cluster.

Before you begin

  • Make sure that the cluster is connected to the Internet (no dark sites).
  • You might need to turn off any pop-up blockers in your browser to display dialog boxes.
  • Applying cloud platform licenses, excluding NUS, requires that the cluster is running the minimum versions of the following software:
    • AOS 6.0.1.7
    • Nutanix Cluster Check (NCC) 4.3.0
    • pc.2021.9
  • Applying NUS licenses requires that the cluster is running the minimum versions of the following software:
    • AOS 6.1.1
    • NCC 4.5.0
    • pc.2022.4

Procedure

  1. Log on to your Prism Central (PC) web console, click the gear icon, then select Licensing in the Settings panel.
  2. Click Update Licenses .
    Figure. Prism Central Licensing Page

  3. Click Download to save a cluster summary file to your local machine.
    By default, the cluster summary file saves to your browser download folder. You can also save it to a folder of your choosing.
    Figure. License Steps

  4. Click Licenses page to go to the Licenses page on the Support Portal, and then click Manage Licenses at the top of the portal Licenses page.
    Figure. Support Portal Licenses Page

  5. Click License Clusters (under Clusters With Internet ).
    Figure. Cluster Information Page

    1. Select the tag name (if you have tagged your licenses previously) or All Licenses in the Choose License Tags drop-down menu.
    2. Click Upload File , browse to the cluster summary file you downloaded, select it, and then click Upload File .
    The Select Licenses page appears. The page displays tiles for each licensing option.
    Figure. Select Licenses Page

  6. In the tile of a license that you want to apply, click Select Licenses .
  7. In the Add Clusters page, select the clusters to apply the license to.
    The page lists all clusters registered to the PC instance. To filter the cluster list, enter a partial cluster name in the Type to filter field.
    Figure. Select Clusters Page

  8. After you select the clusters, click one of the following to apply cloud platform licenses.
    • To apply only the selected license to the selected clusters, click Save and Skip Selecting Add-on .
    • To apply the selected license and select add-on licenses to apply, click Save and Select Add-on . A dialog box appears. Select the add-on licenses to apply from the list and then click Save .
      Note: NUS licenses require an additional step. Before you click either Save and Skip Selecting Add-on or Save and Select Add-on , use License Quantity for NUS to select the number of licenses that you want.
    The Select Licenses page appears again. The tiles now show the number of clusters licensed with each license tier.
  9. Repeat steps 6 through 8 for any other licenses that you want to apply.
    For information about the cloud platform packages, see Nutanix Cloud Platform Software Options . For information about applying other licenses, see Prism Central Cluster-Based Licensing.
  10. After applying all desired licenses, click Next from the Select Licenses page, review the Review and Finish summary page, then click Confirm and then Close .
    • On the Review and Finish summary page, click the down arrow to the right of each licensed cluster to show the associated license number (such as LIC-08220516). To make changes, click Back to return to the Select Licenses page.
    • Clicking Confirm automatically downloads the license summary file, which you need in the next step.
    • Clicking Close shows the portal Licenses page with a summary of your licenses, including how many you have used.
    Figure. Review and Finish Page

  11. Back at the PC web console Licensing page (see step 3), click Upload to upload the license summary file you just downloaded, select that file, and then click Apply License .
    The Licensing page displays a summary of all applied licenses. The page includes various links to license details for each cluster including the license tier, licensed add-ons, and expiration dates.
    Figure. Prism Central Applied Licenses Page

Converting to Cloud Platform Licenses

Use this procedure to convert your existing licenses to cloud platform licenses. You can convert your existing licenses whether you have applied them to a cluster or they are unused.

Before you begin

Review the Conversion Requirements and Mapping.

Procedure

  1. Log on to the Nutanix Support Portal.
  2. Go to the Licenses page, and click Convert Licenses .
    Figure. Support Portal Licenses Page

  3. In the initial conversion page, select one of the following:
    • To convert applied licenses, click Convert by cluster .
    • To convert unused licenses, click Convert currently unused licenses and then skip to step 5.
    Figure. Initial Conversion Page

  4. [cluster convert only] Upload a Prism Central (PC) cluster summary file (CSF):
    1. Log on to your PC web console, and follow the steps to download a CSF file (see steps 2 and 3 in Applying Cloud Platform Licenses).
    2. In the Support Portal upload page, click Upload it Now .

    After you upload the file, the information is validated before you continue. If a configuration issue is detected, an appropriate message appears. If all checks pass, the "Convert your licenses" screen appears.

    Figure. Upload CSF File Page

  5. In the "Select licenses to convert" page, select the cluster licenses to convert and then click Convert selected licenses (and then click Yes, Proceed if prompted to verify command). Alternately, to convert all listed licenses, simply click Select & convert all .

    The "Select licenses to convert" page displays a table of applied licenses by cluster. The table also lists what each license will be converted to during the process. The table varies slightly depending on which workflow (cluster or unused) you are doing. See Conversion Requirements and Mapping for a list of all the conversion mappings.

    • For the cluster workflow, each row in the table represents a single cluster and includes the cluster name, current license name, applied quantity, new (converted) license name, and converted quantity. For AOS, Flow, and Files licenses, you select the clusters to convert. Prism Pro and Calm are all or nothing conversions, meaning you convert these licenses for all clusters; you may not select a subset of clusters to convert. For clusters managed by a single PC instance, all clusters must have either Files and Objects licenses only or NUS licenses only; you may not convert clusters with a combination of Files, Objects, and NUS licenses.
      Figure. Select Licenses to Convert Page (cluster workflow)

    • For the unused licenses workflow, each row in the table represents a single license and includes the license ID, PO number, current license name, available/purchased quantities, new (converted) license name, converted quantity, and expiration date. You select the licenses to convert.
      Figure. Select Licenses to Convert Page (unused workflow)

    After clicking the Convert selected licenses or Select & convert all button, the conversion process begins immediately. When the process completes, the "Review & Apply" page appears.
  6. Review the results in the "Review & Apply" page.

    The conversion table reappears. Again, the table varies slightly depending on which workflow (cluster or unused) you are doing.

    • For the cluster workflow, the conversion table reappears with a reordered column (cluster name, new license name, new converted quantity, previous license name, previous quantity). Click a row to display licensing details for that cluster.
      Figure. Review & Apply Page (cluster workflow)

    • For the unused licenses workflow, the conversion table reappears with new license information (new license ID, new license name, converted quantity, expiration date). Click a row to see the license conversion details (previous license ID, previous license name, previous quantity). When you are done, click Close to close the page. This completes the workflow for converting unused licenses.
      Figure. Review & Apply Page (unused workflow)

  7. [cluster convert only] Under "Next steps," click file_name .lsf to download the updated LSF file and then click Close to close the page.
  8. [cluster convert only] Back at the PC web console Licensing settings page, click Upload to upload the license summary file you just downloaded, select that file, and click Apply License (see step 11 in Applying Cloud Platform Licenses).

Conversion Requirements and Mapping

This topic contains requirements and limitations for you to consider before you convert your existing licenses to the new cloud platform packages. This topic also provides a conversion table of old (current) to new (cloud platform) licenses.

Review the following requirements and limitations before you convert licenses to the new cloud platform packages.

  • Requirements
    • The cluster is running the minimum versions of AOS 6.0.1.7, Nutanix Cluster Check (NCC) 4.3.0, and pc.2021.9. For NUS, the cluster is running the minimum versions of AOS 6.1.1, NCC 4.5.0, and pc.2022.4.
    • The cluster is fully licensed before conversion or has enough available licenses to fully convert the cluster (combination of old and new licenses).
    • The expiration date must be the same if two licenses are being merged into one, for example AOS + Flow or Prism + Calm.
    • ATR-related objects are not populated immediately after conversion. Actions to populate them are triggered after the cool-off period (1 week) following conversion.
    • All existing Prism licenses in a Prism Central instance should be converted to NCM.
    • Flow, Encryption, and Object licenses must be applied. Unapplied licenses will not be converted.
  • Limitations
    • Cloud platform licensing is not available for dark sites.
    • The following licenses may not be converted: POC (Proof of Concept), LOD (Life of Device), EPA (Enterprise Purchase Agreement), NTE (Not To Exceed), Upgrade Licenses, SWO (software only) perpetual licenses.
    • Prism Pro, Prism Ultimate, and Calm licenses may not be mixed with cloud platform licenses in a single Prism Central.
    • Conversion is not allowed if an upgrade quote is in flight, the license is due for renewal in the next 90 days, or a renewal opportunity is in the advanced stage.
    • RBAC-based users are not allowed to perform conversions.
    • Partial conversions are not allowed for Files licenses. All Files licenses in Prism Central are converted.

The following is a conversion table of old (current) to new (cloud platform) licenses.

Table 1. Licensing Conversion Map
  • Nutanix Cloud Infrastructure (NCI)
    • AOS Starter → NCI Starter
    • AOS Pro → NCI Pro
    • AOS Ultimate → NCI Ultimate
    • AOS Pro + Encryption → NCI Pro + Security
    • AOS Pro + Flow → NCI Pro + Security
    • AOS Pro + Flow + Encryption → NCI Pro + Security
    • AOS Pro + Adv DR → NCI Pro + Adv DR
    • AOS Starter + Flow → NCI Pro + Security
    • AOS Ultimate + Flow → NCI Ultimate
    • AOS Pro + Adv DR + Encryption → NCI Pro + Adv DR + Security
    • AOS Pro + Adv Rep + Flow → NCI Pro + Adv DR + Security
    • AOS Pro + Adv Rep + Encryption + Flow → NCI Pro + Adv DR + Security
    • AOS Starter + Files (for AOS) → NCI Starter + NUS Pro
    • AOS Pro + Files (for AOS) → NCI Pro + NUS Pro
    • AOS Ultimate + Files (for AOS) → NCI Ultimate + NUS Pro
    • AOS Starter + Objects (for AOS) → NCI Starter + NUS Starter
    • AOS Pro + Objects (for AOS) → NCI Pro + NUS Starter
    • AOS Ultimate + Objects (for AOS) → NCI Ultimate + NUS Starter
    • AOS Starter + Era add-on → NCI Starter + NDB add-on
    • AOS Pro + Era add-on → NCI Pro + NDB add-on
    • AOS Ultimate + Era add-on → NCI Ultimate + NDB add-on
  • Nutanix Cloud Manager (NCM)
    • Prism Pro → NCM Starter
    • Prism Ultimate → NCM Pro
    • Calm Cores → NCM Ultimate
    • Prism Pro + Calm Cores → NCM Ultimate
    • Prism Ultimate + Calm Cores → NCM Ultimate
    • Pro Special → (n/a)
  • Nutanix Database Service (NDB)
    • Era Platform → NDB Platform
    • Era Cores → NDB add-on
    • Era vCPU → NDB add-on
  • Nutanix Unified Storage (NUS)
    • Objects Dedicated → NUS Starter
    • Objects (for AOS) → NUS Starter
    • Objects Dedicated + Encryption → NUS Starter + Security
    • Objects Dedicated + Adv DR → NUS Starter + Adv DR
    • Files Dedicated → NUS Pro
    • Files Dedicated + Object Dedicated → NUS Pro
    • Files (for AOS) + Objects (for AOS) → NUS Pro
    • Files Dedicated + Adv DR → NUS Pro + Adv DR
    • Files Dedicated + Encryption → NUS Pro + Security
    • Files (for AOS) → NUS Pro
  • Nutanix End User Computing: (n/a) → no conversion for VDI, Frame, or ROBO

Use Tags to Organize Your Licenses

On the Licenses > License Inventory page on the Nutanix Support portal, you can label your licenses with a tag to conveniently group them.

Tags help provide more granularity and ease of use to your license management.

For example, you can apply a tag to licenses to group them according to:

  • A common expiration date
  • Use case (clusters used by remote offices)
  • Assigned organization (clusters used by an engineering department)

When you tag one or more licenses, you can then:

  • Sort them by tag name on the support portal
  • Perform license actions (on the portal) on clusters grouped by tag (especially when selecting licenses manually)

Creating and Adding a Tag to Your Licenses

Label your licenses with a tag to conveniently group them. You can add multiple tags to a single license.

Procedure

  1. Log on to the Nutanix Support Portal, then click Licenses > License Inventory .
    The portal displays a sortable table listing your purchased licenses. You can sort the table by clicking any table heading (License ID, Tier, License Class, and so on).
  2. To create a new tag:
    1. Click Actions > Create a Tag , then enter a tag name in the New Tag field.
    2. Click Create .
  3. To apply one or more existing tags:
    1. Select the licenses that you want to tag, then click Actions > Add or Remove Tag .
    2. Click the field and then select one or more existing tags from the drop-down list.
    3. Click Apply .
    The Support Portal web page refreshes and displays your tagged licenses.

Renaming a Tag or Removing a Tag from a License

About this task

This procedure describes how to rename a tag or remove a tag from a license (that is, how to untag it).
Note: To delete an existing tag, which also removes the tag from any licenses where it is applied, see Deleting a Tag.

Procedure

  1. Log on to the Nutanix Support Portal, then click Licenses > License Inventory .
    The portal displays a sortable table listing your purchased licenses. You can sort the table by clicking any table heading (License ID, Tier, License Class, and so on).
  2. To rename a tag:
    1. Click Actions > Rename Tag , then select the tag that you want to rename from the drop-down list.
    2. Enter a new tag name, then click Rename .
      Any licenses tagged with the old tag are automatically tagged with the new tag.
  3. To remove a tag from a license:
    1. Select the licenses with the tag that you want to untag, then click Actions > Add or Remove Tag . For the Add or Remove Tag option to display, you must select at least one license.
      The dialog box shows the current tags for the selected license.
    2. In the dialog box, remove one or more tags.
      To remove tags, you can click the tag name, unselect tags from the drop-down list, or press the backspace key on a tag.
    3. Click Apply .
    The Support Portal web page refreshes and displays your tagged licenses.

Deleting a Tag

About this task

This procedure enables you to permanently delete an existing tag, which also removes the tag from any licenses where it is applied.
Note:
  • To remove tags from individual licenses (that is, untag licensed clusters) without deleting the tag itself, see Renaming a Tag or Removing a Tag from a License.
  • If you accidentally delete a tag, you can recreate it as a new tag and apply it. See Creating and Adding a Tag to Your Licenses.

Procedure

  1. Log on to the Nutanix Support Portal, then click Licenses > License Inventory .
    The portal displays a sortable table listing your purchased licenses. You can sort the table by clicking any table heading (License ID, Tier, License Class, and so on).
  2. Click Actions > Delete Tag .
  3. In the dialog box, click Select a tag and then select one or more tags.
    If you know the name of a tag, you can type the name and then select it from the list.
  4. Click Apply .
    The existing tag is deleted, which also removes the tag from any licenses where it is applied. The portal removes the tag from the list.

License Warnings in the Web Console

Most license warnings in the web console are related to license violations or licenses that are about to expire or expired. In most cases, the resolution is to extend or purchase licenses.
  • AOS and Prism Starter licenses never expire for Nutanix NX and third-party OEM hardware appliances. Go to Licensing in the web console to check your Starter license expiration status.
  • AOS, Prism Pro, and Ultimate licenses have an expiration date. The web console alerts you both before and after a license expires.
  • If you attempt to use features not available in the license level for the cluster, a warning is issued. Upgrade your license level if you require continued access to Pro or Ultimate features.
  • If a cluster includes nodes with different license levels, the cluster and each node in the cluster defaults to the minimum feature set enabled by the lowest license level. For example, if two nodes in the cluster have AOS Pro licenses and two nodes in the same cluster have AOS Ultimate licenses, all nodes will effectively have AOS Pro licenses and access to that feature set only. Attempts to access AOS Ultimate features in this case result in a Warning in the web console.
  • If you are using a trial license, the warning shows the expiration date and number of days left in the trial period. Typically, the trial period is 60–90 days. The license name will also display as a Trial license level.
  • During upgrade of AOS, the Prism web console might incorrectly display a license violation alert. After you complete the upgrade, the alert is not displayed.

Nutanix Calm License Warnings in the Web Console

Most license warnings in the web console are related to license violations or licenses that are about to expire or have expired. In most cases, the resolution is to extend or purchase licenses.

  • Nutanix Calm licenses have an expiration date. The web console alerts you both before and after a license expires.
  • An alert is generated providing the number of license packs required for compliance. For example: "The cluster needs 1 additional license pack. Please expand Calm capacity with 1 pack."
  • If the license pack is already expired, an alert is generated.
Note:
  • Any powered off VMs are also counted for license calculation.
  • If a VM is deleted manually from a cloud provider without deleting the app from Calm, it will be counted for license calculation.
  • After you apply a new license, the alerts are auto-resolved only after the next scheduled Nutanix Cluster Checks run.

Disabling a Trial License

This procedure describes how to disable the Prism Pro or Prism Ultimate trial tier and remove the license violation message that appears in Prism Central if Prism Pro features are enabled without a valid license.

Procedure

  1. Log on to Prism Central (PC), click the gear icon, and select Licensing .
  2. In the Licensing page, click View & Manage Features .
  3. For example, click Disable Ultimate Trial .

    A message box appears stating that the operation is reversible. Click the Disable Ultimate Trial button. This immediately logs you out of PC and returns you to the logon page. When you log back on, the features are disabled.

Disabling An Existing Trial License After Upgrading

Follow these steps to disable the trial license if your current license tier is Prism Pro Trial and a Prism Ultimate trial becomes available after upgrading. Enable and then disable the Ultimate trial to remove the trial from the cluster.

Procedure

  1. Log on to PC, click the gear icon, and select Licensing .
  2. In the Licensing page, click View & Manage Features .
  3. Click Enable Ultimate Trial .
    This immediately logs you out of PC and returns you to the logon page. When you log back on, the features are enabled.
  4. In the Licensing page, click View & Manage Features .
  5. Click Disable Ultimate Trial .

    A message box appears stating that the operation is reversible. Click the Disable Ultimate Trial button. This immediately logs you out of PC and returns you to the logon page. When you log back on, the features are disabled.

API Key Management

After you log on to My Nutanix and depending on your role, the API Key Management tile enables you to create and manage API keys. Use these keys to establish secure communications with Nutanix software and services. Typical user roles able to access this tile include Account Administrators for Cloud Services and existing Support Portal users.

An API key is a unique token that you can use to authenticate API requests associated with your Nutanix software or service. You can create multiple API keys for a single product or service. However, you can use an API key only once to register with that software or service.

It is a randomly generated unique UUID4 hash and can be 36–50 characters long. When you create the key, you choose a service (such as Licensing) and the key is mapped directly to that service. You can use it for the chosen service only. For example, you cannot use a Support Case key with Prism Central (PC).
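For illustration only, the following shell one-liners print a random UUID4 value in its canonical 36-character hyphenated form. This is just a sketch of what such a token looks like; it is not how Nutanix generates or validates API keys.

  $ uuidgen                                          # on most Linux distributions this produces a random (version 4) UUID
  f47ac10b-58cc-4372-a567-0e02b2c3d479
  $ python3 -c 'import uuid; print(uuid.uuid4())'    # equivalent Python one-liner
  9b2f6a7e-3c41-4d8a-b1f0-2e5c7d9a4f13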

You can use the API key for secure communication in many scenarios, including but not limited to the following.

  • Licensing. 1-click licensing for Nutanix software
  • Create Support Case. Generating support cases associated with your account
  • Prism Ops. API IAMV1 token generation and authorization

API Key Scope

Scope is the service group, feature, or specific function of a service group and is defined as part of a unique key value pair (scope name paired with the unique scope category). For example, with Prism Ops as the scope, the generated key enables you to authenticate when using the PC or Prism Element (PE) APIs.

The API Key is restricted for use depending on the scope you choose. For example, a key created with a scope of Licensing allows you to enable 1-click licensing through the PE or PC web console.

Creating an API Key

The API Key Management tile is available depending on your role. Typical user roles able to access the tile include Account Administrators for Cloud Services and existing Support Portal users.

Before you begin

  • Ensure that you have an active My Nutanix account with valid credentials.
  • You can create one or more API keys associated with a scope (that is, service such as Licensing). However, you can use only a specific key once when you register it with the service (for example, when enabling 1-click licensing in the web console).

Procedure

  1. Log on to your My Nutanix or cloud account at https://my.nutanix.com.
  2. In the My Nutanix dashboard, find the API Key Management tile and click Launch .
    If you have previously created keys, this page displays a list of keys and the +Create API Keys button.
  3. To begin, click +Create API Keys .
    The Create API Key window is displayed.
  4. Do these steps.
    1. Name. Enter a unique name for your API key to help you identify the key.
    2. Scope. Select a scope category from the Scope drop-down list.
      Scope is the service group, feature, or specific function of a service group and is defined as part of a unique key value pair (scope name paired with the unique scope category). For example, with Prism Ops as the scope, the generated key enables you to authenticate when using the Prism Central or Prism Element APIs.
  5. Click Create .
    The Created API Key window is displayed. Do not click Close yet. The window always shows an API Key field. Depending on the scope you chose, it might also show a Key ID field and a Download Optional SSL Key button.
    Caution: You cannot recover the generated API key and key ID after you close this window. Click the icon to copy the API key and key ID and store it securely. If you forget or lose the API Key, generate a new API Key.
    Figure. Example API Key and Key ID for Prism Ops Scope Click to enlarge Created API Key window. You cannot recover the generated API key and key ID after you close this window. Copy the API key and key ID and store it securely. If you forget or lose the API Key, generate a new API Key.

    Figure. Example API Key and SSL Key for Licensing Scope Click to enlarge API Key window for Licensing scope. You cannot recover the generated API key and key ID after you close this window. Copy the API key and store it securely. If you forget or lose the API Key, generate a new API Key.

  6. Copy each key field value and store it securely for use (a minimal storage sketch follows this procedure).
  7. To optionally generate an SSL key, click Download optional SSL Key to save it for use when you register the API Key.
    Download optional SSL Key appears if you chose the Licensing or Create Support Case scope category. To enable 1-click licensing, the SSL key is required; download it if this is your use case.
  8. Click Close .
    The API Key Management page shows the key that you created (by name and scope) and any other keys you have created.
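As noted in step 6, store the copied key somewhere only you can read. The following is a minimal shell sketch, assuming a hypothetical file name (~/.nutanix_api_key); adapt it to your own secret-management practice.

  $ install -m 600 /dev/null ~/.nutanix_api_key           # create an empty file readable and writable only by you
  $ printf '%s\n' 'PASTE-YOUR-API-KEY-HERE' > ~/.nutanix_api_key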

Viewing API Key Details

The API Key Management tile is available depending on your role. Typical user roles able to access the tile include Account Administrators for Cloud Services and existing Support Portal users.

Procedure

  1. Log on to your My Nutanix or cloud account at https://my.nutanix.com.
  2. In the My Nutanix dashboard, find the API Key Management tile and click Launch .
    If you have previously created keys, this page displays a list of keys and the +Create API Keys button.
  3. To view information about a specific API key, click Details to the right of the API key.
    A window is displayed, showing audit information about the key.
  4. To optionally generate an SSL key, click Download optional SSL Key to save it for use when you register the API Key.
    Download optional SSL Key appears if you chose the Licensing or Create Support Case scope category when you created the key.
    Figure. Download optional SSL Key Click to enlarge Download optional SSL Key button

  5. Click Close .

Deleting an API Key

The API Key Management tile is available depending on your role. Typical user roles able to access the tile include Account Administrators for Cloud Services and existing Support Portal users.

About this task

Delete an API key. Deleting an API key also revokes it in any scope or service where it is used. It might also disrupt any service operations where you have registered this API key.

Procedure

  1. Log on to your My Nutanix or cloud account at https://my.nutanix.com.
  2. In the My Nutanix dashboard, find the API Key Management tile and click Launch .
    If you have previously created keys, this page displays a list of keys and the +Create API Keys button.
  3. To delete a specific API key, click Delete to the right of the API key.
    Warning: Deleting an API key also revokes it in any scope or service where it is used. It might also disrupt any service operations where you have registered this API key.
  4. To confirm that you want to delete the API key, click Delete .
  5. Click Close .
Mine™ with Veeam Guide

Mine 2.0.1

Last updated: 2022-09-11

Release Notes

Release 2.0.1

Note the following items that are new in release 2.0. The release 1.0 notes also apply to 2.0 (except for the required AOS version).

  • A Mine™ cluster requires a current LTS version of AOS (5.15 or 5.20). AOS STS versions are not recommended.
  • Foundation for Mine™ with Veeam automatically installs Veeam Backup & Replication v10. (Release 1.0 installed Veeam Backup & Replication v9.5.)
  • The "Physical Cluster Usage" and "Storage Throughput" widgets in the Mine™ dashboard now display usage data when you hover the mouse over any point on the timeline.
  • If you see an "Unable to load Mine configuration settings …" error when viewing the Mine™ dashboard in Prism, the problem is likely DNS related. To work around the issue, access Prism using an IP address instead of a host name. (A DNS check sketch appears at the end of this list.)
  • The Mine™ console now includes an option to upgrade the Mine™ version. However, this is a feature of 2.0, so it cannot be used to upgrade from 1.0 to 2.0 without some intervening manual steps. To upgrade from 1.0 to 2.0, do the following:
    1. Upgrade Veeam Backup & Replication server from 9.5 to 10 patch 2 (see Upgrading from Veeam Availability for Nutanix AHV 1.0 ).
    2. Log on to Prism, go to the VMs dashboard, select the Mine™ foundation VM ("Foundation for Mine with Veeam" VM), and click the Launch Console button (below the table) to open the console. Enter your login credentials when prompted.
    3. Using a text editor, add the following lines to the end of the file /etc/apt/sources.list :
      deb [arch=amd64] https://repository.veeam.com/mine/1/public/updater stable main
      deb [arch=amd64] https://repository.veeam.com/mine/1/public/mine stable main
    4. Enter the following curl command (as root) to add a repository key:
      veeam@minevm$ sudo su
      root@minevm$ curl http://repository.veeam.com/keys/veeam.gpg | apt-key add -
      root@minevm$ exit
    5. Using a text editor, update the /etc/apt/apt.conf.d/50unattended-upgrades file as follows:
      1. Uncomment the following two lines:
        ${distro_id}ESM:${distro_codename}";
        ${distro_codename}-security";
        
      2. In the same section, add the following line:
        "Foundation for Mine With Veeam updater:stable";
      The following is an example of the updated section:
      // Automatically upgrade packages from these (orign:archive) pairs
      Unattended-Upgrade::Allowed-Origins {
              "${distro_id}:${distro_codename}";
              "${distro_id}:${distro_codename}-security";
              // Extended Security Maintenance; doesn't necessarily exist for
              // every release and this system may not have it installed, but if
              // avaliable, the policy for updates is such that unattended-upgrades
              // should also install from here by default.
              "${distro_id}ESM:${distro_codename}";
      //      "${distro_id}:${distro_codename}-updates";
      //      "${distro_id}:${distro_codename}-proposed";
      //      "${distro_id}:${distro_codename}-backports";
              "Foundation for Mine With Veeam updater:stable";
    6. Enter the following two commands to update the installer:
      veeam@minevm$ sudo apt-get update;
      veeam@minevm$ sudo apt-get install nirvana-mine nirvana-appliancemanager;
    7. You should now be able to use the update feature in the Mine™ console (see Upgrading Mine Using Mine Console).
  • After upgrading from Mine™ 1.0 to 2.0, you might see a "Fast clone is disabled, to enable it, turn on Align data block on SOBR extents" message when you open the Mine™ dashboard in Prism. Veeam Backup & Replication v10 supports the fast clone operation for XFS, and it is enabled by default in Mine™ 2.0. However, when upgrading from Mine™ 1.0, you need to enable the fast clone feature manually as described here.
  • A default container (named default-XXX) is created as part of creating a Nutanix cluster. Do not delete or rename the default container before deploying a Mine™ cluster because Mine™ requires this container.
  • The Mine™ dashboard does not appear in Prism if you access the cluster (Prism Element) from Prism Central. It appears in Prism only when you log on to the cluster directly.
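For the DNS-related dashboard error noted above, a quick way to confirm whether the Prism host name resolves from the machine running your browser is a check like the following. This is only an illustrative sketch; prism.example.local is a placeholder for your Prism host name.

  $ getent hosts prism.example.local    # no output means the name does not resolve locally
  $ nslookup prism.example.local        # queries the configured DNS server directly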

Compatibility Matrix of Mine and Veeam Backup & Replication versions

Table 1. Compatibility table for Mine and Veeam Backup & Replication versions
Mine | Version | Veeam Backup & Replication Versions
V1 | 1.0.406 | 9.5.4.2866 and later updates of 9.5
V2 | 1.0.715 | 10.0.0.4461 with KB3161 and later updates of Veeam Backup & Replication 10
V2 patch 1 | 1.0.762 | 10.0.0.4461 with KB3161 and later updates of Veeam Backup & Replication 10
V2 patch 2 | 1.0.1014 | 10.0.0.4461 with KB3161 and later updates of Veeam Backup & Replication 10
V3 | 3.0.1238 | 11.0.0.837 with Cumulative Patch 20210525 and later updates of Veeam Backup & Replication 11
Note:

Later updates of version 10 do not include version 11. For Mine V2, you can install a cumulative patch or a KB, but not Veeam Backup & Replication 11.

Release 1.0

Note the following:

  • This guide provides instructions for installing and deploying a Mine™ cluster. However, only designated Nutanix and Veeam sales or support engineers, or VAR engineers, should perform the installation. (The FoundationMine image ISO to create the Foundation for Mine™ with Veeam VM is provided to the designated Nutanix, Veeam, or VAR engineers.)
  • A Mine™ cluster requires AOS version 5.11.2.1 or later.
  • After installation, DO NOT delete, reconfigure, or power off the Foundation for Mine™ with Veeam VM or any of the six deployed VMs (Veeam-Win-Node1, Veeam-Win-Node2, Veeam-Win-Node3, Veeam-Lin-Node1, Veeam-Lin-Node2, Veeam-Lin-Node3).
  • Do not enable AOS fingerprinting/deduplication on any of the containers on the Nutanix Mine™ with Veeam deployment.
  • Instead of using the default admin account to administer a Mine™ cluster, it is recommended that you create a separate administrator account for Mine™. This avoids two potential issues with the default admin account: warnings may appear about external authentication, and robot accounts are not obliged under security policies to change their passwords every n months.
  • If the custom Mine™ dashboard does not appear in Prism (see Monitoring the Cluster), refresh (clear) the cache on your browser and redisplay Prism. The missing Mine™ dashboard should now appear.
  • When uploading a Windows image to install (see step 7 in Deploying a Mine™ Cluster), be sure it is Windows Server 2019 Standard Edition, as that is the only supported version currently.

Overview

Video: Mine™ Overview This video https://youtu.be/9NQRreAF1I0 is from Nutanix, Inc. on YouTube.

Nutanix Mine™ is the product name for joint solutions between Nutanix and select data protection software vendors. Nutanix Mine™ is a dedicated backup solution, where only backup component VMs run on the Mine™ cluster and the cluster storage is used to store backup workloads.

This version of Mine™ is a fully integrated data protection appliance that combines the Nutanix AOS software with the Veeam Backup & Replication solution. Mine™ can provide data protection for any applications running in a Nutanix cluster or for any virtualized workload running in your data center. Mine™ includes the following features:

  • Enterprise-grade data protection to any workload in your data center
  • Integrated installation and configuration experience
  • Integrated management experience
  • Seamless scale-out, self-healing, and break-fix functionality
Note: The Mine™ cluster is not a disaster recovery solution; thus, restoring primary or production workloads to the Mine™ cluster is not supported currently. Attempting to run restored VMs on a Mine™ cluster disrupts the HA functionality of the cluster. In case of a node failure, the system might not be able to recover all the Veeam services automatically.

The Mine™ appliance comes in three initial sizes: "extra small" and "small" versions that include one preconfigured NX-1465 block, and a "medium" version that includes two preconfigured NX-8235 blocks. You can add additional NX-8235 blocks (but not NX-1465 blocks) to scale out the cluster for more capacity.

Table 1. Mine™ Appliance Specifications
Specification | X-Small | Small | Medium | Scale Out
Model | NX-1465-G7 | NX-1465-G7 | NX-8235-G7 (x 2) | NX-8235-G7
Rack Size | 2U | 2U | 4U | 2U
Number of Nodes | 4 | 4 | 4 | 2
Processor (per node) | 2x Intel Xeon Silver 4210 (10-core 2.2 GHz) | 2x Intel Xeon Silver 4210 (10-core 2.2 GHz) | 2x Intel Xeon Silver 4214 (12-core 2.2 GHz) | 2x Intel Xeon Silver 4214 (12-core 2.2 GHz)
RAM (per node) | 192 GB | 192 GB | 192 GB | 192 GB
SSD (per node) | 1x 1.92 TB | 1x 1.92 TB | 2x 1.92 TB | 2x 1.92 TB
HDD (per node) | 2x 6 TB | 2x 12 TB | 4x 12 TB | 4x 12 TB
Networking (per node) | 2 or 4x 10GbE | 2 or 4x 10GbE | 2 or 4x 10GbE, or 2x 25/40 GbE | 2 or 4x 10GbE, or 2x 25/40 GbE
Raw Capacity | 48 TB | 96 TB | 192 TB | 96 TB
Effective Capacity | 30-50 TB | 60-100 TB | 120-200 TB | 60-100 TB
Veeam Universal Licenses (VUL) | 0 (naked Mine) | 250 | 500 | 250 (additional)

Installation

Installing and configuring your Mine™ with Veeam solution requires the following steps:

  1. Install the Mine™ appliance (see Installing a Mine™ Appliance).
  2. Deploy a Mine™ cluster (see Deploying a Mine™ Cluster).
  3. Configure the Veeam backup and replication solution (see Configuring Veeam).

Installing a Mine™ Appliance

About this task

To install a Mine appliance at your site, do the following:

Note: The Mine™ cluster itself only supports AHV as a hypervisor. While Mine with Veeam can back up workloads from other hypervisors or physical infrastructure, the Mine cluster itself is deployed with AHV as the hypervisor.

Procedure

  1. Unpack the block and nodes (see the Getting Started Guide for steps 1-4).
  2. Mount the block in a rack.
  3. Connect each node to the network.
  4. Create a cluster of the Mine nodes with AHV as the hypervisor.

    It is recommended to use the AHV version bundled with the supported AOS (LTS) package. The recommended memory size for the Controller VMs is 32 GB.

  5. Download the NutanixFoundationForMineWithVeeam-<ver#>.vmdk file from the Nutanix Mine™ download page in the Nutanix support portal.
  6. After the cluster is up and running, import the Mine foundation VM image (VMDK file) that you downloaded in the previous step (see the "Configuring Images" section in the Prism Web Console Guide ).
    Note: See the "Logging Into the Web Console" section in the Prism Web Console Guide if you need help when logging on to Prism for the first time. In addition, an iSCSI data services IP address should be specified for the cluster before deploying Mine (see the "Modifying Cluster Details" section).
  7. Create a "Foundation for Mine with Veeam" VM from the imported image, connect it to the production network, and start it (see "VM Management" in the Prism Web Console Guide ).

    Allocate 4 vCPUs and 4 GB of memory to this VM. Select Clone from Image Service as the operation and FoundationMine as the image when adding the disk.

    Note: The VM is named "Foundation for Mine with Veeam" throughout this document, but you can name the VM anything you want when you create it.

Deploying a Mine™ Cluster

Before you begin

Make sure the Mine™ appliance is up and running before attempting this procedure.

About this task

Video: Mine™ Deployment Example This video https://youtu.be/e13I-VXVaoo is from Nutanix, Inc. on YouTube.

To deploy a new Mine™ cluster, do the following:

Procedure

  1. Do one of the following to access the Mine™ console:
    • If DHCP is configured for the production network, open a web browser and enter http://foundation_for_mine_with_veeam_vm_ip_addr. This defaults to port 8743, which is the port for the Mine™ web console.
      Note: The supported browsers are the current version and two major versions back of Firefox, Chrome, and Safari, plus Internet Explorer version 11.
    • If DHCP is not configured, log on to Prism, go to the VMs dashboard, select the Foundation for Mine with Veeam VM, and click the Launch Console button (below the table) to open the console. In the console do the following to manually set the IP address:
      1. Log on to the VM (both user name and password are "veeam").
      2. Open the /etc/network/interfaces file for editing. For example, to use the nano editor, enter
        sudo nano /etc/network/interfaces
      3. Find the following lines in the file:
        auto eth0
        iface eth0 inet dhcp
      4. Replace those lines with the following lines:
        auto eth0
        iface eth0 inet static
        address yourIpAddress
        netmask yourSubnetMask
        gateway yourGatewayAddress
      5. Save the file and then run the following command:
        sudo service networking restart

        If you require an Active Directory join, you also need to specify a DNS server with a proper Active Directory record.

  2. Enter your credentials (user name and password) in the login page.

    The default user name and password are both " veeam ". On the first login, you are prompted to change the password. Be sure to remember (save somewhere safe) the new password.

    Figure. Mine™ Login Screen Click to enlarge Nirvana login screen
  3. In the Nutanix Mine with Veeam Configuration screen, click the Setup box.
    Figure. Mine™ with Veeam Configuration Menu Click to enlarge Mine™ appliance configuration main menu

    The Nutanix Mine with Veeam cluster setup screen appears. The setup workflow appears on the left with the step details on the right.

  4. In the End User License Agreement screen, read the license agreements, check the two boxes at the bottom ( I accept the terms of license agreement and I accept the terms of the third-party components license agreement ), and then click Next .
    Figure. Cluster Setup: EULA Click to enlarge end user license agreement page
  5. In the Nutanix Cluster Credentials page, do the following in the indicated fields:
    Figure. Cluster Setup: Prism Credentials Click to enlarge cluster credentials page
    1. Prism Element IP Address/Hostname : Enter the virtual IP address for the cluster (if configured) or the IP address (or host name) of a Controller VM in the cluster.
      Note: It is recommended that you use the cluster virtual IP address. If a Controller VM IP address is used and that node goes down, you will need Nutanix customer support to reconfigure Mine™ to work with a different node IP address.
    2. Description (optional): Enter a description for the cluster.
    3. Prism Element User Name : Enter the cluster administrator (Prism) user name.
      Note: It is recommended that you create a separate local account for Mine™. Active Directory is not recommended because an issue with Active Directory could prevent access to the Mine™ cluster.
    4. Prism Element Password : Enter the cluster administrator password.
  6. In the Nutanix Mine with Veeam Cluster Information page, review the information and then click the Next button.

    This page displays information about the nodes in the cluster. Verify the information is correct before proceeding.

    Figure. Cluster Setup: Node Information Click to enlarge cluster information page
  7. In the Windows Upload page, choose one of the following and then click the Next button.
    • To upload a Windows ISO file from your workstation, click the Use External ISO radio button, click the Browse button, and select that ISO file on your workstation.
    • If the ISO file is already loaded, click the Use internal ISO radio button and select Windows from the pull-down list.
    Note: Windows Server 2019 Standard Edition is the only supported version.
    Figure. Cluster Setup: Upload Windows ISO Click to enlarge Windows ISO upload page
  8. In the Veeam Backup & Replication Deployment Settings - License Upload page, click the Upload File button, upload your Veeam license file, and then click the Next button.
    Note: Only VUL licenses are accepted. If you are an existing Veeam customer and do not have instance-based (VUL) licenses, obtain an evaluation license from Veeam and enter the evaluation key here. Once the deployment is complete, you can apply your current Veeam license through the Veeam Backup & Replication console.
    Figure. Cluster Setup: Upload Veeam License File Click to enlarge Veeam license file upload page
  9. In the Veeam Backup & Replication Deployment Settings – Network Settings page, do the following in the indicated fields:
    Figure. Cluster Setup: Veeam Deployment Network Settings Click to enlarge Veeam B&R deployment network settings page
    1. Network Name : Displays the network name.

      This is a read-only field for the primary network configuration. However, if you create a guest network (later in this procedure), select the name for the guest network from the pull-down list for an existing network or enter the name for a new network.

    2. VLAN ID : Displays the VLAN ID for the network.

      This is a read-only field for the primary network configuration, but you specify the VLAN ID for a guest network. If you specify a VLAN other than 0, make sure that network switches are configured accordingly.

    3. Starting IP : Enter a starting IP address. The VMs are assigned sequential IP addresses after the starting IP address.

      Mine™ requires eight available IP addresses.

    4. Subnet Mask : Enter the subnet mask.
    5. Gateway : Enter the gateway IP address.
    6. DNS1 and DNS2 : Enter the DNS server IP address in the DNS1 field. To specify a second DNS server, enter that address in the DNS2 field.

      If you require an Active Directory join, you need to specify a DNS server with a proper Active Directory record.

    7. VM Hostname and IP Address : Update the VM names or IP addresses as needed.

      The VM names and IP addresses are populated automatically. There are three Windows VMs named Veeam-Win-Node x and three Linux VMs named Veeam-Lin-Node x with x being a sequential number from 1 to 3. You can change the name of a VM by entering a different name in the VM Hostname field. The IP addresses are assigned sequentially to the VMs starting after the Starting IP address, but you can change that address in the IP Address field for a VM.

      The Windows VMs are configured with 8 vCPUs and 16 GB memory, and they are used to manage the Veeam Backup & Replication application. The Linux VMs are configured with 8 vCPUs and 128 GB memory, and they are used to manage the Veeam scale out backup repository.

      Note: If you have multiple Mine™ clusters, it is recommended that you provide custom host names instead of the defaults to prevent duplicate namespace issues.
    8. (optional) To create an additional network for guest processing, check the create additional network for guest processing box.

      If you want the VMs to be backed up on a different network than the Veeam infrastructure, you can create this additional network for that purpose. (Creating an additional network is optional.)

    9. When all the fields are correct, click the Next button.

      The installer verifies the network configuration before proceeding.

  10. If the guest network box was checked in the previous step, the network configuration screen reappears with an additional Network field at the top. Enter the values for the guest network and then click the Next button.

    In the Network field, select either New (for a new network) or Existing (for an existing network) from the pull-down list.

  11. In the Windows Credentials page, do the following in the indicated fields:
    Figure. Cluster Setup: Windows Credentials Click to enlarge Windows credentials page
    1. Local Windows Administrator : Enter the user name for the Windows administrator.
    2. Password : Enter the user password.
    3. Repeat Password : Re-enter the password.
    4. To enable Microsoft Active Directory integration, check the box. This displays the following additional fields.
      Note: Joining to Active Directory is optional and not recommended in most cases.
    5. Domain Name : Enter the AD domain name.
    6. User Name : Enter the AD administrator user name in the domain\name format.
    7. Password : Enter the AD administrator password.
    8. When all the fields are correct, click the Next button.
  12. In the Review Configuration Settings page, review the information and then do one of the following:
    • If the information is correct, click the Start Install button.
    • If the information is incorrect, click the Previous button and fix the configuration as needed in the preceding screens. Return to this page and click the Start Install button.

    The top part of the page displays information about the VMs Mine™ will create, and the bottom part displays cluster information including virtual IP address, storage capacity, and node count.

    Figure. Cluster Setup: Review Configuration Settings Click to enlarge review configuration settings page

    The installation begins. It typically takes over an hour (sometimes over two hours) for the installation to complete. A progress bar appears with status messages as the installation progresses. You can monitor the progress in more detail by logging on to Prism and checking the Task and VM dashboards (see Monitoring the Cluster).

    Figure. Cluster Setup: Installation Progress Click to enlarge installation progress page
  13. When the Windows Licensing page appears, do one of the following:
    • To activate Windows (optional), check the box, enter a Windows license key for each VM in the Product Keys fields, and then click the Activate button.
    • To continue without activating Windows, click the Next button.
    Figure. Cluster Setup: Windows Licensing Click to enlarge installation progress page

    When the installation completes, a success (or error) message appears. The success message includes a link to Prism; click the link to log on to Prism so you can monitor and manage the Mineâ„¢ cluster.

    Figure. Cluster Setup: Installation Complete Click to enlarge installation success/failure page

What to do next

Deploying a Mine™ cluster creates a volume group, and both the volume group and Foundation for Mine™ with Veeam VM are added automatically to a protection domain. A schedule is set up to take daily snapshots of the protection domain. Tune the schedule (if needed) per your company security policy.

It is recommended that you enable erasure coding (disabled by default). Erasure coding can provide significant storage savings for a Mine™ cluster. See the "Erasure Coding" and "Modifying a Storage Container" sections in the Prism Web Console Guide for more information.

Configuring Veeam

About this task

To configure the Veeam backup and replication solution for a Mine™ cluster, do the following:

Procedure

  1. Log on to Prism.
  2. Go to the Mine with Veeam dashboard (see Monitoring the Cluster) and click the Launch Console link in the Cluster widget.

    This opens a Veeam console window at the login screen. Enter the Windows administrator credentials you supplied when deploying the cluster (see Deploying a Mine™ Cluster).

    You can also launch the Veeam console by going to the VMs dashboard, selecting the Veeam-Win-Node1 VM, and then clicking the Launch Console button below the table.

  3. Click the Veeam Backup icon on the desktop to open that application.
    Figure. Veeam VM Desktop Click to enlarge example desktop view of Veeam icon display

  4. Configure the Veeam backup and replication solution as desired. See the Veeam Backup & Replication for Nutanix Mine™ (Getting Started Guide) for instructions.

    Depending on which systems you want to back up with Nutanix Mine™, also check the relevant Veeam guides to configure Veeam Backup & Replication for your environment.

Administration

After deploying a Mine™ cluster, you can monitor activity and perform administrative tasks as needed.

Note: Depending on the task, administration is through either the Mine™ or Prism web console. To access the desired console, open a web browser and enter one of the following addresses:
  • Mine console: http://foundation_for_mine_with_veeam_vm_ip_addr. This defaults to port 8743, which is the port for the Mine™ web console.
  • Prism console: http://cvm_ip_addr where cvm_ip_addr is the IP address for one of the Controller VMs in the cluster or the virtual IP address if one was created. This defaults to port 9440, which is the port for the Prism web console. (A quick reachability check appears after this list.)
  • You can monitor cluster usage and health through Prism (see Monitoring the Cluster).
  • You can expand the cluster storage capacity by adding Mine™ blocks (see Expanding the Cluster).
  • You can upgrade the Mine™ version for the cluster (see Upgrading Mine Using Mine Console).
  • You can upgrade AOS in a Mine™ cluster, but it requires a few extra steps (see Upgrading AOS).
  • You can update your Veeam, Mine™, or Prism user credentials (see Updating User Credentials).
  • If you encounter a problem with the cluster, you can download a support bundle that contains various logs and other relevant data (see Downloading a Support Bundle).
  • If the installation fails or you want to start over for any reason, you can reset the cluster back to the original state (see Resetting the Cluster).
  • If you want to perform cluster actions that require the backup processes be suspended such as when expanding (or contracting) the cluster or upgrading AOS, you can enable a maintenance mode (see Maintenance Mode).
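As a quick sanity check that the consoles listed in the note above are reachable from your workstation, you can probe the default ports. This is only an illustrative sketch: it assumes the nc (netcat) utility is available, and the host names are the same placeholders used in the note.

  $ nc -zv foundation_for_mine_with_veeam_vm_ip_addr 8743    # Mine™ web console port
  $ nc -zv cvm_ip_addr 9440                                  # Prism web console port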

Monitoring the Cluster

You can monitor the Mine™ cluster health and activity through Prism. The Prism Web Console Guide describes how to use Prism. (To determine the AOS version your Mine™ cluster is running, go to the About Nutanix option under the user_name drop-down list in the Prism main menu.)

Mine™ with Veeam Dashboard

The Prism web console includes a custom Mine with Veeam dashboard specific to a Mine™ cluster, which appears by default when you first log on to Prism. To view this custom dashboard at any time, select Mine with Veeam from the pull-down list on the far left of the main menu.

The custom dashboard displays the following eight information tiles (widgets):

  • Cluster . The cluster tile provides quick visual indicators for the following components: cluster health (overall), Veeam implementation, Mine™ components, and storage capacity (available). Green indicates a healthy component, yellow indicates a warning condition, and red indicates a critical condition.
    • Click the Launch Console link to open a Veeam console.
    • Click the Mine Platform link to open the Mine™ console.
  • Protection . The protection tile displays a summary of the VMs and hosts protected currently.
  • Job Status . The job status tile displays a summary of the running, disabled, and idle jobs in the cluster currently.
  • Physical Cluster Usage . The physical cluster usage tile displays a timeline graph of storage usage in the cluster.
  • Storage Throughput . The storage throughput tile displays a timeline graph of storage throughput in the cluster.
  • Capacity Usage . The capacity usage tile summarizes the current storage usage and available capacity (see Monitoring Storage).
  • Nutanix Alerts . The Nutanix alerts tile displays a list of the current Nutanix-specific alerts.
  • Veeam Backup & Replication Alerts and Events . The Veeam alerts and events tile displays a list of the current Veeam-specific alerts and events.
Figure. Prism Mine™ Dashboard Click to enlarge example Prism Mine™ dashboard

Other Dashboards

The Prism web console includes other dashboards to monitor specific elements of your cluster.

Note: All sections and chapters cited below are located in the Prism Web Console Guide.
  • You can monitor the Mine™ VMs through the VMs dashboard. This dashboard provides a summary table of the VMs and allows you to view details about each VM. See the "VMs Dashboard" section for more information.
    Figure. Prism VMs Dashboard (table view) Click to enlarge example Prism VMs dashboard table view

  • Mine™ automatically creates a storage container and volume group that you can monitor through the Storage dashboard. This dashboard provides summary and detailed information about storage containers, volume groups, and storage pools. See the "Storage Dashboard" section for more information.
    Figure. Prism Storage Container Tab Click to enlarge example Prism storage container tab

    Figure. Prism Volume Group Tab Click to enlarge example Prism volume group tab

  • You can monitor the progress and outcomes for Mine-related tasks through the Tasks dashboard. See the "Task Status" chapter for more information.
  • You can review the Mine-related alerts and events through the Alerts dashboard. See the "Alert and Event Monitoring" chapter for more information.

Monitoring Storage

When a Mine™ cluster is full, the cluster may become unavailable, and you will not be able to continue backup operations. To prevent such a situation, Mine™ includes a special monitoring feature (sometimes referred to as a "watchdog") that dynamically monitors storage usage and takes action as necessary to avoid reaching a storage-full condition. If available storage space in the cluster falls below the minimum amount, the monitor automatically stops and disables Veeam Backup & Replication jobs. The monitor is regulated by three thresholds:

  • "Low on space" threshold ( restart_jobs_threshold_percent parameter). When available storage space on a cluster reaches the specified threshold, Veeam Backup & Replication starts to upload the VM backup files to Capacity Tier. For details, see the "Capacity Tier" section of the Veeam Backup & Replication User Guide .
  • "Job processing is impacted" threshold ( cancel_jobs_threshold_percent parameter). When available storage space on a cluster reaches the specified threshold, Mineâ„¢ disables all Veeam Backup & Replication jobs. Already started jobs continue the session, but subsequently scheduled runs are suspended until more storage space become available.
  • "Immediate stop" threshold ( stop_issuing_jobs_threshold_percent parameter). When available storage space on a cluster reaches the specified threshold, Mineâ„¢ immediately stops all running jobs.

The monitor automatically calculates and defines the threshold values according to your environment resources. (If you want to change the default threshold values, contact technical support.) In addition, the monitor regulates the location of VMs. If Veeam repository extents and backup proxies are deployed on different AHV nodes, the monitor transfers them to one node.
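To make the percentage thresholds concrete, here is a purely illustrative calculation. The 10% figure and the capacities are invented for the example; the real threshold values are derived from your environment by the monitor, as described above.

  $ awk 'BEGIN { total_tb=192; free_tb=18; pct=free_tb/total_tb*100;
         if (pct < 10) printf "%.1f%% free: below threshold, jobs are disabled\n", pct;
         else          printf "%.1f%% free: ok\n", pct }'
  9.4% free: below threshold, jobs are disabled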

By default, Mine™ reserves enough storage space for node rebuild should one of the nodes fail. If you want to add new VMs on the cluster or change the default monitor threshold values, consider leaving enough space for the rebuild of the node.

Note: When you free up space on a Veeam backup repository, it may take up to six hours until you see the change reflected in the Mine™ dashboard.

Expanding the Cluster

About this task

To increase the storage capacity of your Mine™ cluster, you can add an expansion block (see Overview). To add the nodes in an expansion block to the cluster, do the following:

Procedure

  1. Log on to the Mine™ console and enable maintenance mode (see Enabling Maintenance Mode).
  2. Log on to the Prism console and add the new nodes in the standard way (see Expanding a Cluster in the Prism Web Console Guide ).
  3. Disable maintenance mode (see Disabling Maintenance Mode).

Upgrading Mine

You can upgrade the Mine software whenever a new version is available.

Upgrading Mine Using Mine Console

You can upgrade the Mine software using the Mine console whenever a new version is available.

About this task

To check for and install an update, do the following:

Procedure

  1. Log on to the Mine™ console.
  2. Select Check updates from the main menu screen.
    Figure. Main Menu (check updates option) Click to enlarge main menu check updates option

  3. In the Updates page, do the following:
    Figure. Check Updates Page Click to enlarge check updates page example

    1. To check for available updates, click the Check updates button.

      Check the appropriate box to automatically check for available updates.

    2. When an update is available, the display changes. Click the Start update button to start the update.
      Figure. Start Update Page Click to enlarge start update page example

    3. If an update requires a reboot and you want to specify when that reboot occurs, check the appropriate box and specify the time in the clock field to the right.

      If a time is not specified, the reboot (if required) happens immediately when triggered as part of the upgrade process.

Upgrading Mine Using Veeam VM Console for Dark Sites

You can upgrade the Mine software for a dark site using a Veeam VM console. You can use the dark site upgrade process to upgrade the Mine software at locations without Internet access.

Before you begin

Ensure that you download the following packages if you are upgrading from Mine 1.0.
Table 1. Packages
Package Download Location
libonig4_6.7.0-1_amd64.deb http://archive.ubuntu.com/ubuntu/pool/universe/libo/libonig/
libjq1_1.5+dfsg-2_amd64.deb http://archive.ubuntu.com/ubuntu/pool/universe/j/jq/
jq_1.5+dfsg-2_amd64.deb http://archive.ubuntu.com/ubuntu/pool/universe/j/jq/

About this task

To install an update for the Mine software at a dark site (offline upgrade), do the following:

Procedure

  1. Download the .deb package from the Nutanix portal.
  2. Open a console to the Foundation for Mine with Veeam VM.
  3. Log on with your credentials and run the following command:
    $ sudo systemctl start ssh
  4. From a terminal on the local machine, SCP the .deb file to the Foundation for Mine with Veeam VM using the following command:
    $ scp NutanixMineWithVeeamUpdate_v3.0.0.deb veeam@<ip_address>:~/

    where <ip_address> is the IP address of the Foundation for Mine with Veeam VM.

  5. Install the update on the Foundation for Mine with Veeam VM.
    1. If you are upgrading from v1.0, install the packages (libonig4, libjq1, and jq) using the following commands:
      $ sudo dpkg -i libonig4_6.7.0-1_amd64.deb
      $ sudo dpkg -i libjq1_1.5+dfsg-2_amd64.deb
      $ sudo dpkg -i jq_1.5+dfsg-2_amd64.deb
    2. Run the update using the following command:
      $ sudo dpkg -i NutanixMineWithVeeamUpdate_v3.0.0.deb
  6. To restart the Nirvana Management service and complete the upgrade, run the following command (a status check sketch follows this procedure):
    $ sudo systemctl restart NirvanaManagement
    Note: You can also reboot the Foundation for Mine VM for the upgrade to complete.
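Optionally, you can confirm that the service came back up after the restart. A minimal check, assuming the systemd unit is named NirvanaManagement as in the restart command above:

  $ sudo systemctl status NirvanaManagement --no-pager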

Upgrading AOS

Upgrading AOS requires a few additional steps for a Mine™ cluster.

About this task

To upgrade the AOS version in a Mine cluster, do the following:

Note: Upgrading AOS to versions 5.15.x and 5.20.x is supported. Mine is supported on AOS LTS versions only; do not upgrade to an STS version.

Procedure

  1. Log on to the Mine™ console and enable maintenance mode (see Enabling Maintenance Mode).
  2. Upgrade AOS for the cluster (see the "AOS Upgrade" chapter of the Acropolis Upgrade Guide for the target AOS version).
  3. After the AOS upgrade completes successfully, disable maintenance mode (see Disabling Maintenance Mode).
    Note: The Mine™ dashboard is a custom feature in Prism, and that dashboard disappears after upgrading AOS. The following step redeploys the Mine™ dashboard.
  4. To redeploy the Mine™ dashboard, do the following:
    1. Log on to the Mine™ console.
    2. Select Maintenance from the main menu screen.
      Figure. Main Menu (maintenance option) Click to enlarge main menu maintenance option

    3. Select Redeploy Mine Dashboard from the maintenance menu screen.
      Figure. Maintenance Menu (redeploy Mine™ dashboard) Click to enlarge maintenance menu redeploy Mine™ dashboard option

    To verify the Mine™ dashboard is redeployed, log on to Prism, refresh the screen, and check that the dashboard appears again (see Monitoring the Cluster).

Updating User Credentials

You can update your Veeam, Mine™, or Prism user credentials at any time.

About this task

To update user account credentials, do the following:

Procedure

  1. Log on to the Mine™ console.
  2. Select Credentials manager from the main menu screen.
    Figure. Main Menu (credentials manager option) Click to enlarge main menu credentials manager option

  3. In the Credential manager window, do one or more of the following:
    Figure. Credential Manager Window Click to enlarge credentials manager window

    • To update your Veeam user name or password, click the Veeam Backup & Replication Update button. In the pop-up window, update your user name or password in the indicated fields and then click the OK button.
      Figure. User Name and Password Window Click to enlarge credentials manager window

    • To update your cluster (console) user name or password, click the Acropolis Cluster Update button. In the pop-up window, update your user name or password in the indicated fields and then click the OK button.
    • To update your Prism user name or password, click the Nutanix CVM Update button. In the pop-up window, update your user name or password in the indicated fields and then click the OK button.
    • To update your Mine™ user information, click the Set button. In the pop-up window, enter the current account (user) name and password in the first two fields and then the new account name and password in the next two fields (and reenter the new password in the last field to confirm). When all the fields are correct, click the OK button.
      Figure. User Name and Password Window Click to enlarge credentials manager window

Downloading a Support Bundle

If you encounter a problem, you can download a support bundle to troubleshoot the problem.

About this task

The support bundle contains service logs and other related data that can help locate and diagnose system issues. To download a support bundle, do the following:

Procedure

  1. Log on to the Mine™ console.
  2. Select Maintenance from the main menu screen.
    Figure. Main Menu (maintenance option) Click to enlarge main menu maintenance option

  3. Select Download support bundle from the maintenance menu screen.
    Figure. Maintenance Menu (download support bundle) Click to enlarge maintenance menu download support bundle option

    This step downloads a ZIP (compressed) file to your workstation named logs_veeam_<date&time>.zip that contains the support bundle.

    Note: It is recommended that you generate a support bundle and provide it to Nutanix customer support when you open a support case.

Resetting the Cluster

About this task

If the installation is not successful or you want to start over for any reason, you first need to clean up and reset the environment. To reset a Mine™ cluster, do the following:

Caution: Resetting your cluster destroys all backed up data on the cluster and cannot be undone.

Procedure

  1. Log on to the Mine™ console.
  2. Select Maintenance from the main menu screen.
    Figure. Main Menu (maintenance option) Click to enlarge main menu maintenance option

  3. Select Reset Mine™ Cluster from the maintenance menu screen.
    Figure. Maintenance Menu (reset mine cluster) Click to enlarge maintenance menu destroy mine cluster option

  4. In the Nutanix Cluster Credentials screen, do the following in the indicated fields and then click the Next button:
    1. Prism Element IP Address/Hostname : Enter the virtual IP address for the cluster (if configured) or the IP address (or host name) of a Controller VM in the cluster.
    2. Prism Element User Name : Enter the cluster administrator (Prism) user name.
    3. Password : Enter the cluster administrator password.
    Figure. Cluster Credentials Screen Click to enlarge example reset cluster credentials screen

  5. In the Reset Settings screen, check the boxes for the virtual machines, networks, images, volume groups, and storage containers you want to reset.

    Click the plus sign for an entity tab (virtual machines, networks, and so on) to see the list of those entities. All virtual machines, volume groups, and storage containers are checked by default; the networks and images are not. Review the list for each entity and adjust (add or remove check marks) as desired.

    Figure. Reset Settings Screen Click to enlarge example reset settings screen

  6. When the settings are correct, enter I agree in the field at the bottom on the page and then click the Reset button.

    The reset process begins. Time estimates and a progress bar appear. When the process completes, the message "Reset process has been completed successfully" appears. Click the Close button. This redisplays the Nutanix Mine™ with Veeam Configuration screen.

    Figure. Reset Settings Progress Screen Click to enlarge example reset settings progress screen

Maintenance Mode

Mine™ provides a maintenance mode that stops and disables all running backup jobs that target the scale-out backup repository. Maintenance mode allows you to reconfigure cluster settings, expand the cluster, and perform additional tasks that might otherwise disrupt backup operations. When cluster maintenance is complete, you can disable maintenance mode, which resumes the scheduling of backup jobs. (However, backup jobs that were stopped during the maintenance window are not restarted.)

Enabling Maintenance Mode

About this task

To enable maintenance mode, do the following:

Procedure

  1. Log on to the Mine™ console.
  2. Select Maintenance from the main menu screen.
    Figure. Main Menu (maintenance option) Click to enlarge main menu maintenance option

  3. Select Enable maintenance mode from the maintenance menu screen.
    Figure. Maintenance Menu (enable maintenance mode) Click to enlarge maintenance menu enable maintenance mode option

    To verify maintenance mode is enabled, check the cluster widget on the Mine™ dashboard (see Monitoring the Cluster). The text "maintenance mode on" appears when maintenance mode is enabled.

Disabling Maintenance Mode

About this task

To disable maintenance mode, do the following:

Procedure

  1. Log on to the Mine™ console.
  2. Select Maintenance from the main menu screen.
    Figure. Main Menu (maintenance option) Click to enlarge main menu maintenance option

  3. Select Disable maintenance mode from the maintenance menu screen.
    Figure. Maintenance Menu (disable maintenance mode) Click to enlarge maintenance menu disable maintenance mode option

    To verify maintenance mode is disabled, check the cluster widget on the Mine™ dashboard (see Monitoring the Cluster). The text "maintenance mode on" no longer appears when maintenance mode is disabled.
