Configure Federated Authentication from Okta¶

On this page

  • Prerequisites
  • Procedures
    • Configure Okta as an Identity Provider
    • Map your Domain
    • Associate Your Domain with Your Identity Provider
    • Test Your Domain Mapping
    • (Optional) Map an Organization
    • (Optional) Configure Advanced Federated Authentication Options
  • Sign in to Cloud Manager Using Your Login URL

This guide shows you how to configure federated authentication using Okta as your identity provider (IdP).

After integrating Okta and Cloud Manager, you can use your company’s credentials to log in to Cloud Manager and other MongoDB cloud services.

Note

If you are using Okta’s built-in MongoDB Cloud app, you can use Okta’s documentation.

If you are creating your own SAML app, use the procedures described here.

Prerequisites¶

To use Okta as an IdP for Cloud Manager, you must have:

  • An Okta account.
  • A custom, routable domain name.

Procedures¶

Throughout the following procedure, it is helpful to have one browser tab open to your Federation Management Console and one tab open to your Okta account.

Configure Okta as an Identity Provider¶

1. Add a new application to your Okta account¶

  1. In the Okta top navigation, click the Applications tab.
  2. Click the Add Application button.
  3. Click the Create New App button.
  4. Select Web for the Platform field.
  5. Select SAML 2.0 for the Sign on method field.
  6. Click the Create button.
2. Create Okta SAML integration¶

  1. Fill in the App name text field with your desired application name.
  2. Optionally, add a logo image and set app visibility.
  3. Click the Next button.
3. Download Okta certificate¶

  1. Click the Download Okta Certificate button.
  2. Rename the downloaded file to have a .cer extension instead of .cert .
4. Open the Federation Management Console¶

  1. Log in to Cloud Manager.
  2. Use the dropdown at the top-left of Cloud Manager to select the organization for which you want to manage federation settings.
  3. Click Settings in the left navigation pane.
  4. In Manage Federation Settings , click Visit Federation Management App .

5. Create a new Identity Provider¶

  1. In the FMC dashboard, click the Manage Identity Providers button.
  2. Click the Setup Identity Provider button.
6. Enter SAML settings¶

  1. In the FMC dashboard, fill in the data fields with the following values:

    Field Value
    Configuration Name A descriptive name of your choosing.
    Issuer URI and Single Sign-On URL Click the Fill With Placeholder Values button to the right of the text fields. You will get the real values from Okta in a later step.
    Identity Provider Signature Certificate

    Click the Choose File button to upload the .cer file you received from Okta earlier.

    You can either:

    • Upload the certificate from your computer, or
    • Paste the contents of the certificate into a text box.
    Request Binding HTTP POST
    Response Signature Algorithm SHA-256
  2. Click the Next button.

7. Create SAML Integration¶

  1. In this step, copy values from the Cloud Manager FMC to the Okta Create SAML Integration page.

    Okta Data Field Value
    Single sign on URL

    Use the Assertion Consumer Service URL from the Cloud Manager FMC.

    Checkboxes:

    • Use this for Recipient URL and Destination URL : checked
    • Allow this app to request other SSO URLs : unchecked
    Audience URI (SP Entity ID) Use the Audience URI from the Cloud Manager FMC.
    Default RelayState

    Optionally, add a RelayState URL to your IdP to send users to a URL you choose and avoid unnecessary redirects after login. You can use:

    Destination and RelayState URL options:

    • MongoDB Atlas: the Login URL that was generated for your identity provider configuration in the MongoDB Atlas Federation Management App.
    • MongoDB Support Portal: https://auth.mongodb.com/app/salesforce/exk1rw00vux0h1iFz297/sso/saml
    • MongoDB University: https://university.mongodb.com
    • MongoDB Community Forums: https://auth.mongodb.com/home/mongodbexternal_communityforums_3/0oa3bqf5mlIQvkbmF297/aln3bqgadajdHoymn297
    • MongoDB Feedback Engine: https://auth.mongodb.com/home/mongodbexternal_uservoice_1/0oa27cs0zouYPwgj0297/aln27cvudlhBT7grX297
    • MongoDB JIRA: https://auth.mongodb.com/app/mongodbexternal_mongodbjira_1/exk1s832qkFO3Rqox297/sso/saml
    
    Name ID format Unspecified
    Application username Email
    Update application username on Create and update
  2. Click the Show Advanced Settings link on the Okta configuration page and ensure that the following values are set:

    Okta Data Field Value
    Response Signed
    Assertion Signature Signed
    Signature Algorithm RSA-SHA256
    Digest Algorithm SHA256
    Assertion Encryption Unencrypted
  3. Leave the remaining Advanced Settings fields in their default state.

  4. Scroll down to the Attribute Statements (Optional) section and create three attributes with the following values:

    Name Name Format Value
    email Unspecified user.email
    firstName Unspecified user.firstName
    lastName Unspecified user.lastName

    Important

    The values in the Name column are case-sensitive. Enter them exactly as shown.

    Note

    These values may be different if Okta is connected to an Active Directory. For the appropriate values, use the Active Directory fields that contain a user’s first name, last name, and full email address.

  5. Click the Next button in the Okta configuration.

  6. Select the radio button marked I’m an Okta customer adding an internal app .

  7. Click the Finish button.

8. Copy information back to the Cloud Manager FMC¶

  1. On the Okta application page, click the View Setup Instructions button in the middle of the page.

    Note

    The Okta setup instructions appear in a new browser tab.

  2. In the Cloud Manager FMC , click the Finish button to return to the Identity Providers page. Click the Modify button for your newly created IdP .

  3. Fill in the following text fields:

    FMC Data Field Value
    Issuer URI Use the Identity Provider Issuer value from the Okta Setup Instructions page.
    Single Sign-on URL Use the Identity Provider Single Sign-On URL value from the Okta Setup Instructions page.
  4. Close the Okta setup instructions browser tab.

  5. Click the Next button on the Cloud Manager FMC page.

  6. Click the Finish button on the FMC Edit Identity Provider page.

9. Assign users to your Okta application¶

  1. On the Okta application page, click the Assignments tab.
  2. Ensure that all your Cloud Manager organization users who will use the Okta service are enrolled.

Map your Domain¶

Mapping your domain to the IdP lets Cloud Manager know that users from your domain should be directed to the Login URL for your identity provider configuration.

When users visit the Cloud Manager login page, they enter their email address. If the email domain is associated with an IdP, they are sent to the Login URL for that IdP.

Important

You can map a single domain to multiple identity providers. If you do, users who log in using the MongoDB Cloud console are automatically redirected to the first matching IdP mapped to the domain.

To log in using an alternative identity provider, users must either:

  • Initiate the MongoDB Cloud login through the desired IdP , or
  • Log in using the Login URL associated with the desired IdP .

Use the Federation Management Console to map your domain to the IdP :

1. Open the Federation Management Console¶

  1. Log in to Cloud Manager.
  2. Use the dropdown at the top-left of Cloud Manager to select the organization for which you want to manage federation settings.
  3. Click Settings in the left navigation pane.
  4. In Manage Federation Settings , click Visit Federation Management App .
2. Enter domain mapping information.¶

  1. Click Add a Domain .

  2. On the Domains screen, click Add Domain .

  3. Enter the following information for your domain mapping:

    Field Description
    Display Name Name to easily identify the domain.
    Domain Name Domain name to map.
  4. Click Next .

3. Choose how to verify your domain.¶

Note

You can choose the verification method once. It cannot be modified. To select a different verification method, delete and recreate the domain mapping.

Select the appropriate tab based on whether you are verifying your domain by uploading an HTML file or creating a DNS TXT record:

Upload an HTML file containing a verification key to verify that you own your domain.

  1. Click HTML File Upload .
  2. Click Next .
  3. Download the mongodb-site-verification.html file that Cloud Manager provides.
  4. Upload the HTML file to a website on your domain. You must be able to access the file at https://<host.domain>/mongodb-site-verification.html.
  5. Click Finish .
4. Verify your domain.¶

The Domains screen displays both unverified and verified domains you’ve mapped to your IdP . To verify your domain, click the target domain’s Verify button. Cloud Manager shows whether the verification succeeded in a banner at the top of the screen.

Associate Your Domain with Your Identity Provider¶

After successfully verifying your domain, use the Federation Management Console to associate the domain with Okta:

1. Click Identity Providers in the left navigation.¶

2. For the IdP you want to associate with your domain, click the pencil icon next to Associated Domains.¶

3. Select the domain you want to associate with the IdP.¶

4. Click Confirm.¶

Test Your Domain Mapping¶

Important

Before you begin testing, copy and save the Bypass SAML Mode URL for your IdP . Use this URL to bypass federated authentication in the event that you are locked out of your Cloud Manager organization.

While testing, keep your session logged in to the Federation Management Console to further ensure against lockouts.

To learn more about Bypass SAML Mode , see Bypass SAML Mode .

Use the Federation Management Console to test the integration between your domain and Okta:

1. In a private browser window, navigate to the Cloud Manager login page.¶

2. Enter a username (usually an email address) with your verified domain.¶

Example

If your verified domain is mongodb.com , enter alice@mongodb.com .

3. Click Next.¶

If you mapped your domain correctly, you’re redirected to your IdP to authenticate. If authenticating with your IdP succeeds, you’re redirected back to Cloud Manager.

Note

You can bypass the Cloud Manager log in page by navigating directly to your IdP ’s Login URL . The Login URL takes you directly to your IdP to authenticate.

(Optional) Map an Organization¶

Use the Federation Management Console to assign your domain’s users access to specific Cloud Manager organizations:

1. Open the Federation Management Console¶

  1. Log in to Cloud Manager.
  2. Use the dropdown at the top-left of Cloud Manager to select the organization for which you want to manage federation settings.
  3. Click Settings in the left navigation pane.
  4. In Manage Federation Settings , click Visit Federation Management App .
2. Connect an organization to the Federation Application.¶

  1. Click View Organizations .

    Cloud Manager displays all organizations where you are an Organization Owner .

    Organizations which are not already connected to the Federation Application have a Connect button in the Actions column.

  2. Click the desired organization’s Connect button.

3. Apply an Identity Provider to the organization.¶

From the Organizations screen in the management console:

  1. Click the Name of the organization you want to map to an IdP .

  2. On the Identity Provider screen, click Apply Identity Provider .

    Cloud Manager directs you to the Identity Providers screen which shows all IdPs you have linked to Cloud Manager.

  3. For the IdP you want to apply to the organization, click Modify .

  4. At the bottom of the Edit Identity Provider form, select the organizations to which this IdP applies.

  5. Click Next .

  6. Click Finish .

4. Verify the organization’s Identity Provider.¶

  1. Click Organizations in the left navigation.
  2. In the list of Organizations , ensure that your desired organization(s) now have the expected Identity Provider .

(Optional) Configure Advanced Federated Authentication Options¶

You can configure the following advanced options for federated authentication for greater control over your federated users and authentication flow:

  • Bypass SAML Mode

Note

The following advanced options for federated authentication require you to map an organization .

  • Assign a Default User Role for an Organization
  • Restrict Access to an Organization by Domain
  • Restrict User Membership to the Federation

Sign in to Cloud Manager Using Your Login URL¶

All users you assigned to the Okta application can log in to Cloud Manager using their Okta credentials on the Login URL . Users have access to the organizations you mapped to your IdP .

Important

You can map a single domain to multiple identity providers. If you do, users who log in using the MongoDB Cloud console are automatically redirected to the first matching IdP mapped to the domain.

To log in using an alternative identity provider, users must either:

  • Initiate the MongoDB Cloud login through the desired IdP , or
  • Log in using the Login URL associated with the desired IdP .

If you selected a default organization role, new users who log in to Cloud Manager using the Login URL have the role you specified.


Automation Configuration¶

On this page

  • Overview
  • Configuration Version
  • Download Base
  • MongoDB Versions Specifications
  • Automation
  • Monitoring
  • Backup
  • MongoDB Instances
  • Cluster Wide
  • Replica Sets
  • Sharded Clusters
  • Cluster Balancer
  • Authentication
  • SSL
  • MongoDB Roles
  • Kerberos
  • Indexes

Overview¶

The Automation uses an automation configuration to determine the desired state of a MongoDB deployment and to effect changes as needed. If you modify the deployment through the Cloud Manager web interface, you never need to manipulate this configuration.

If you are using the Automation without Cloud Manager, you can construct and distribute the configuration manually.

Optional fields are marked as such.

A field that takes a <number> as its value can take integers and floating point numbers.
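
Example

Taken together, the fields described on this page form a single JSON document. The following skeleton is only an illustrative sketch, not a complete or validated configuration; the downloadBase path is a placeholder, and the remaining sections (monitoringVersions, roles, and so on) follow the same top-level pattern.

{
  "version" : 1,
  "options" : {
    "downloadBase" : "/var/lib/mongodb-mms-automation"
  },
  "mongoDbVersions" : [ ... ],
  "processes" : [ ... ],
  "replicaSets" : [ ... ],
  "sharding" : [ ... ],
  "auth" : { ... },
  "ssl" : { ... }
}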

Configuration Version¶

This lists the version of the automation configuration.

"version" : "<integer>"
Name Type Necessity Description
version integer Required Revision of this automation configuration file.

Download Base¶

Cloud Manager uses the directory set in options.downloadBase for automatic version downloads and startup scripts.

"options" : {
  "downloadBase" : "<string>",
}
Name Type Necessity Description
options object Required Path for automatic downloads of new versions.
options.downloadBase string Required Directory on Linux and UNIX platforms for automatic version downloads and startup scripts.

MongoDB Versions Specifications¶

The mongoDbVersions[n] array defines specification objects for the MongoDB instances found in the processes array. Each MongoDB instance in the processes array must have a specification object in this array.

"mongoDbVersions[n]" : [
  {
   "name" : "<string>",
    "builds" : [
      {
       "platform" : "<string>",
        "url" : "<string>",
        "gitVersion" : "<string>",
        "modules" : [ "<string>", ... ],
        "architecture" : "<string>",
        "bits" : "<integer>",
        "win2008plus" : "<Boolean>",
        "winVCRedistUrl" : "<string>",
        "winVCRedistOptions" : [ "<string>", ... ],
        "winVCRedistDll" : "<string>",
        "winVCRedistVersion" : "<string>"
      },
      ...
    ],
  },
  ...
]
Name Type Necessity Description
mongoDbVersions[n] array of objects Required Specification objects for the MongoDB instances found in the processes array. Each MongoDB instance in processes must have a specification object in mongoDbVersions[n] .
mongoDbVersions[n].name string Required Name of the specification object. The specification object is attached to a MongoDB instance through the instance’s processes.version parameter in this configuration.
mongoDbVersions[n].builds[k] array of objects Required Builds available for this MongoDB instance.
mongoDbVersions[n].builds[k].platform string Required Platform for this MongoDB instance.
mongoDbVersions[n].builds[k].url string Required URL from which to download MongoDB for this instance.
mongoDbVersions[n].builds[k].gitVersion string Required Commit identifier that identifies the state of the code used to build the MongoDB process. The MongoDB buildInfo command returns the gitVersion identifier.
mongoDbVersions[n].builds[k].modules array Required List of modules for this version. Corresponds to the modules parameter that the buildInfo command returns.
mongoDbVersions[n].builds[k].architecture string Required Processor’s architecture. Cloud Manager accepts amd64 or ppc64le .
mongoDbVersions[n].builds[k].bits integer Deprecated Processor’s bus width. Don’t remove or make modifications to this parameter.
mongoDbVersions[n].builds[k].win2008plus Boolean Optional Set to true if this is a Windows build that requires either Windows 7 or later, or Windows Server 2008 R2 or later.
mongoDbVersions[n].builds[k].winVCRedistUrl string Optional URL from which the required version of the Microsoft Visual C++ redistributable can be downloaded.
mongoDbVersions[n].builds[k].winVCRedistOptions array of strings Optional String values that list the command-line options to be specified when running the Microsoft Visual C++ redistributable installer. Each command-line option is a separate string in the array.
mongoDbVersions[n].builds[k].winVCRedistDll string Optional Name of the Microsoft Visual C++ runtime DLL file that the agent checks to determine if a new version of the Microsoft Visual C++ redistributable is needed.
mongoDbVersions[n].builds[k].winVCRedistVersion string Optional Minimum version of the Microsoft Visual C++ runtime DLL that must be present to skip over the installation of the Microsoft Visual C++ redistributable.
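
Example

A sketch of one specification object in the mongoDbVersions array (the key in the configuration file is mongoDbVersions; the [n] above denotes an array index). The version name and download URL below are illustrative placeholders, and gitVersion stands in for the value reported by buildInfo; the name must match the processes.version value of the instances that use this build.

"mongoDbVersions" : [
  {
    "name" : "4.4.29",
    "builds" : [
      {
        "platform" : "linux",
        "url" : "https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu2004-4.4.29.tgz",
        "gitVersion" : "<gitVersion reported by buildInfo>",
        "modules" : [ ],
        "architecture" : "amd64"
      }
    ]
  }
]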

Automation¶

agentVersion specifies the version of the MongoDB Agent.

Note

While you can update the MongoDB Agent version through this configuration property, you should use the Update Agent Versions endpoint to ensure your versions are up to date.

"agentVersion" : {
  "name" : "<string>",
  "directoryUrl" : "<string>"
}
Name Type Necessity Description
agentVersion object Optional Version of the MongoDB Agent to run. If the running version does not match this setting, the MongoDB Agent downloads the specified version, shuts itself down, and starts the new version.
agentVersion.name string Optional Desired version of the MongoDB Agent.
agentVersion.directoryUrl string Optional URL from which to download the MongoDB Agent.
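
Example

An illustrative agentVersion object; the version string and download URL are placeholders only, and the Update Agent Versions endpoint remains the recommended way to change agent versions.

"agentVersion" : {
  "name" : "12.0.15.7646-1",
  "directoryUrl" : "https://cloud.mongodb.com/download/agent/automation/"
}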

Monitoring¶

The monitoringVersions array specifies the version of the Monitoring Agent. Cloud Manager has made this parameter obsolete. To update the monitoring log settings, use the Update Monitoring Configuration Settings endpoint.

"monitoringVersions" : [
  {
    "name" : "<string>",
    "hostname" : "<string>",
    "urls" : {
      "<platform1>" : {
        "<build1>" : "<string>",
        ...,
        "default" : "<string>"
      },
      ...
    },
    "baseUrl" : "<string>",
    "logPath" : "<string>",
    "logRotate" : {
      "sizeThresholdMB" : <number>,
      "timeThresholdHrs" : <integer>,
      "numUncompressed": <integer>,
      "percentOfDiskspace" : <number>,
      "numTotal" : <integer>
    }
  },
  ...
]
Name Type Necessity Description
monitoringVersions array of objects Optional Objects that define version information for each Monitoring Agent.
monitoringVersions.name string Required

Version of the Monitoring Agent.

See also

MongoDB Compatibility Matrix

Important

This property is read-only. Any modifications made to this property are not reflected when updating the Monitoring Agent through the API .

To update the Monitoring Agent version, use this endpoint .

monitoringVersions.hostname string Required FQDN of the host that runs the Monitoring Agent. If the Monitoring Agent is not running on the host, Cloud Manager installs the agent from the location specified in monitoringVersions.urls .
monitoringVersions.urls object Required Platform- and build-specific URL s from which to download the Monitoring Agent.
monitoringVersions.urls.<platform> object Required Label that identifies an operating system and its version. The field contains an object with key-value pairs, where each key is either the name of a build or default and each value is a URL for downloading the Monitoring Agent. The object must include the default key set to the default download URL for the platform.
monitoringVersions.baseUrl string Required Base URL used for the mmsBaseUrl setting.
monitoringVersions.logPath string Optional Directory where the agent stores its logs. The default is to store logs in /dev/null .
monitoringVersions.logRotate object Optional Enables log rotation for the MongoDB logs for a process.
monitoringVersions.logRotate.sizeThresholdMB number Required Maximum size in MB for an individual log file before rotation.
monitoringVersions.logRotate.timeThresholdHrs integer Required Maximum time in hours for an individual log file before rotation.
monitoringVersions.logRotate.numUncompressed integer Optional Maximum number of total log files to leave uncompressed, including the current log file. The default is 5 . In earlier versions of Cloud Manager, this field was named maxUncompressed . The earlier name is still recognized, though the new version is preferred.
monitoringVersions.logRotate.percentOfDiskspace number Optional Maximum percentage of total disk space all log files should take up before deletion. The default is .02 .
monitoringVersions.logRotate.numTotal integer Optional Total number of log files. If a number is not specified, the total number of log files defaults to 0 and is determined by other monitoringVersions.logRotate settings.

Backup¶

The backupVersions array specifies the version of the Backup Agent. Cloud Manager has made this parameter obsolete. To update the backup log settings, use the Update Backup Configuration Settings endpoint.

"backupVersions[n]" : [
  {
   "name" : "<string>",
    "hostname" : "<string>",
    "urls" : {
     "<platform1>" : {
       "<build1>" : "<string>",
        ...,
        "default" : "<string>"
      },
      ...
    },
    "baseUrl" : "<string>",
    "logPath" : "<string>",
    "logRotate" : {
     "sizeThresholdMB" : "<number>",
      "timeThresholdHrs" : "<integer>",
      "numUncompressed": "<integer>",
      "percentOfDiskspace" : "<number>",
      "numTotal" : "<integer>"
    }
  },
  ...
]
Name Type Necessity Description
backupVersions[n] array of objects Optional Objects that define version information for each Backup Agent.
backupVersions[n].name string Required

Version of the Backup Agent.

See also

MongoDB Compatibility Matrix

Important

This property is read-only. Any modifications made to this property are not reflected when updating the Backup Agent through the API . To update the Backup Agent version, see this endpoint .

backupVersions[n].hostname string Required FQDN of the host that runs the Backup Agent. If the Backup Agent is not running on the host, Cloud Manager installs the agent from the location specified in backupVersions[n].urls .
backupVersions[n].urls object Required Platform- and build-specific URL s from which to download the Backup Agent.
backupVersions[n].urls.<platform> object Required Label that identifies an operating system and its version. The field contains an object with key-value pairs, where each key is either the name of a build or default and each value is a URL for downloading the Backup Agent. The object must include the default key set to the default download URL for the platform.
backupVersions[n].baseUrl string Required Base URL used for the mothership and https settings in the Custom Settings. For example, for baseUrl=https://cloud.mongodb.com, the backup configuration fields would have these values: mothership=api-backup.mongodb.com and https=true.
backupVersions[n].logPath string Optional Directory where the agent stores its logs. The default is to store logs in /dev/null .
backupVersions[n].logRotate object Optional Enables log rotation for the MongoDB logs for a process.
backupVersions[n].logRotate.sizeThresholdMB number Required Maximum size in MB for an individual log file before rotation.
backupVersions[n].logRotate.timeThresholdHrs integer Required Maximum time in hours for an individual log file before rotation.
backupVersions[n].logRotate.numUncompressed integer Optional Maximum number of total log files to leave uncompressed, including the current log file. The default is 5 .
backupVersions[n].logRotate.percentOfDiskspace number Optional Maximum percentage of total disk space all log files should take up before deletion. The default is .02 .
backupVersions[n].logRotate.numTotal integer Optional Total number of log files. If a number is not specified, the total number of log files defaults to 0 and is determined by other backupVersions[n].logRotate settings.

MongoDB Instances¶

The processes array determines the configuration of your MongoDB instances. Using this array, you can:

  • Restore an instance.
  • Start an initial sync process on one or more MongoDB instances.
"processes": [{
  "<args>": {},
  "alias": "<string>",
  "authSchemaVersion": "<integer>",
  "backupRestoreUrl": "<string>",
  "cluster": "<string>",
  "defaultRWConcern": {
    "defaultReadConcern": {
      "level": "<string>"
    },
    "defaultWriteConcern": {
      "j": "<boolean>",
      "w": "<string>",
      "wtimeout": "<integer>"
    }
  },
  "disabled": "<Boolean>",
  "featureCompatibilityVersion": "<string>",
  "hostname": "<string>",
  "lastCompact" : "<dateInIso8601Format>",
  "lastRestart" : "<dateInIso8601Format>",
  "lastResync" : "<dateInIso8601Format>",
  "lastKmipMasterKeyRotation" : "<dateInIso8601Format>",
  "logRotate": {
    "sizeThresholdMB": "<number>",
    "timeThresholdHrs": "<integer>",
    "numUncompressed": "<integer>",
    "percentOfDiskspace": "<number>",
    "numTotal": "<integer>"
  },
  "manualMode": "<Boolean>",
  "name": "<string>",
  "numCores": "<integer>",
  "processType": "<string>",
  "version": "<string>"
}]
Name Type Necessity Description
processes array Required Contains objects that define the mongos and mongod instances that Cloud Manager monitors. Each object defines a different instance.
processes[n].args2_6 object Required

MongoDB configuration object for MongoDB versions 2.6 and later.

See also

Supported configuration options .

processes[n].alias string Optional Hostname alias (often a DNS CNAME) for the host on which the process runs. If an alias is specified, the MongoDB Agent prefers this alias over the hostname specified in processes.hostname when connecting to the host. You can also specify this alias in replicaSets.host and sharding.configServer .
processes[n].authSchemaVersion integer Required

Schema version of the user credentials for MongoDB database users. This should match all other elements of the processes array that belong to the same cluster.

  • Cloud Manager accepts 3 and 5 for this parameter.
  • MongoDB 3.x and 4.x clusters default to 5 .
  • MongoDB 2.6 clusters default to 3 .

See also

Upgrade to SCRAM-SHA-1 in the MongoDB 3.0 release notes.

processes[n].backupRestoreUrl string Optional

Delivery URL for the restore. Cloud Manager sets this when creating a restore.

See also

Automate Backup Restoration through the API .

processes[n].cluster string Conditional

Name of the sharded cluster. Set this value to the same value in the sharding.name parameter in the sharding array for the mongos .

  • Required for a mongos .
  • Not needed for a mongod .
defaultRWConcern.defaultReadConcern.level string Optional

Consistency and isolation properties set for the data read from replica sets and replica set shards. MongoDB Atlas accepts the following values:

  • “available”
  • “local”
  • “majority”
defaultRWConcern.defaultWriteConcern.j boolean Optional Flag that indicates whether the write acknowledgement must be written to the on-disk journal.
defaultRWConcern.defaultWriteConcern.w string Optional

Desired number of mongod instances that must acknowledge a write operation in replica sets and replica set shards. MongoDB Atlas accepts the following values:

  • Any number 0 or greater
  • “majority”
defaultRWConcern.defaultWriteConcern.wtimeout number Optional Desired time limit for the write concern expressed in milliseconds. Set this value when you set defaultRWConcern.defaultWriteConcern.w to a value greater than 1 .
processes[n].disabled Boolean Optional Flag that indicates if this process should be shut down. Set to true to shut down the process.
processes[n].featureCompatibilityVersion string Required

Version of MongoDB with which this process has feature compatibility. Changing this value can enable or disable certain features that persist data incompatible with MongoDB versions earlier or later than the featureCompatibilityVersion you choose.

  • Cloud Manager accepts 3.2 , 3.6 , 4.2 and 4.4 as parameter values. If you have an existing deployment, Cloud Manager only accepts a featureCompatibilityVersion equal to or one release older than the MongoDB version you deployed. To learn which of these parameter values is supported for each MongoDB version, and which features each of these values enable or disable, see setFeatureCompatibilityVersion in the MongoDB Manual.
  • Cloud Manager sets this parameter to match the MongoDB version for new deployments.
  • Cloud Manager doesn’t automatically increment this parameter when you upgrade a host from one MongoDB version to the next.

See also

setFeatureCompatibilityVersion

processes[n].hostname string Required Name of the host that serves this process. This defaults to localhost .
processes[n].lastCompact string Optional

Timestamp in ISO 8601 date and time format in UTC when Cloud Manager last reclaimed free space on a cluster’s disks. During certain operations, MongoDB might move or delete data but it doesn’t free the currently unused space. Cloud Manager reclaims the disk space in a rolling fashion across members of the replica set or shards.

To reclaim this space:

  • Immediately, set this value to the current time as an ISO 8601 timestamp.
  • Later, set this value to a future ISO 8601 timestamp. Cloud Manager reclaims the space after the current time passes the provided timestamp.

To remove any ambiguity as to when you intend to reclaim the space on the cluster’s disks, specify a time zone with your ISO 8601 timestamp. For example, to set processes.lastCompact to 28 January 2021 at 2:43:52 PM US Central Standard Time, use "processes.lastCompact" : "2021-01-28T14:43:52-06:00"

processes[n].lastRestart string Optional Timestamp in ISO 8601 date and time format in UTC when Cloud Manager last restarted this process. If you set this parameter to the current timestamp, Cloud Manager forces a restart of this process after you upload this configuration. If you set this parameter for multiple processes in the same cluster, Cloud Manager restarts the selected processes in a rolling fashion across members of the replica set or shards.
processes[n].lastResync string Optional

Timestamp in ISO 8601 date and time format in UTC of the last initial sync process that Cloud Manager performed on the node.

To trigger the init sync process on the node immediately, set this value to the current time as an ISO 8601 timestamp.

Warning

Use this parameter with caution. During initial sync, Automation removes the entire contents of the node’s dbPath directory.

If you set this parameter:

  • On the secondary node, the MongoDB Agent checks whether the specified timestamp is later than the time of the last resync, and if confirmed, starts init sync on this node.

    Example

    To set processes.lastResync on the secondary node to 28 May 2021 at 2:43:52 PM US Central Standard Time, use:

    "processes.lastResync" : "2021-05-28T14:43:52-06:00" .

    If the MongoDB Agent confirms that this timestamp is later than the recorded time of the last resync, it starts init sync on the node.

  • On the primary node, the MongoDB Agent waits until you ask the primary node to become the secondary with the rs.stepDown() method, and then starts init sync on this node.

  • On all of the nodes in the same cluster, including the primary, the MongoDB Agent checks whether the specified timestamp is later than the time of the last resync, and if confirmed, starts init sync on the secondary nodes in a rolling fashion. The MongoDB Agent waits until you ask the primary node to become the secondary with the rs.stepDown() method, and then starts init sync on this node.

See also

Initial Sync

processes[n].lastKmipMasterKeyRotation string Optional Timestamp in ISO 8601 date and time format in UTC when Cloud Manager last rotated the master KMIP key. If you set this parameter to the current timestamp, Cloud Manager rotates the key after you upload this configuration.
processes[n].logRotate object Optional MongoDB configuration object for rotating the MongoDB logs of a process.
processes[n].logRotate.numTotal integer Optional Total number of log files that Cloud Manager retains. If you don't set this value, the total number of log files defaults to 0. Cloud Manager bases rotation on your other processes.logRotate settings.
processes[n].logRotate.numUncompressed integer Optional Maximum number of total log files to leave uncompressed, including the current log file. The default is 5.
processes[n].logRotate.percentOfDiskspace number Optional

Maximum percentage of total disk space that Cloud Manager can use to store the log files, expressed as a decimal. If this limit is exceeded, Cloud Manager deletes compressed log files until it meets this limit. Cloud Manager deletes the oldest log files first.

The default is 0.02 .

processes[n].logRotate.sizeThresholdMB number Required Maximum size in MB for an individual log file before Cloud Manager rotates it. Cloud Manager rotates the log file immediately if it meets the value given in either this sizeThresholdMB or the processes.logRotate.timeThresholdHrs limit.
processes[n].logRotate.timeThresholdHrs integer Required

Maximum duration in hours for an individual log file before the next rotation. The time is since the last rotation.

Cloud Manager rotates the log file once the file meets either this timeThresholdHrs or the processes.logRotate.sizeThresholdMB limit.

processes[n].manualMode Boolean Optional

Flag that indicates if MongoDB Agent automates this process.

  • This defaults to false .
  • Set to true to disable Automation on this process. The MongoDB Agent takes no further actions on this process.
  • Set to false to enable Automation on this process. The MongoDB Agent automates actions on this process.
processes[n].name string Required Unique name to identify the instance.
processes[n].numCores integer Optional Number of cores that Cloud Manager should bind to this process. The MongoDB Agent distributes processes across the cores as evenly as possible.
processes[n].processType string Required Type of MongoDB process being run. Cloud Manager accepts mongod or mongos for this parameter.
processes[n].version string Required Name of the mongoDbVersions specification used with this instance.
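
Example

A sketch of a single mongod entry in the processes array, assuming the args2_6 configuration object described above. The hostname, paths, and version values are placeholders; version must equal the name of a mongoDbVersions specification, and name is the value that replicaSets.members[m].host references.

"processes" : [
  {
    "name" : "myReplSet_1",
    "processType" : "mongod",
    "version" : "4.4.29",
    "hostname" : "host1.example.com",
    "authSchemaVersion" : 5,
    "featureCompatibilityVersion" : "4.4",
    "args2_6" : {
      "net" : { "port" : 27017 },
      "replication" : { "replSetName" : "myReplSet" },
      "storage" : { "dbPath" : "/data/myReplSet_1" },
      "systemLog" : { "destination" : "file", "path" : "/data/myReplSet_1/mongodb.log" }
    },
    "logRotate" : {
      "sizeThresholdMB" : 1000,
      "timeThresholdHrs" : 24
    }
  }
]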

Cluster Wide¶

clusterWideConfigurations specifies the parameters to set across a replica set or sharded cluster without requiring a rolling restart .

"clusterWideConfigurations" : {
  "<replicaSetID/clusterName>": {
    "changeStreamOptions": {
      "preAndPostImages": {
        "expireAfterSeconds": <integer>
      }
    }
  }
}
Name Type Necessity Description
replicaSetID/clusterName object Optional The change stream options to apply to the replica set or sharded cluster. The MongoDB Agent only checks whether this configuration is valid JSON; it doesn't check the values for correctness.
changeStreamOptions.preAndPostImages.expireAfterSeconds number Required

Retention policy of change stream pre- and post-images in seconds. If you omit the value, the cluster retains the pre- and post-images until it removes the corresponding change stream events from the oplog.

If you remove this value, MongoDB Agent only removes this parameter from its automation configuration, but not from the server.

See also

changeStreamOptions.
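
Example

An illustrative entry that retains change stream pre- and post-images for 100 seconds. The key myReplSet is a placeholder for your replica set ID or cluster name.

"clusterWideConfigurations" : {
  "myReplSet" : {
    "changeStreamOptions" : {
      "preAndPostImages" : {
        "expireAfterSeconds" : 100
      }
    }
  }
}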

Replica Sets¶

The replicaSets array defines each replica set’s configuration. This field is required for deployments with replica sets.

"replicaSets":
[
  {
    "_id": "<string>",
    "protocolVersion": "<string>",
    "members":
    [
      {
        "_id": "<integer>",
        "host": "<string>",
        "arbiterOnly": "<boolean>",
        "buildIndexes": "<boolean>",
        "hidden": "<boolean>",
        "priority": "<number>",
        "tags": "<object>",
        "secondaryDelaySecs": "<integer>",
        "votes": "<number>"
      },{
        "_id": "<integer>",
        "host": "<string>",
        "arbiterOnly": "<boolean>",
        "buildIndexes": "<boolean>",
        "hidden": "<boolean>",
        "priority": "<number>",
        "tags": "<object>",
        "secondaryDelaySecs": "<integer>",
        "votes": "<number>"
      },{
        "_id": "<integer>",
        "host": "<string>",
        "arbiterOnly": "<boolean>",
        "buildIndexes": "<boolean>",
        "hidden": "<boolean>",
        "priority": "<number>",
        "tags": "<object>",
        "secondaryDelaySecs": "<integer>",
        "votes": "<number>"
      }
    ],
    "force":
    {
      "currentVersion": "<integer>"
    }
  }
]
Name Type Necessity Description
replicaSets array Optional

Configuration of each replica set . The MongoDB Agent uses the values in this array to create valid replica set configuration documents . The agent regularly checks that replica sets are configured correctly. If a problem occurs, the agent reconfigures the replica set according to its configuration document. The array can contain the following top-level fields from a replica set configuration document: _id ; version ; and members .

See also

replSetGetConfig

replicaSets[n]._id string Required The name of the replica set.
replicaSets[n].protocolVersion string Optional Protocol version of the replica set.
replicaSets[n].members array Optional

Objects that define each member of the replica set. The members.host field must specify the host’s name as listed in processes.name . The MongoDB Agent expands the host field to create a valid replica set configuration.

See also

replSetGetConfig.

replicaSets[n].members[m]._id integer Optional Any positive integer that indicates the member of the replica set.
replicaSets[n].members[m].host string Optional Hostname, and port number when applicable, that serves this replica set member.
replicaSets[n].members[m].arbiterOnly boolean Optional Flag that indicates whether this replica set member acts as an arbiter.
replicaSets[n].members[m].buildIndexes boolean Optional Flag that indicates whether the mongod process builds indexes on this replica set member.
replicaSets[n].members[m].hidden boolean Optional Flag that indicates whether the replica set hides this member from client applications.
replicaSets[n].members[m].priority number Optional Relative eligibility for Cloud Manager to select this replica set member as a primary. Larger numbers increase eligibility. This value can be between 0 and 1000, inclusive, for data-bearing nodes. Arbiters can have values of 0 or 1.
replicaSets[n].members[m].tags object Optional List of user-defined labels and their values applied to this replica set member.
replicaSets[n].members[m].secondaryDelaySecs integer Optional Amount of time in seconds that this replica set member should lag behind the primary.
replicaSets[n].members[m].votes number Optional Quantity of votes this replica set member can cast for a replica set election. All data-bearing nodes can have 0 or 1 votes. Arbiters always have 1 vote.
replicaSets[n].force object Optional

Instructions to the MongoDB Agent to force a replica set to use the Configuration Version specified in replicaSets[n].force.currentVersion.

With this object, the MongoDB Agent can force a replica set to accept a new configuration to recover from a state in which a minority of its members are available.

replicaSets[n].force.currentVersion integer Optional

Configuration Version that the MongoDB Agent forces the replica set to use. Set to -1 to force a replica set to accept a new configuration.

Warning

Forcing a replica set reconfiguration might lead to a rollback of majority-committed writes.

Proceed with caution. Contact MongoDB Support if you have questions about the potential impacts of this operation.
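
Example

A sketch of a three-member replica set whose host values reference the processes.name values of the corresponding mongod entries. All names are placeholders.

"replicaSets" : [
  {
    "_id" : "myReplSet",
    "members" : [
      { "_id" : 0, "host" : "myReplSet_1", "priority" : 1, "votes" : 1 },
      { "_id" : 1, "host" : "myReplSet_2", "priority" : 1, "votes" : 1 },
      { "_id" : 2, "host" : "myReplSet_3", "priority" : 1, "votes" : 1 }
    ]
  }
]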

Sharded Clusters¶

The sharding array defines the configuration of each sharded cluster. This parameter is required for deployments with sharded clusters.

"sharding" : [
  {
    "managedSharding" : <boolean>,
    "name" : "<string>",
    "configServerReplica" : "<string>",
    "collections" : [
      {
        "_id" : "<string>",
        "key" : [
          [ "shard key" ],
          [ "shard key" ],
          ...
        ],
        "unique" : <boolean>
      },
      ...
    ],
    "shards" : [
      {
        "_id" : "<string>",
        "rs" : "<string>",
        "tags" : [ "<string>", ... ]
      },
      ...
    ],
    "tags" : [
      {
        "ns" : "<string>",
        "min" : [
          {
            "parameter" : "<string>",
            "parameterType" : "<string>",
            "value" : "<string>"
          }
        ],
        "max" : [
          {
            "parameter" : "<string>",
            "parameterType" : "<string>",
            "value" : "<string>"
          }
        ],
        "tag" : "<string>"
      },
      ...
    ]
  },
  ...
]
Name Type Necessity Description
sharding array of objects Optional Objects that define the configuration of each sharded cluster . Each object in the array contains the specifications for one cluster. The MongoDB Agent regularly checks each cluster’s state against the specifications. If the specification and cluster don’t match, the agent will change the configuration of the cluster, which might cause the balancer to migrate chunks.
sharding.managedSharding boolean Conditional Flag that indicates whether Cloud Manager Automation manages all sharded collections and tags in the deployment
sharding.name string Conditional Name of the cluster. This must correspond with the value in processes.cluster for a mongos .
sharding.configServerReplica string Conditional

Name of the config server replica set .

You can add this array parameter if your config server runs as a replica set.

If you run legacy mirrored config servers that don’t run as a replica set, use sharding.configServer .

sharding.configServer array of strings Conditional

Names of the config server hosts. The host names match the names used in each host’s processes.name parameter.

If your sharded cluster runs MongoDB 3.4 or later, use sharding.configServerReplica .

Important

MongoDB 3.4 removes support for mirrored config servers.

sharding.collections array of objects Conditional Objects that define the sharded collections and their shard keys .
sharding.collections._id string Conditional namespace of the sharded collection. The namespace is the combination of the database name and the name of the collection. For example, testdb.testcoll .
sharding.collections.key array of arrays Conditional

Collection’s shard keys . It contains:

  • One array if your cluster uses one shard key.
  • Multiple arrays if your cluster uses a compound shard key.
sharding.collections.unique boolean Conditional Flag that indicates whether MongoDB enforces uniqueness for the shard key.
sharding.shards array of objects Conditional Cluster’s shards .
sharding.shards._id string Conditional Name of the shard.
sharding.shards.rs string Conditional Name of the shard’s replica set. This is specified in the replicaSets._id parameter.
sharding.shards.tags array of strings Conditional

Zones assigned to this shard.

You can add this array parameter if you use zoned sharding.

sharding.tags array of objects Conditional Definition of zones for zoned sharding. Each object in this array defines a zone and configures the shard key range for that zone.
sharding.tags.ns string Conditional

Namespace of the collection that uses zoned sharding. The namespace combines the database name and the name of the collection.

Example

testdb.testcoll

sharding.tags.min array Conditional

Minimum value of the shard key range.

Specify the parameter name, parameter type, and value in a document of the following form.

{
  "parameter" : <string>,
  "parameterType" : <string>,
  "value" : <string>
}

parameterType must be one of the following:

  • string
  • integer
  • long
  • double
  • decimal
  • date
  • timestamp
  • oid
  • minKey
  • maxKey

value must be passed in as a string value.

To use a compound shard key, specify each field in a separate document, as shown in the example after this table. For more information on shard keys, see Shard Keys in the MongoDB manual.

sharding.tags.max array Conditional

Maximum value of the shard key range.

Specify the parameter name, parameter type, and value in a document of the following form.

{
  "parameter" : <string>,
  "parameterType" : <string>,
  "value" : <string>
}

parameterType must be one of the following:

  • string
  • integer
  • long
  • double
  • decimal
  • date
  • timestamp
  • oid
  • minKey
  • maxKey

value must be passed in as a string value.

To use a compound shard key, specify each field in a separate document, as shown in the example after this table. For more information on shard keys, see Shard Keys in the MongoDB manual.

sharding.tags.tag string Conditional Name of the zone associated with the shard key range specified by sharding.tags.min and sharding.tags.max .

Example

The sharding.tags Array with Compound Shard Key

The following example configuration defines a compound shard key range with a min value of { a : 1, b : ab } and a max value of { a : 100, b : fg } . The example defines the range on the testdb.test1 collection and assigns it to zone zone1 .

"tags" : [
  {
    "ns" : "testdb.test1",
    "min" : [
      {
        "parameter" : "a",
        "parameterType" : "integer",
        "value" : "1"
      },
      {
        "parameter" : "b",
        "parameterType" : "string",
        "value" : "ab"
      }
    ],
    "max" : [
      {
        "parameter" : "a",
        "parameterType" : "integer",
        "value" : "100"
      },
      {
        "parameter" : "b",
        "parameterType" : "string",
        "value" : "fg"
      }
    ],
    "tag" : "zone1"
  }
]
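
Example

For context, a minimal sharding entry might look like the following sketch, which omits collections and tags. Here rs references a replicaSets._id value and name matches the processes.cluster value of the mongos; all names are placeholders.

"sharding" : [
  {
    "managedSharding" : true,
    "name" : "myShardedCluster",
    "configServerReplica" : "configRS",
    "shards" : [
      { "_id" : "shard0", "rs" : "shard0RS", "tags" : [ ] },
      { "_id" : "shard1", "rs" : "shard1RS", "tags" : [ ] }
    ]
  }
]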

Cluster Balancer¶

The balancer object is optional and defines balancer settings for each cluster.

"balancer": {
  "<clusterName1>": {},
  "<clusterName2>": {},
  ...
}
Name Type Necessity Description
balancer object Optional Parameters named according to clusters, each parameter containing an object with the desired balancer settings for the cluster. The object uses the stopped and activeWindow parameters, as described in the procedure to schedule the balancing window in this tutorial in the MongoDB manual.
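
Example

An illustrative balancer entry for a cluster named myShardedCluster that leaves the balancer running but limits chunk migrations to a nightly window, using the stopped and activeWindow settings described in the MongoDB manual.

"balancer" : {
  "myShardedCluster" : {
    "stopped" : false,
    "activeWindow" : { "start" : "23:00", "stop" : "06:00" }
  }
}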

Authentication¶

Cloud Manager doesn’t require the auth object. This object defines authentication-related settings.

{
  "auth": {
    "authoritativeSet": "<boolean>",
    "autoUser": "<string>",
    "autoPwd": "<string>",
    "disabled": "<boolean>",
    "deploymentAuthMechanisms": ["<string>", "<string>"],
    "autoAuthMechanisms": ["<string>"],
    "key": "<string>",
    "keyfile": "<string>",
    "newAutoPwd": "<string>",
    "newKey": "<string>",
    "usersDeleted": [{
      "user": "<string>",
      "dbs": ["<string>", "<string>"]
    }],
    "usersWanted": [{
      "authenticationRestrictions": [{
        "clientSource": ["(IP | CIDR range)", "(IP | CIDR range)"],
        "serverAddress": ["(IP | CIDR range)", "(IP | CIDR range)"]
      }],
      "db": "<string>",
      "initPwd": "<string>",
      "otherDBRoles": {
        "<string>": ["<string>", "<string>"]
      },
      "roles": [{
        "db": "<string>",
        "role": "<string>"
      }],
      "pwd": "<string>",
      "user": "<string>"
    }]
  }
}
Name Type Necessity Description
auth object Optional

Defines authentication-related settings.

Note

If you omit this parameter, skip the rest of this section.

auth.authoritativeSet boolean Conditional

Sets whether or not Cloud Manager enforces a consistent set of managed MongoDB users and roles in all managed deployments in the project.

  • If “auth.authoritativeSet” : true , then Cloud Manager enforces consistent users and roles .
  • If “auth.authoritativeSet” : false , then Cloud Manager doesn’t enforce consistent users and roles .

auth.authoritativeSet defaults to false .

Required if “auth” : true .

auth.autoUser string Conditional

Username that the Automation uses when connecting to an instance.

Required if “auth” : true .

auth.autoPwd string Conditional

Password that the Automation uses when connecting to an instance.

Required if “auth” : true .

auth.disabled boolean Optional Flag indicating if auth is disabled. If not specified, disabled defaults to false .
auth.deploymentAuthMechanisms array of strings Conditional

Lists the supported authentication mechanisms for the processes in the deployment.

Required if “auth” : true .

Specify:

Value Authentication Mechanism
MONGODB-CR SCRAM-SHA-1
SCRAM-SHA-256 SCRAM-SHA-256
MONGODB-X509 x.509 Client Certificate
PLAIN LDAP
GSSAPI Kerberos
auth.autoAuthMechanisms array of strings Conditional

Sets the authentication mechanism used by the Automation.

Required if “auth” : true .

Note

This parameter contains more than one element only when it’s configured for both SCRAM-SHA-1 and SCRAM-SHA-256.

Specify:

Value Authentication Mechanism
MONGODB-CR SCRAM-SHA-1
SCRAM-SHA-256 SCRAM-SHA-256
MONGODB-X509 x.509 Client Certificate
PLAIN LDAP
GSSAPI Kerberos
auth.key string Conditional

Contents of the key file that Cloud Manager uses to authenticate to the MongoDB processes.

Required if “auth” : true and “auth.disabled” : false .

Note

If you change the auth.key value, you must change the auth.keyfile value.

auth.keyfile string Conditional

Path and name of the key file that Cloud Manager uses to authenticate to the MongoDB processes.

Required if “auth” : true and “auth.disabled” : false .

Note

If you change the auth.keyfile value, you must change the auth.key value.

auth
.newAutoPwd
string Optional

New password that the Automation uses when connecting to an instance. To rotate passwords without losing the connection:

  1. Set auth.newAutoPwd and leave auth.autoPwd with its current password.
  2. Wait for the goal state.
  3. Cloud Manager automatically copies the value of auth.newAutoPwd over the auth.autoPwd password.

Note

You can set this option only when you include SCRAM-SHA-1 or SCRAM-SHA-256 as one of the authentication mechanisms for the Automation in auth.autoAuthMechanisms .

auth.newKey string Optional

Contents of a new key file that you want Cloud Manager to use to authenticate to the MongoDB processes.

When you set this option, Cloud Manager rotates the key that the application uses to authenticate to the MongoDB processes in your deployment. When all MongoDB Agents use the new key, Cloud Manager replaces the value of auth.key with the new key that you provided in auth.newKey and removes auth.newKey from the automation configuration.

auth.usersDeleted array of objects Optional Objects that define the authenticated users to be deleted from specified databases or from all databases. This array must contain auth.usersDeleted.user and auth.usersDeleted.dbs .
auth.usersDeleted[n].user string Optional Username of user that Cloud Manager should delete.
auth.usersDeleted[n].dbs array of strings Optional List the names of the databases from which Cloud Manager should delete the authenticated user.
auth.usersWanted array of objects Optional Contains objects that define authenticated users to add to specified databases. Each object must have the auth.usersWanted[n].db , auth.usersWanted[n].user , and auth.usersWanted[n].roles parameters, and then have exactly one of the following parameters: auth.usersWanted[n].pwd , auth.usersWanted[n].initPwd , or auth.usersWanted[n].userSource .
auth.usersWanted[n].db string Conditional Database to which to add the user.
auth.usersWanted[n].user string Conditional Name of the user that Cloud Manager should add.
auth.usersWanted[n].roles array Conditional List of the roles to be assigned to the user from the user’s database, which is specified in auth.usersWanted[n].db .
auth.usersWanted[n].pwd string Conditional

32-character hex SCRAM-SHA-1 hash of the password currently assigned to the user.

Cloud Manager doesn’t use this parameter to set or change a password.

Required if:

  • “auth” : true ,
  • “auth.deploymentAuthMechanisms” : “MONGODB-CR” , and
  • “auth.usersWanted[n].initPwd” is unset.
auth.usersWanted[n].initPwd string Conditional

Cleartext password that you want to assign to the user.

Required if:

  • “auth” : true ,
  • “auth.deploymentAuthMechanisms” : “MONGODB-CR” , and
  • “auth.usersWanted[n].pwd” is unset.
auth.usersWanted[n].userSource string Deprecated No longer supported.
auth.usersWanted[n].otherDBRoles object Optional If you assign the user’s database “auth.usersWanted[n].db” : “admin” , then you can use this object to assign the user roles from other databases as well. The object contains key-value pairs where the key is the name of the database and the value is an array of string values that list the roles to be assigned from that database.
auth.usersWanted[n].authenticationRestrictions array of documents Optional

Authentication restrictions that the host enforces on the user.

Warning

If a user inherits multiple roles with incompatible authentication restrictions, that user becomes unusable. For example, if a user inherits one role in which the clientSource field is [198.51.100.0] and another role in which the clientSource field is [203.0.113.0] , the server is unable to authenticate the user.

For more information about authentication in MongoDB, see Authentication.

auth.usersWanted[n].authenticationRestrictions[k].clientSource array of strings Conditional If present, when authenticating a user, the host verifies that the client’s IP address is either in the given list or belongs to a CIDR range in the list. If the client’s IP address is not present, the host does not authenticate the user.
auth.usersWanted[n].authenticationRestrictions[k].serverAddress array of strings Conditional Comma-separated array of IP addresses to which the client can connect. If present, the host verifies that Cloud Manager accepted the client’s connection from an IP address in the given array. If the connection was accepted from an unrecognized IP address, the host doesn’t authenticate the user.
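
Example

A sketch of an auth object that enables SCRAM-SHA-256 for both the deployment and the Automation and adds one database user. The user names, passwords, key contents, and keyfile path are placeholders.

"auth" : {
  "disabled" : false,
  "authoritativeSet" : false,
  "autoUser" : "mms-automation",
  "autoPwd" : "<password>",
  "deploymentAuthMechanisms" : [ "SCRAM-SHA-256" ],
  "autoAuthMechanisms" : [ "SCRAM-SHA-256" ],
  "key" : "<contents of the key file>",
  "keyfile" : "/var/lib/mongodb-mms-automation/keyfile",
  "usersWanted" : [
    {
      "db" : "admin",
      "user" : "appAdmin",
      "initPwd" : "<cleartext password>",
      "roles" : [ { "db" : "admin", "role" : "readWriteAnyDatabase" } ]
    }
  ]
}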

SSL¶

The ssl object enables TLS for encrypting connections. This object is optional.

"ssl" : {
  "CAFilePath" : "<string>"
}
Name Type Necessity Description
ssl object Optional

Enables TLS for encrypting connections. To use TLS , choose a package that supports TLS .

All platforms that support MongoDB Enterprise also support TLS .

ssl.clientCertificateMode string Conditional Indicates whether connections to Cloud Manager require a TLS certificate. The values are OPTIONAL and REQUIRE .
ssl.CAFilePath string Conditional

Absolute file path to the certificate used to authenticate through TLS on a Linux or UNIX host.

Cloud Manager requires either ssl.CAFilePath or ssl.CAFilePathWindows if:

  • You’re using TLS or X.509 authentication, and
  • The CA file is not in your operating system’s root certificates.
ssl.CAFilePathWindows string Conditional

Absolute file path to the certificate used to authenticate through TLS on a Windows host.

Cloud Manager requires either ssl.CAFilePath or ssl.CAFilePathWindows if:

  • You’re using TLS or X.509 authentication, and
  • The CA file is not in your operating system’s root certificates.
ssl.autoPEMKeyFilePath string Conditional

Absolute file path to the client private key (PEM) file that authenticates the TLS connection on a Linux or UNIX host.

Cloud Manager requires either ssl.autoPEMKeyFilePath or ssl.autoPEMKeyFilePathWindows if you’re using TLS or X.509 authentication.

ssl.autoPEMKeyFilePathWindows string Conditional

Absolute file path to the client private key (PEM) file that authenticates the TLS connection on a Windows host.

Cloud Manager requires either ssl.autoPEMKeyFilePath or ssl.autoPEMKeyFilePathWindows if you’re using TLS or X.509 authentication.

ssl.autoPEMKeyFilePwd string Conditional Password for the private key (PEM) file specified in ssl.autoPEMKeyFilePath or ssl.autoPEMKeyFilePathWindows. Cloud Manager requires this password if the PEM file is encrypted.
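
For example, a TLS configuration for a Linux or UNIX host might look like the following sketch. The file paths and password are placeholders, not values from an actual deployment.

"ssl" : {
  "clientCertificateMode" : "OPTIONAL",
  "CAFilePath" : "/etc/ssl/mongodb-ca.pem",
  "autoPEMKeyFilePath" : "/etc/ssl/mongodb-client.pem",
  "autoPEMKeyFilePwd" : "<password-if-the-PEM-file-is-encrypted>"
}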

MongoDB Roles¶

The roles array is optional and describes user-defined roles.

"roles" : [
  {
    "role" : "<string>",
    "db" : "<string>",
    "privileges" : [
      {
        "resource" : { ... },
        "actions" : [ "<string>", ... ]
      },
      ...
    ],
    "roles" : [
      {
        "role" : "<string>",
        "db" : "<string>"
      }
    ],
    "authenticationRestrictions" : [
      {
        "clientSource": [("<IP>" | "<CIDR range>"), ...],
        "serverAddress": [("<IP>" | "<CIDR range>"), ...]
      },
      ...
    ]
  },
  ...
]
Name Type Necessity Description
roles array of objects Optional Roles and privileges that MongoDB has assigned to a cluster’s user-defined roles. Each object describes a different user-defined role. Objects in this array contain the same fields as documents in the system roles collection, except for the _id field.
roles[n].role string Conditional Name of the user-defined role.
roles[n].db string Conditional Database to which the user-defined role belongs.
roles[n].privileges array of documents Conditional Privileges this role can perform.
roles[n].privileges[k].resource document Conditional Specifies the resources upon which the privilege actions apply.
roles[n].privileges[k].actions array of strings Conditional

Actions permitted on the resource.

See also

Privilege Actions

roles[n].roles array of documents Conditional Roles from which this role inherits privileges.
roles[n].authenticationRestrictions array of documents Optional

Authentication restrictions that the MongoDB server enforces on this role.

Warning

If a user inherits multiple roles with incompatible authentication restrictions, that user becomes unusable. For example, if a user inherits one role in which the clientSource field is [198.51.100.0] and another role in which the clientSource field is [203.0.113.0] , the server is unable to authenticate the user.

For more information about authentication in MongoDB, see Authentication.

roles[n].authenticationRestrictions[k].clientSource array of strings Conditional If present, when authenticating a user, the MongoDB server verifies that the client’s IP address is either in the given list or belongs to a CIDR range in the list. If the client’s IP address is not present, the MongoDB server does not authenticate the user.
roles[n].authenticationRestrictions[k].serverAddress array of strings Conditional Comma-separated array of IP addresses to which the client can connect. If present, the MongoDB server verifies that it accepted the client’s connection from an IP address in the given array. If the MongoDB server accepts a connection from an unrecognized IP address, the MongoDB server does not authenticate the user.
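
As an illustration, a user-defined role that grants find and insert on one collection, inherits read on another database, and restricts clients to one CIDR range might look like the following sketch. The role, database, and collection names are placeholders.

"roles" : [
  {
    "role" : "appReadWrite",
    "db" : "admin",
    "privileges" : [
      {
        "resource" : { "db" : "sales", "collection" : "orders" },
        "actions" : [ "find", "insert" ]
      }
    ],
    "roles" : [
      {
        "role" : "read",
        "db" : "reporting"
      }
    ],
    "authenticationRestrictions" : [
      {
        "clientSource" : [ "198.51.100.0/24" ]
      }
    ]
  }
]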

Kerberos¶

The kerberos object is optional and defines a Kerberos service name used in authentication.

"kerberos": {
  "serviceName": "<string>"
}
Name Type Necessity Description
kerberos object Optional Key-value pair that defines the Kerberos service name that the agents use to authenticate via Kerberos.
kerberos.serviceName string Required

Label that sets:

  • The service name that the agents use to authenticate to a mongod or mongos via Kerberos.
  • The saslServiceName option in the MongoDB Server Parameters.
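
For example, if your Kerberos principals use MongoDB’s default service name, the object might look like this:

"kerberos": {
  "serviceName": "mongodb"
}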

Indexes¶

The indexConfigs array is optional and defines indexes to be built for specific replica sets.

"indexConfigs": [{
  "key": [
    ["<string>", "<value>"]
  ],
  "rsName": "<string>",
  "dbName": "<string>",
  "collectionName": "<string>",
  "collation": {
    "locale": "<string>",
    "caseLevel": <boolean>,
    "caseFirst": "<string>",
    "strength": <number>,
    "numericOrdering": <boolean>,
    "alternate": "<string>",
    "maxVariable": "<string>",
    "normalization": <boolean>,
    "backwards": <boolean>
  },
  "options": {
    "<key>": "<value>"
  }
}]
Name Type Necessity Description
indexConfigs array of objects Optional Specific indexes to be built for specific replica sets.
indexConfigs.key array of arrays Required Keys in the index. This “array of arrays” contains a single array if the index has just one key.
indexConfigs.rsName string Required Replica set on which MongoDB builds the index.
indexConfigs.dbName string Required Database that MongoDB indexes.
indexConfigs.collectionName string Required Collection that MongoDB indexes.
indexConfigs.collation object Optional

Language-specific rules to use when sorting and matching strings if the index uses collation.

If you include the indexConfigs.collation object, you must include the indexConfigs.collation.locale parameter. All other parameters are optional.

If you don’t include the indexConfigs.collation object, the index can’t include collation.

indexConfigs.collation.locale string Required

Locale that the ICU defines.

The MongoDB Server Manual lists the supported locales in its Collation Locales and Default Parameters section.

To specify simple binary comparison, set this value to simple .

indexConfigs.collation.caseLevel boolean Optional

Flag that indicates how the index uses case comparison.

If you set this parameter to true , the index uses case comparison.

This parameter applies only if you set indexConfigs.collation.strength to 1 or 2 .

See also

Collation

indexConfigs.collation.caseFirst string Optional

Sort order of case differences during tertiary level comparisons.

The MongoDB Server Manual lists the possible values in its Collation section.

indexConfigs.collation.strength number Optional

Level of comparison to perform. Corresponds to ICU Comparison Levels.

The MongoDB Server Manual lists the possible values in its Collation section.

indexConfigs.collation.numericOrdering boolean Optional

Flag that indicates how to compare numeric strings.

Value Collation Method Example
true numeric strings compared as numbers 10 > 2 .
false numeric strings compared as strings 10 < 2 .

The default is false .

See also

Collation

indexConfigs.collation.alternate string Optional

Setting that determines whether collation should consider whitespace and punctuation as base characters during comparisons.

The MongoDB Server Manual lists the possible values in its Collation section.

indexConfigs.collation.maxVariable string Optional

Characters the index can ignore. This parameter applies only if indexConfigs.collation.alternate is set to shifted .

The MongoDB Server Manual lists the possible values in its Collation section.

indexConfigs.collation.normalization boolean Optional

Flag that indicates if the text should be normalized.

If you set this parameter to true , collation:

  • Checks if text requires normalization.
  • Performs normalization to compare text.

The default is false .

See also

Collation

indexConfigs.collation.backwards boolean Optional

Flag that indicates how the index should handle diacritic strings.

If you set this parameter to true , strings with diacritics sort from the back to the front of the string.

The default is false .

See also

Collation

indexConfigs.options document Required Index options that the MongoDB Go Driver supports.
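
Putting these parameters together, an indexConfigs entry that builds a named compound index with a case-insensitive collation might look like the following sketch. The replica set, database, collection, field names, and index name are placeholders, not values from an actual deployment.

"indexConfigs": [{
  "key": [
    ["lastName", "1"],
    ["createdAt", "-1"]
  ],
  "rsName": "myReplicaSet",
  "dbName": "sales",
  "collectionName": "customers",
  "collation": {
    "locale": "en",
    "strength": 2
  },
  "options": {
    "name": "lastName_1_createdAt_-1_ci"
  }
}]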

Cloud Manager Administration API Error Codes — MongoDB Cloud Manager

Cloud Manager Administration API Error Codes¶

Note

Groups and projects are synonymous terms. Your {PROJECT-ID} is the same as your project id. For existing groups, your group/project id remains the same. This page uses the more familiar term group when referring to descriptions. The endpoint remains as stated in the document.

If you encounter an error when issuing a request to the Cloud Manager Administration API, Cloud Manager returns one of the following error codes:
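
For example, a request for a nonexistent host returns a body similar to the following sketch; the field names shown ( detail , error , errorCode , reason ) reflect the typical Cloud Manager API error format and may vary by endpoint.

{
  "detail" : "No host with ID <ID> exists in group <group>.",
  "error" : 404,
  "errorCode" : "HOST_NOT_FOUND",
  "reason" : "Not Found"
}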

Error HTTP Code Description
ACCOUNT_SUSPENDED
402 Group has an unpaid invoice that is more than 30 days old.
ACKNOWLEDGEMENT_COMMENT_TOO_LONG
400 Acknowledgement comment too long. It must not exceed <number> characters.
ADDRESS_ALREADY_IN_WHITELIST
409 The address <address> is already on the whitelist.
ALERT_CONFIG_NOT_FOUND
404 No alert configuration with ID <ID> exists in group <group> .
ALERT_NOT_FOUND
404 No alert with ID <ID> exists in group <group> .
API_KEY_CANNOT_CREATE_GROUP
401 API Keys cannot create groups .
API_KEY_CANNOT_CREATE_ORG
401 API Keys cannot create organizations .
API_KEY_NOT_FOUND
400 No API Key with ID {API-KEY-ID} exists.
API_KEY_WHITELIST_ACCESS_DENIED
400 API Key whitelists are only accessible by the API Key itself or by a user administrator.
API_KEY_WHITELIST_NOT_FOUND
404 The specified IP address does not exist in the corresponding API Key whitelist.
ATTRIBUTE_NEGATIVE_OR_ZERO
400 The attribute <attribute> cannot be negative or zero.
ATTRIBUTE_NEGATIVE
400 The attribute <attribute> cannot be negative.
ATTRIBUTE_READ_ONLY
400 The attribute <attribute> is read-only and cannot be changed by the user.
AUTH_MECHANISM_REQUIRES_SSL
400 Authentication mechanism <mechanism> requires SSL.
AUTOMATION_CONFIG_NOT_FOUND
404 No automation configuration exists for group <group> .
BACKUP_CONFIG_NOT_FOUND
404 No backup configuration exists for cluster <cluster> in group <group> .
BAD_USERNAME_IN_GROUP_REF
400 User <username> is not in group <group> .
BAD_USERNAME_REF
400 No user with username <username> exists.
BAD_WHITELIST_ADD_REQUEST
400 Should not specify both the IP address and the CIDR block.
BLOCKED_USERNAME
400 The specified username <username> is not allowed.
CANNOT_ADD_IP_ADDRESS_TO_API_KEY_WHITELIST
400 The specified address cannot be added to whitelists. Cloud Manager does not allow certain IP addresses to be whitelisted, such as 0.0.0.0/32 .
CANNOT_ADD_GLOBAL_ROLE
403 Adding a global role is not supported.
CANNOT_CHANGE_GROUP_NAME
403 Current user is not authorized to change group name.
CANNOT_CLOSE_ACCOUNT_ACTIVE_BACKUP
409 Cannot close account while the group has active backups; please terminate all backups.
CANNOT_CLOSE_ACCOUNT_FAILED_INVOICES
402 Cannot close account because there are failed invoices.
CANNOT_DELETE_FROM_CLUSTER_SNAPSHOT
403 Cannot individually delete a snapshot that is part of a cluster snapshot.
CANNOT_DELETE_LAST_OWNER
403 Cannot remove the last owner from the group. If you are trying to close the group by removing all users, please delete the group instead.
CANNOT_DEMOTE_LAST_ORG_OWNER
403 Cannot demote the last owner of the organization.
CANNOT_DEMOTE_LAST_OWNER
403 Cannot demote the last owner of the group.
CANNOT_DISTRIBUTE_SUBNETS
400 Cannot distribute subnets. There must be at least one subnet available.
CANNOT_DOWNLOAD_EXPIRED_JOB
403 Cannot download a log collection request job in the EXPIRED state.
CANNOT_DOWNLOAD_JOB_IN_PROGRESS
403 Cannot download a log collection request job in the IN_PROGRESS state.
CANNOT_EXTEND_EXPIRED_JOB
403 Cannot extend duration of logs that have already expired.
CANNOT_GET_BACKUP_CONFIG_INVALID_STATE
409 Cannot get backup configuration without cluster being monitored.
CANNOT_GET_VOLUME_SIZE_LIMITS
500 Cannot get volume size limits for volume type <type> .
CANNOT_MODIFY_MANAGED_HOST
403 Cannot modify host <host> because it is managed by Automation.
CANNOT_MODIFY_SHARD_BACKUP_CONFIG
409 Cannot modify backup configuration for individual shard; use cluster ID <ID> for entire cluster.
CANNOT_REMOVE_CALLER_FROM_WHITELIST
400 Cannot remove caller’s IP address <address> from whitelist.
CANNOT_SET_BACKUP_AUTH_FOR_MANAGED_CLUSTER
409 Username and password cannot be manually set for a managed cluster.
CANNOT_SET_CLUSTER_CHECKPOINT_INTERVAL_FOR_REPLICA_SET
400 Cluster checkpoint interval can only be set for sharded clusters, not replica sets.
CANNOT_SET_CREDENTIALS_FOR_AUTH_MECHANISM
400 Username and password fields are only supported for authentication mechanism MONGODB_CR or PLAIN .
CANNOT_SET_PASSWORD_FOR_AUTH_MECHANISM
400 Cannot change password unless authentication mechanism is MONGODB_CR or PLAIN .
CANNOT_SET_POINT_IN_TIME_WINDOW
400 Setting the point in time window is not allowed.
CANNOT_SET_REF_TIME_OF_DAY
400 Setting the reference point time of day is not allowed.
CANNOT_START_BACKUP_INVALID_STATE
409 Cannot start backup unless the cluster is in the INACTIVE or STOPPED state.
CANNOT_START_BACKUP_NO_BILLING_INFO
402 Cannot start backup without providing billing information.
CANNOT_START_RESTORE_JOB_FOR_DELETED_CLUSTER_SNAPSHOT
409 Cannot start restore job for deleted cluster snapshot.
CANNOT_START_RESTORE_JOB_FOR_DELETED_SNAPSHOT
409 Cannot start restore job for deleted snapshot.
CANNOT_START_RESTORE_JOB_FOR_INCOMPLETE_CLUSTER_SNAPSHOT
409 Cannot start restore job for incomplete cluster snapshot.
CANNOT_STOP_BACKUP_INVALID_STATE
409 Cannot stop backup unless the cluster is in the STARTED state.
CANNOT_TERMINATE_BACKUP_INVALID_STATE
409 Cannot terminate backup unless the cluster is in the STOPPED state.
CHECKPOINT_NOT_FOUND
404 No checkpoint with ID <ID> exists for cluster <cluster> .
CLUSTER_NOT_FOUND
404 No cluster with ID <ID> exists in group <group> .
CONFIG_RESTORE_JOB_NOT_FOUND
404 No restore job with ID <ID> exists for config server <config server> .
CONFIG_SNAPSHOT_NOT_FOUND
404 No snapshot with ID <ID> exists for config server <config server> .
DATABASE_NAME_REQUIRED
400 Metric <metric> requires a database name to be provided.
DATABASE_NOT_FOUND
404 No database with name <name> exists on host <host> .
DEFAULT_CONFIG_LIMIT_EXCEPTION
400 The limit check failed while trying to add the requested resource. Please try again.
DEFAULT_INVITATION_EXCEPTION
400 Failed to send an invitation to <username> to join <group> .
DEVICE_NAME_REQUIRED
400 Metric <metric> requires a device name to be provided.
DEVICE_NOT_FOUND
404 No device with name <name> exists on host <host> .
DOMAIN_NAME_TOO_LONG
400 The domain name for the machine is too long. Try shortening the hostname prefix.
DUPLICATE_ADDRESSES_IN_INPUT
400 Two or more of the IP addresses being added to the whitelist are the same.
EMAIL_OR_SMS_REQUIRED_FOR_GROUP_NOTIFICATION
400 Email and/or SMS must be enabled for group notifications.
EMAIL_OR_SMS_REQUIRED_FOR_USER_NOTIFICATION
400 Email and/or SMS must be enabled for user notifications.
EXPIRATION_DATE_MUST_BE_IN_FUTURE
400 Expiration date for log collection request job must be in the future.
EXPIRATION_DATE_TOO_DISTANT
400 Expiration date for log collection request job can only be as far as 6 months in the future.
FAILED_TO_CLOSE_ACCOUNT_CHARGE_FAILED
402 Cannot close account due to a charge failure.
FEATURE_UNSUPPORTED
403 Feature not supported by current account level.
FRACTIONAL_TIMESTAMP
400 Timestamp must be whole number of seconds.
GLOBAL_ALERTS_ONLY
400 The specified event type <type> can only be used for global alerts.
GROUP_ALREADY_EXISTS
409 A group with name <name> already exists.
GROUP_API_KEY_NOT_FOUND
404 No group with API Key <key> exists.
GROUP_MISMATCH
400 The specified group ID <ID> does not match the URL.
GROUP_NAME_NOT_FOUND
404 No group with name <name> exists.
GROUP_NOT_FOUND
404 No group with ID <ID> exists.
HOST_LAST_PING_NOT_FOUND
404 No last ping exists for host <host> in group <group> .
HOST_NOT_FOUND
404 No host with ID <ID> exists in group <group> .
HOSTNAME_AND_PORT_NOT_FOUND
404 No host with hostname and port <name:port> exists in group <group> .
INCORRECT_SECURITY_GROUP_COUNT
400 Instance must be created with exactly one SSH-enabled security group.
INVALID_AGENT_TYPE_NAME
400 An invalid agent type name <name> was specified.
INVALID_ALERT_CONFIG_ID
404 An invalid alert configuration ID <ID> was specified.
INVALID_ALERT_ID
404 An invalid alert ID <ID> was specified.
INVALID_ALERT_STATUS
400 An invalid alert status <status> was specified.
INVALID_ATTRIBUTE
400 Invalid attribute <attribute> specified.
INVALID_AUTH_MECHANISM
400 Invalid authentication mechanism <mechanism> .
INVALID_AUTH_TYPE_NAME
400 An invalid authentication type name <name> was specified.
INVALID_CHECKPOINT_ID
404 An invalid checkpoint ID <ID> was specified.
INVALID_CLUSTER_CHECKPOINT_INTERVAL
400 Cluster checkpoint interval must be 15, 30, or 60 minutes.
INVALID_CLUSTER_ID
404 An invalid cluster ID <ID> was specified.
INVALID_DAILY_SNAPSHOT_RETENTION_PERIOD
400 Daily snapshot retention period must be between 1 and 365 days.
INVALID_DIRECTORY
400 An invalid directory name <name> was specified.
INVALID_EMAIL_ADDRESS
400 An invalid email address was specified.
INVALID_ENUM_VALUE
400 An invalid enumeration value <value> was specified.
INVALID_EVENT_TYPE_FOR_ALERT
400 Event type <type> not supported for alerts.
INVALID_FILTERLIST
400 Backup configuration cannot specify both included namespaces and excluded namespaces.
INVALID_GRANULARITY
400 An invalid granularity <granularity> was specified.
INVALID_GROUP_ID
404 An invalid group ID <ID> was specified.
INVALID_GROUP_NAME_10GEN
400 Group name cannot contain “10gen-” or “-10gen”.
INVALID_GROUP_NAME
400 An invalid group name <name> was specified.
INVALID_GROUP_TOKEN
400 A group tag must be a string (alphanumeric, periods, underscores, and dashes) of length <MAX_TAG_LENGTH> characters or less.
INVALID_HOST_PORT
400 Invalid host port <number> .
INVALID_HOSTNAME_PREFIX
400 Invalid hostname prefix <prefix> . It must contain only alphanumeric characters and hyphens, may not begin or end with a hyphen (“-“), and must not be more than 63 characters long.
INVALID_HOSTNAME
400 Invalid hostname <name> .
INVALID_INSTANCE_COUNT
400 Invalid instance count <number> . It must be between <number> and <number> .
INVALID_INSTANCE_TYPE_NAME
400 Invalid instance type <type> . It must be one of the listed instance types returned in the machine configuration options.
INVALID_IOPS_INVALID_RATIO
400 The IOPS value <number> is not valid. The maximum ratio between the IOPS value and the volume size is 30 : 1.
INVALID_IOPS_OUT_OF_BOUNDS
400 The IOPS value <number> is not valid. It must be between the minimum and maximum values returned in the machine configuration options.
INVALID_JOB_ID
404 An invalid restore job ID <ID> was specified.
INVALID_JSON_ATTRIBUTE
400 Received JSON for the <attribute> attribute does not match expected format.
INVALID_JSON
400 Received JSON does not match expected format.
INVALID_KEY_ID
404 An invalid key ID <ID> was specified.
INVALID_LOG_REQUEST_SIZE
400 Log request size must be a positive number.
INVALID_MACHINE_ID
404 An invalid machine ID <ID> was specified.
INVALID_MACHINE_IMAGE
400 The specified machine image is invalid.
INVALID_METRIC_NAME
404 An invalid metric name <name> was specified.
INVALID_MONGODB_USERNAME
400 The username <username> is not a valid MongoDB login.
INVALID_MONTHLY_SNAPSHOT_RETENTION_PERIOD
400 Monthly snapshot retention period must be between 1 and 36 months.
INVALID_MOUNT_LOCATION
400 An invalid mount location <location> was specified. The mount location must be equal to or a parent of <location> .
INVALID_OPERATOR_FOR_EVENT_TYPE
400 Operator <operator> is not compatible with event type <type> .
INVALID_PERIOD
400 An invalid period was specified.
INVALID_PROVIDER_PARAMETERS
400 Invalid parameter combination specified for provider <provider> .
INVALID_QUERY_PARAMETER
400 Invalid query parameter <parameter> specified.
INVALID_REFERENCE_HOUR_OF_DAY
400 Snapshot schedule reference hour must be between 0 and 23, inclusive.
INVALID_REFERENCE_MINUTE_OF_HOUR
400 Snapshot schedule reference minute must be between 0 and 59, inclusive.
INVALID_REFERENCE_TIMEZONE_OFFSET
400 Snapshot schedule timezone offset must conform to ISO-8601 time offset format, such as “+0000”.
INVALID_REGION
400 No region <region> exists for provider <provider> .
INVALID_ROLE_FOR_GROUP
400 Role <role> is invalid for group <group> .
INVALID_ROOT_VOLUME_SIZE
400 Invalid root volume size <number> . It must be between the minimum and maximum values returned in the machine configuration options.
INVALID_SECURITY_GROUP
400 Security group <group> is invalid. It must be one of the security groups returned in the machine configuration options.
INVALID_SNAPSHOT_ID
404 An invalid snapshot ID <ID> was specified.
INVALID_SNAPSHOT_INTERVAL
400 Snapshot interval must be 6, 8, 12, or 24 hours.
INVALID_SNAPSHOT_RETENTION_PERIOD
400 Snapshot retention period must be between 1 and 5 days.
INVALID_SSH_KEY
400 An invalid SSH key was specified.
INVALID_USER_ID
404 An invalid user ID <ID> was specified.
INVALID_USERNAME
400 The specified username is not a valid email address.
INVALID_USER
400 No user <username> exists.
INVALID_VOLUME_NAME
400 Invalid volume name <name> . It must be one of the listed volume names returned in the machine configuration options.
INVALID_VPC_OR_SUBNET
400 Invalid or unavailable VPC <VPC> or subnet <subnet> .
INVALID_WEEKLY_SNAPSHOT_RETENTION_PERIOD
400 Weekly snapshot retention period must be between 1 and 52 weeks.
INVALID_WINDOW_ID
404 An invalid maintenance window ID <ID> was specified.
INVALID_ZONE
400 No zone <zone> exists for region <region> .
IP_ADDRESS_NOT_ON_WHITELIST
403 IP address <address> is not allowed to access this resource.
LAST_PING_NOT_FOUND
404 No last ping exists for group <group> .
409 Cannot set HTTP link expiration time after snapshot deletion time.
404 No job with the given ID exists in this group.
MACHINE_CONFIG_PARAMS_NOT_FOUND
400 No machine configuration parameters exist for provider <provider> .
MAINTENANCE_WINDOW_NOT_FOUND
404 No maintenance window with ID <ID> exists in group <group> .
MAINTENANCE_WINDOW_START_DATE_AFTER_END_DATE
400 Maintenance window configurations must specify a start date before their end date.
MAX_USERS_PER_GROUP_EXCEEDED
400 Maximum number of users per group ( <number> ) in <ID> exceeded while trying to add users.
MAX_USERS_PER_ORG_EXCEEDED
400 Maximum number of users per organization ( <number> ) in <ID> exceeded while trying to add users.
MAX_TEAMS_PER_GROUP_EXCEEDED
400 Maximum number of teams per group ( <number> ) in <ID> exceeded while trying to add teams.
MAX_USERS_PER_TEAM_EXCEEDED
400 Maximum number of Cloud Manager users per team exceeded while trying to add users. Teams are limited to 250 users.
MAX_TEAMS_PER_ORG_EXCEEDED
400 Maximum number of teams per organization exceeded while trying to add team. Organizations are limited to 250 teams.
METRIC_THRESHOLD_PRESENT
400 The metric threshold should only be specified for host metric alerts.
MISSING_ALERT_CONFIG_ID
404 No alert configuration ID was found.
MISSING_ATTRIBUTE
400 The required attribute <attribute> was not specified.
MISSING_AUTH_ATTRIBUTES
400 The attributes <attribute> and <attribute> must be specified for authentication type <type> .
MISSING_CREDENTIALS_FOR_AUTH_MECHANISM
400 Authentication mechanism <mechanism> requires username and password.
MISSING_MAINTENANCE_WINDOW_ALERT_TYPE_NAME
400 Maintenance window configurations must specify at least one alert type.
MISSING_MAINTENANCE_WINDOW_END_DATE
400 Maintenance window configurations must specify an end date.
MISSING_MAINTENANCE_WINDOW_START_DATE
400 Maintenance window configurations must specify a start date.
MISSING_METRIC_THRESHOLD
400 A metric threshold must be specified for host metric alerts.
MISSING_NOTIFICATIONS
400 At least one notification must be specified for an alert configuration.
MISSING_ONE_OF_ATTRIBUTES
400 Either the <attribute> attribute or the <attribute> attribute must be specified.
MISSING_ONE_OF_THREE_ATTRIBUTES
400 Either the <attribute> attribute, the <attribute> attribute, or the <attribute> attribute must be specified.
MISSING_OR_INVALID_ATTRIBUTE
400 The required attribute <attribute> was incorrectly specified or omitted.
MISSING_PASSWORD
400 Username cannot be changed without specifying password.
MISSING_QUERY_PARAMETER
400 The required query parameter <parameter> was not specified.
MISSING_ROLES_FOR_GROUP_NOTIFICATION
400 Group notifications cannot specify an empty list of roles.
MISSING_SYNC_SOURCE
409 Changing the storage engine will require a resync, so a sync source must be provided.
MISSING_THRESHOLD
400 A threshold must be specified for member health alerts.
MULTIPLE_GROUPS
409 Multiple groups exist with the specified name.
MUTUALLY_EXCLUSIVE_QUERY_PARAMETERS
400 Specify either the <parameter> query parameter or the <parameter> query parameter, but not both.
NO_CHECKPOINT_FOR_PIT_RESTORE
409 A suitable checkpoint could not be found for the specified point-in-time restore.
NO_CURRENT_USER
401 No current user.
NO_FREE_TIER_API
403 The API is not supported for the Free Tier of Cloud Manager.
NO_GROUP_SSH_KEY
409 No group SSH key exists for group <group> .
NO_PAYMENT_INFORMATION_FOUND
402 No payment information was found for group <group> .
NO_PROVIDER_AVAILABILITY_ZONES
400 Could not retrieve availability zones from <account> account.
NO_PROVIDER_AVAILABLE_INSTANCE_TYPES
400 Could not retrieve available instance types from <account> account.
NO_PROVIDER_SECURITY_GROUPS
400 Could not retrieve security groups from <account> account.
NO_SSH_KEYS_IN_GROUP
404 No SSH keys found in group <group> .
NONZERO_DELAY_REQUIRED
400 The specified metric requires a nonzero delay for all notifications.
NOT_CONFIG_SERVER
404 Host <host> is not an SCCC config server.
NOT_DATABASE_OR_DISK_METRIC
404 Metric <metric> is neither a database nor a disk metric.
NOT_GLOBAL_USER_ADMIN
401 The currently logged in user does not have the global user administrator role.
NOT_GROUP_USER_ADMIN
401 The currently logged in user does not have the user administrator role in group <group> .
NOT_IN_GROUP
401 The current user is not in the group, or the group does not exist.
NOT_ORG_ADMIN
401 The currently logged in user does not have the administrator role in organization <organization> .
NOT_SHARDED
400 Only sharded clusters and replica sets can be patched.
NOT_USER_ADMIN
401 The currently logged in user does not have the user administrator role for any group, team, or organization containing user <username> .
NOTIFICATION_INTERVAL_OUT_OF_RANGE
400 Notifications must have an interval of at least 5 minutes.
NOTIFICATION_TYPE_IS_GLOBAL_ONLY
400 At least one notification is a type that is only available for global alert configurations.
ONLY_FAILED_JOB_CAN_BE_RESTARTED
400 A log collection request job can only be restarted if it is in the FAILED state.
ORG_NOT_FOUND
404 No organization with ID <ID> exists.
PROVIDER_AUTH_FAILED
401 Account failed to authenticate with <credentials> .
PROVIDER_CONFIG_ID_NOT_FOUND
404 No provider configuration with ID <ID> exists for provider <provider> .
PROVIDER_CONFIG_NOT_FOUND
404 No provider configuration exists for provider <provider> .
PROVIDER_NOT_FOUND
404 No provider <provider> exists.
PROVIDER_UNSUPPORTED
404 Provider <provider> not currently supported.
PROVISION_MACHINE_JOB_NOT_FOUND
404 No provision machine job with ID <ID> exists in group <group> .
PROVISIONED_MACHINE_COULD_NOT_TERMINATE
409 Provisioned machine with ID <ID> could not terminate because a MongoDB process, Monitoring, or Backup is currently running on the machine.
PROVISIONED_MACHINE_NOT_FOUND
404 No provisioned machine with ID <ID> exists in group <group> .
PROVISIONING_FAILED_FROM_PROVIDER
500 Unable to retrieve configuration options from the provider.
RATE_LIMITED
429 Resource <resource> is limited to <number> requests every <number> minutes.
RATE_LIMITED_IP
400 Rate limit of <number> invitations per <number> minutes exceeded.
RESOURCE_NOT_FOUND
404 Cannot find resource <resource> .
RESTORE_JOB_NOT_FOUND_IN_GROUP
404 No restore job with ID <ID> exists in group <group> .
RESTORE_JOB_NOT_FOUND
404 No restore job with ID <ID> exists for cluster <cluster> .
ROLE_NEEDS_GROUP_ID
400 Group-specific role <role> requires a group ID.
ROLE_NEEDS_NO_GROUP_ID
400 Global role <role> cannot be specified with a group ID.
ROLE_NEEDS_NO_ORG_ID
400 Role <role> cannot be specified with an organization ID.
ROLE_NEEDS_ORG_ID
400 Role <role> requires an organization ID.
ROLES_SPECIFIED_FOR_USER
403 Roles specified for user.
SNAPSHOT_NOT_FOUND
404 No snapshot with ID <ID> exists for cluster <cluster> .
SSH_KEY_ALREADY_EXISTS
409 An SSH key with the name <name> already exists.
SSH_KEY_NAME_NOT_FOUND
404 No SSH key with name <name> exists.
SSH_KEY_NOT_FOUND
404 No SSH key with ID <ID> exists.
THRESHOLD_PRESENT
400 A threshold should only be present for member health alerts.
TOO_MANY_GROUP_NOTIFICATIONS
400 At most one group notification can be specified for an alert configuration.
TOO_MANY_GROUP_TOKENS
400 Groups are limited to <MAX_TAGS_PER_GROUP> tags.
TOTAL_MODE_DEPRECATED
400 Mode TOTAL is no longer supported.
UNEXPECTED_ERROR
500 Unexpected error.
UNITS_MISMATCH
400 Threshold units cannot be converted to metric units.
UNSUPPORTED_AUTOMATION_AGENT_VERSION
Automation agent version is less than the accepted minimum version.
UNSUPPORTED_DELIVERY_METHOD
400 The specified delivery method is not supported.
UNSUPPORTED_FOR_CURRENT_CONFIG
403 Operation not supported for current configuration.
UNSUPPORTED_FOR_CURRENT_PLAN
403 Operation not supported for current plan.
UNSUPPORTED_NOTIFICATION_TYPE
400 Notification type <type> is unsupported.
UNSUPPORTED_SET_BACKUP_STATE
403 Setting the backup state to <state> is not supported.
UPGRADE_FOR_CLUSTER_CHECKPOINT_INTERVAL
409 Cluster checkpoint interval not supported by the Backup version; please upgrade.
UPGRADE_FOR_EXCLUDED_NAMESPACES
409 Excluded namespaces are not supported by this Backup version; please upgrade.
UPGRADE_FOR_INCLUDED_NAMESPACES
409 Included namespaces are not supported by this Backup version; please upgrade.
USER_ALREADY_EXISTS
409 A user with username <username> already exists.
USER_NOT_FOUND
404 No user with ID <ID> exists.
USER_NOT_IN_GROUP
404 User <username> is not in group <group> .
USER_UNAUTHORIZED
401 Current user is not authorized to perform this action.
USERNAME_NOT_FOUND
404 No user with username <username> exists.
VOLUME_ENCRYPTION_NOT_AVAILABLE
400 Volume encryption is not available on instances of type <type> .
VOLUME_OPTIMIZATION_NOT_AVAILABLE
400 Volume optimization is not available on instances of type <type> .
WEAK_PASSWORD
400 The specified password is not strong enough.
WEBHOOK_URL_NOT_SET
400 Webhook URL must be set in the group before adding webhook notifications.
WHITELIST_ACCESS_DENIED
401 Cannot access whitelist for user <username> , which is not currently logged in.
WHITELIST_NOT_FOUND
404 IP address <address> not on whitelist for user <username> .

Add Existing MongoDB Processes to Cloud Manager — MongoDB Cloud Manager

Add Existing MongoDB Processes to Cloud Manager¶

On this page

  • Considerations
  • Prerequisites
  • Procedures

Cloud Manager provides a wizard for adding your existing MongoDB deployments to monitoring and management. The wizard prompts you to:

  • Install Automation if it doesn’t already exist

  • Identify the sharded cluster , the replica set , or the standalone to add. You can choose to add the deployment to Monitoring or to both Monitoring and Automation .

    If you are adding a deployment that you intend to live migrate to Atlas, you need to add the deployment (and its credentials) only for Monitoring .

Considerations¶

Unique Names¶

Deployments must have unique names within their projects.

Important

Replica set, sharded cluster, and shard names within the same project must be unique. If deployment names are not unique, backup snapshots break.

MongoDB Configuration Options¶

Automation doesn’t support all MongoDB options. To review which options are supported, see MongoDB Settings that Automation Supports .

TLS¶

If you enable TLS, the FQDN for the host serving a MongoDB process must match the SAN for the TLS certificate on that host.

Caution

To prevent man-in-the-middle attacks, keep the scope of TLS certificates as narrow as possible. Although you can use one TLS certificate with many SANs , or a wildcard TLS certificate on each host, you should not. To learn more, see RFC 2818, section 3.1 .

Preferred Hostnames¶

Set up a preferred hostname if you:

  • Require a specific hostname, FQDN , IPv4 address or IPv6 address to access the MongoDB process, or
  • Must specify the hostname to use for hosts with multiple aliases.

To learn more, see the Preferred Hostnames setting in Project Settings .

Managing Windows MongoDB Services¶

If you are adding an existing MongoDB process that runs as a Windows Service to Automation, Automation:

  • Stops and disables the existing service
  • Creates and starts a new service

Authentication Credentials on Source and Destination Clusters¶

If the Cloud Manager project has MongoDB authentication settings enabled for its deployments, the MongoDB deployment you import must support the project’s authentication mechanism.

We recommend that you import to a new destination project that has no running processes and doesn’t have authentication enabled.

If the source cluster uses authentication, and the destination Cloud Manager project doesn’t have any existing managed processes, Cloud Manager enables authentication in the destination project, imports the existing keyfile from the source cluster, and uses it to authenticate the user that conducts the import process.

If the source cluster and the destination Cloud Manager project both use authentication, and the project has processes, Cloud Manager attempts to use existing authentication settings in the destination project during the import process. For the import process to succeed, authentication credentials on the source cluster and the Cloud Manager destination project must be the same.

To ensure that import is successful, before you start the import process, add the Cloud Manager destination project’s credentials on the source cluster. To learn more, see Rotate Keys for Replica Set or Rotate Keys for Sharded Clusters.

Authentication Use Cases¶

If your MongoDB deployment requires authentication, when you add the deployment to Cloud Manager for monitoring, you must provide the necessary credentials .

  • If the deployment doesn’t use Automation, but did use Backup, Monitoring, or both, the existing credentials remain where they were before you updated to the MongoDB Agent.
  • If the deployment doesn’t use Automation, but will use Backup, Monitoring, or both:
    1. Create the credentials for the MongoDB Deployment. To learn more, see Required Access for MongoDB Agent for Monitoring and Required Access for MongoDB Agent for Backup .
    2. Add the credentials that you granted to those functions to Cloud Manager after you add the MongoDB processes. To learn more, see Add Credentials for Monitoring and Add Credentials for Backup .
  • If the deployment uses Automation, Cloud Manager uses the credentials from the MongoDB Agent. You can delete the credentials from the legacy Backup and Monitoring Agents. The MongoDB Agent uses those credentials for its Automation, Backup, and Monitoring functions.
  • If the deployment will use Automation but didn’t use it before you import it, add the mms-automation user to the database processes you imported and add the user’s credentials to Cloud Manager.

To learn more, see Add Credentials for Automation .

Automation and Updated Security Settings Upon Import¶

Adding a MongoDB deployment to automation may affect the security settings of the Cloud Manager project and the MongoDB deployment.

  • Automation enables the Project Security Setting . If the MongoDB deployment requires authentication but the Cloud Manager project doesn’t have authentication settings enabled, when you add the MongoDB deployment to automation, Cloud Manager updates the project’s security settings to the security settings of the newly imported deployment.

    The import process only updates the Cloud Manager project’s security setting if the project’s security setting is currently disabled. The import process doesn’t disable the project’s security setting or change its enabled authentication mechanism.

  • Automation Imports MongoDB Users and Roles . The following statements apply to situations where a MongoDB deployment requires authentication or the Cloud Manager project has authentication settings enabled.

    If the MongoDB deployment contains users or user-defined roles, you can choose to import these users and roles for Cloud Manager to manage. The imported users and roles are Synced to all managed deployments in the Cloud Manager project.

    • If you set the project’s Enforce Consistent Set value to Yes , Cloud Manager deletes from the MongoDB deployments those users and roles that are not imported.
    • If you set the project’s Enforce Consistent Set value to No , Cloud Manager stops managing non-imported users and roles in the project. These users and roles remain in the MongoDB deployment. To manage these users and roles, you must connect directly to the MongoDB deployment.

    If you don’t want the Cloud Manager project to manage specific users and roles, use the Authentication & Users and Authentication & Roles pages to remove these users and roles during import before you confirm and deploy the changes. To learn more, see Manage or Unmanage MongoDB Users .

    If the imported MongoDB deployment already has mms-backup-agent and mms-monitoring-agent users in its admin database, the import process overrides these users’ roles with the roles for mms-backup-agent and mms-monitoring-agent users as set in the Cloud Manager project.

  • Automation Applies to All Deployments in the Project . The project’s updated security settings, including all users and roles managed by the Cloud Manager project, apply to all deployments in the project, including the imported MongoDB deployment.

    Cloud Manager restarts all deployments in the project with the new setting, including the imported MongoDB deployment. After import, all deployments in the project use the Cloud Manager automation keyfile upon restart.

    The deployment that you import must use the same keyfile as the existing processes in the destination project or the import process may not proceed. To learn more, see Authentication Credentials on Source and Destination Clusters .

    If the existing deployments in the project require a different security profile from the imported process, create a new project into which you can import the source MongoDB deployment.

Examples of Imported Users¶

The following examples apply to situations where the MongoDB deployment requires authentication or the Cloud Manager project has authentication settings enabled.

If you import the MongoDB users and custom roles, once the Cloud Manager project begins to manage the MongoDB deployment, the following happens, regardless of the Enforce Consistent Set value:

  • The Cloud Manager project enables authentication, manages imported users and roles, and syncs the new users and roles to all its managed deployments.
  • The MongoDB deployment’s access control is enabled and requires authentication. The MongoDB deployment has all users and roles that the Cloud Manager project manages. These users and roles have Synced set to Yes .

If you don’t import the MongoDB users and custom roles, once the Cloud Manager project begins to manage the MongoDB deployment, the following happens:

If Enforce Consistent Set is set to Yes :

  • The Cloud Manager project enables authentication and doesn’t change its managed users and roles.
  • The MongoDB deployment’s access control is enabled and requires authentication.
  • Cloud Manager deletes the non-imported MongoDB users and roles from the deployment.
  • The MongoDB deployment has all users and roles that the Cloud Manager project manages. These users and roles have Synced set to Yes .

If Enforce Consistent Set is set to No :

  • The Cloud Manager project enables authentication and doesn’t change its security settings, including users and roles.
  • The MongoDB deployment’s access control is enabled and requires authentication.
  • The non-imported MongoDB users and roles remain in the MongoDB deployment.
  • The MongoDB deployment has all users and roles managed by the Cloud Manager project. These users and roles have Synced set to Yes .

Prerequisites¶

  • If mongod is enabled as a service on the deployment, a race condition can occur in which systemd starts mongod on reboot instead of letting Automation manage the process. To prevent this issue, ensure that the mongod service is disabled before you add your deployment to Automation:

    1. Verify whether the mongod service is enabled:
    sudo systemctl is-enabled mongod.service
    
    2. If the service is enabled, disable it:
    sudo systemctl disable mongod.service
    
  • If the Cloud Manager project doesn’t have authentication settings enabled but the MongoDB process requires authentication, add the MongoDB Agent user for the Cloud Manager project with the appropriate roles. The import process displays the required roles for the user. The added user becomes the project’s MongoDB Agent user.

  • If the Cloud Manager project has authentication settings enabled, add the Cloud Manager project’s MongoDB Agent user to the MongoDB process.

    • To find the MongoDB Agent user, click Deployments , then Security , then Users .

    • To find the password for the Cloud Manager project’s MongoDB Agent user, use one of the following methods:

      Follow the steps in the Add MongoDB Processes procedure to launch the wizard in the UI. When you reach the modal that says Do you want to add automation to this deployment? :

      1. Select Add Automation and Configure Authentication .
      2. Click Show Password .
  • The import process requires that the authentication credentials and keyfiles are the same on the source and destination clusters. To learn more, see Authentication Credentials on Source and Destination Clusters .

Important

If you are adding a sharded cluster, you must create this user through the mongos and on every shard. That is, create the user both as a cluster-wide user through mongos and as a shard-local user on each shard.

Procedures¶

Add MongoDB Processes¶

To add existing MongoDB processes to Cloud Manager:

1
2

Click Add and select Existing MongoDB Deployment

3

Follow the prompts to add the deployment.¶

Add Authentication Credentials to your Deployment¶

After you add an existing MongoDB process to Cloud Manager, you might have to add authentication credentials for the new deployment if authentication is enabled for the project into which you imported the deployment. See Authentication Use Cases to learn in which situations you must add Automation, Monitoring, or Backup credentials for your new deployment.

If you are adding a deployment that you intend to live migrate to Atlas, you need to add the deployment (and its credentials) only for Monitoring .

Select the authentication mechanism that you want to use:

Add Credentials for Automation¶

To add credentials for a deployment that will use Automation but didn’t use it before you imported it to Cloud Manager:

1

Add the MongoDB Agent user to your databases.¶

The MongoDB Agent user performs automation tasks for your MongoDB databases. Make sure this MongoDB user has the proper privileges .

2
3

Click Edit Credentials

4

Continue through the modal until you see the Configure Cloud Manager Agents page.¶

5

Add the appropriate credentials:¶

Setting Value
MongoDB Agent Username Enter the MongoDB Agent username.
MongoDB Agent Password Enter the password for MongoDB Agent Username.

Add Credentials for Monitoring¶

To add credentials for a deployment that will not use Automation but will use Monitoring:

1
2

Click Credentials

3

Add the appropriate credentials:¶

Setting Value
Monitoring Username Enter the Monitoring username.
Monitoring Password Enter the password for Monitoring Username.

Add Credentials for Backup¶

To add credentials for a deployment that will not use Automation but will use Backup:

1
2

For your deployment, click the ellipsis icon, then click Edit Credentials

3

Add the appropriate credentials:¶

Setting Value
Backup Username Enter the Backup username.
Backup Password Enter the password for Backup Username.

MongoDB Compatibility Matrix — MongoDB Cloud Manager

MongoDB Compatibility Matrix¶

On this page

  • MongoDB Versions Compatible with Cloud Manager
  • Agent Compatibility
  • MongoDB Deployment Types

This page describes compatibility between Cloud Manager features and MongoDB.

Cloud Manager support for End of Life MongoDB versions

Cloud Manager doesn’t support Backup, Monitoring, or Automation for versions earlier than MongoDB 3.6.

MongoDB Versions Compatible with Cloud Manager¶

  • Cloud Manager can automate deployments running MongoDB versions 3.6 or later.
  • Cloud Manager can monitor deployments running MongoDB versions 2.6 or later.
  • Cloud Manager can back up deployments running MongoDB versions 2.6 or later.

Backup Considerations for MongoDB Versions¶

To learn more about backup considerations specific to MongoDB 4.4 and later and 4.2 and earlier, see Backup Considerations .

To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual.

Agent Compatibility¶

Monitoring Compatibility¶

To monitor a deployment running MongoDB 3.6 or later release series, you must use Monitoring Agent version 2.7.0 or later.

Automation PowerPC Compatibility¶

To manage PowerPC Linux-based hosts, you must use Automation Agent 3.2.7.1927 or later.

MongoDB Deployment Types¶

Using Cloud Manager, you can configure all MongoDB deployment types: sharded clusters, replica sets, and standalones.

The shards in a sharded cluster must be replica sets. That is, a shard cannot be a standalone mongod . If you must run a shard as a single mongod (which provides no redundancy or failover), run the shard as a single-member replica set.

Note

You may not upgrade a sharded MongoDB deployment to version 3.4 if the deployment uses mirrored mongod instances as config servers. To allow the sharded deployment to be upgraded, see Convert Config Servers to a Replica Set . The conversion requires that the sharded deployment run MongoDB version 3.2.4 or later. Deployments running previous versions must upgrade to version 3.2.4 before an upgrade to version 3.4.


Stop Monitoring a Process — MongoDB Cloud Manager

Stop Monitoring a Process¶

On this page

  • Understand the Objectives
  • Complete the Prerequisites
  • Follow These Steps

This tutorial shows you how to stop monitoring a process . Once you stop monitoring a process, Cloud Manager stops displaying its status and tracking its metrics.

Understand the Objectives¶

Learn how to use the Cloud Manager Administration API to:

  • Find the host ID for the process.
  • Stop monitoring the process that matches the host ID.
  • Verify that Cloud Manager no longer monitors the process.

Complete the Prerequisites¶

Complete these prerequisites before you complete the tutorial.

  • Configure your access to the Cloud Manager Administration API .
  • Get the permissions needed to change monitoring settings. You need one of the following roles:
    • Project Monitoring Admin
    • Project Owner
  • Terminate the backups for the process before you stop monitoring it.

Follow These Steps¶

Complete all the following steps to use the API to stop monitoring a process.

1

Find the host ID for the process.¶

Use the Get One Host by Hostname and Port resource to find the process and retrieve the id value.

Learn What This Step Does¶

The Get One Host by Hostname and Port resource uses the hostname and port you specify to find the process. Then, it returns information about this process. You can find the id needed for the next step in the response.

Issue This Command¶

Copy the following curl command. Paste it into your preferred terminal or console. Replace the displayed placeholders with these values:

Placeholder Description
{PUBLIC-KEY} Public part of your API key.
{PRIVATE-KEY} Private part of your API key.
{PROJECT-ID} Unique identifier of the project that owns the host.
{HOSTNAME} Primary hostname that Cloud Manager uses to connect to the instance. This may be a hostname, an FQDN , an IPv4 address, or an IPv6 address.
{PORT} Port on which the process listens.

Replace the placeholders in the command, then execute it.

curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
     --request GET "https://cloud.mongodb.com/api/public/v1.0/groups/{PROJECT-ID}/hosts/byName/{HOSTNAME}:{PORT}"

Copy the Host’s ID¶

In the response body, copy the value returned in the id field. You need the value for the next step.

Example

{
  "alertsEnabled" : true,
  "aliases": [ "server1.example.com:27017", "203.0.113.3:27017" ],
  "authMechanismName" : "SCRAM-SHA-1",
  "clusterId" : "<cluster-ID-1>",
  "created" : "2021-04-22T19:56:50Z",
  "groupId" : "<project-ID-1>",
  "hasStartupWarnings" : false,
  "hidden" : false,
  "hostEnabled" : true,
  "hostname" : "server1.example.com",
  "id" : "{HOST-ID}",
  "ipAddress": "203.0.113.3",
}
2

Stop monitoring the process that matches the host ID.¶

Use the Stop Monitoring One Host resource to stop monitoring the host.

Learn What This Step Does¶

The Stop Monitoring One Host resource doesn’t actually delete the host. The resource deletes the host from the list of hosts that Cloud Manager monitors. This removes the process from monitoring.

Issue This Command¶

Copy the following curl command. Paste it into your preferred terminal or console. Replace the displayed placeholders with these values:

Placeholder Description
{PUBLIC-KEY} Public part of your API key.
{PRIVATE-KEY} Private part of your API key.
{PROJECT-ID} Unique identifier of the project that owns the host.
{HOST-ID} Unique identifier of the host for the process. Use the id from step 1.

Replace the placeholders in the command, then execute it.

curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
     --request DELETE "https://cloud.mongodb.com/api/public/v1.0/groups/{PROJECT-ID}/hosts/{HOST-ID}"
3

Verify that Cloud Manager no longer monitors the process.¶

Use the Get One Host by Hostname and Port resource again to attempt to find the process using its hostname and port. Then, verify that details returns No host with hostname and port {HOSTNAME}:{PORT} exists in group {PROJECT-ID} .

Learn What This Step Does¶

The Get One Host by Hostname and Port resource uses the hostname and port you specify to find the process. Then, it returns information about this process. You can tell that Cloud Manager doesn’t monitor the process if the details value in the response is No host with hostname and port {HOSTNAME}:{PORT} exists in group {PROJECT-ID} . This means that Cloud Manager can’t find the host in the list of processes that it monitors.

Issue This Command¶

Copy the following curl command. Paste it into your preferred terminal or console. Replace the displayed placeholders with these values:

Placeholder Description
{PUBLIC-KEY} Public part of your API key.
{PRIVATE-KEY} Private part of your API key.
{PROJECT-ID} Unique identifier of the project that owns the host.
{HOSTNAME} Primary hostname that Cloud Manager uses to connect to this instance. This may be a hostname, an FQDN, an IPv4 address, or an IPv6 address.
{PORT} Port on which the process listens.

Replace the placeholders in the command, then execute it.

curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
     --request GET "https://cloud.mongodb.com/api/public/v1.0/groups/{PROJECT-ID}/hosts/byName/{HOSTNAME}:{PORT}"

Check the Response Details¶

In the response body, check the value returned in the details field. If details returns No host with hostname and port {HOSTNAME}:{PORT} exists in group {PROJECT-ID} , you succeeded. Cloud Manager no longer monitors the process.
