One of the benefits of MongoDB’s rich schema model is the ability to store arrays as document field values. Storing arrays as field values allows you to model one-to-many or many-to-many relationships in a single document, instead of across separate collections as you might in a relational database.
However, you should exercise caution if you are consistently adding
elements to arrays in your documents. If you do not limit the number of
elements in an array, your documents may grow to an unpredictable size.
As an array continues to grow, the performance of reading from and building indexes on that array gradually decreases. A large, growing array can strain application resources and put your documents at risk of exceeding the BSON Document Size limit.
Instead, consider bounding your arrays to improve performance and keep your documents a manageable size.
Consider the following schema for a publishers collection:
// publishers collection
{
  "_id": "oreilly",
  "name": "O'Reilly Media",
  "founded": 1980,
  "location": "CA",
  "books": [
    {
      "_id": 123456789,
      "title": "MongoDB: The Definitive Guide",
      "author": [ "Kristina Chodorow", "Mike Dirolf" ],
      "published_date": ISODate("2010-09-24"),
      "pages": 216,
      "language": "English"
    },
    {
      "_id": 234567890,
      "title": "50 Tips and Tricks for MongoDB Developer",
      "author": "Kristina Chodorow",
      "published_date": ISODate("2011-05-06"),
      "pages": 68,
      "language": "English"
    }
  ]
}
In this scenario, the books array is unbounded. Each new book released by this publishing company adds a new sub-document to the books array. As publishing companies continue to release books, the documents will eventually grow very large and cause a disproportionate amount of memory strain on the application.
To avoid mutable, unbounded arrays, separate the publishers collection into two collections, one for publishers and one for books. Instead of embedding the entire book document in the publishers document, include a reference to the publisher inside of the book document:
// publishers collection
{
  "_id": "oreilly",
  "name": "O'Reilly Media",
  "founded": 1980,
  "location": "CA"
}

// books collection
{
  "_id": 123456789,
  "title": "MongoDB: The Definitive Guide",
  "author": [ "Kristina Chodorow", "Mike Dirolf" ],
  "published_date": ISODate("2010-09-24"),
  "pages": 216,
  "language": "English",
  "publisher_id": "oreilly"
}
{
  "_id": 234567890,
  "title": "50 Tips and Tricks for MongoDB Developer",
  "author": "Kristina Chodorow",
  "published_date": ISODate("2011-05-06"),
  "pages": 68,
  "language": "English",
  "publisher_id": "oreilly"
}
This updated schema removes the unbounded array in the publishers collection and places a reference to the publisher in each book document using the publisher_id field. This ensures that each document has a manageable size, and there is no risk of a document field growing abnormally large.
$lookups
This approach works especially well if your application loads the book and publisher information separately. If your application requires the book and publisher information together, it needs to perform a $lookup operation to join the data from the publishers and books collections. $lookup operations are not very performant, but in this scenario may be worth the trade-off to avoid unbounded arrays.
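For example, a $lookup that joins each book with its publisher might look like the following minimal sketch in mongosh, using the publishers and books collections shown above:

// Join each book with its publisher document
db.books.aggregate([
  {
    $lookup: {
      from: "publishers",          // collection to join
      localField: "publisher_id",  // field in the books documents
      foreignField: "_id",         // field in the publishers documents
      as: "publisher"              // output array field on each result
    }
  }
])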
The Performance Advisor recognizes a query as slow if it takes longer to execute than the value of slowOpThresholdMs. By default, this value is 100 milliseconds. You can change the threshold with either the profile command or the db.setProfilingLevel() mongosh method.
Example
The following profile command example sets the threshold at 200 milliseconds:
db.runCommand({
  profile: 0,
  slowms: 200
})
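The same threshold can also be set with the db.setProfilingLevel() helper mentioned above. A minimal sketch, assuming MongoDB 4.2 or later (where the helper accepts an options document):

// Equivalent setting using the mongosh helper
db.setProfilingLevel(0, { slowms: 200 })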
If you are running MongoDB 3.6 or later, you can customize the percentage of slow queries in your logs used by the Performance Advisor by specifying the sampleRate parameter.
Example
The following example sets the slow query threshold to a lower value of 100 milliseconds and also sets the sample rate to 10%:
db.runCommand({
  profile: 0,
  slowms: 100,
  sampleRate: 0.1
})
Note
By default, the value of profile is 0. MongoDB recommends leaving this value unchanged since other values can negatively impact database performance. To learn more, see the profile command.
1. From your Context menu, click the project that has the hosts you want to configure.
2. Click Deployments.
3. Click Servers.
4. On the host where you want to activate Backup, click the ellipsis icon.
5. Click Activate Backup.
6. From the banner, click Review & Deploy.
7. If you want to activate Backup, click Confirm & Deploy. Otherwise, click Cancel, then Discard Changes to cancel activating Backup.
Note
Only one host can back up a deployment at a time. On the Servers tab, the host that is backing up the deployment displays Backup - active. Any other host with Backup activated displays Backup - standby.
1. From your Context menu, click the project that has the hosts you want to configure.
2. Click Deployments.
3. Click Servers.
4. On the host where you want to activate Monitoring, click the ellipsis icon.
5. Click Activate Monitoring.
6. From the banner, click Review & Deploy.
7. If you want to activate Monitoring, click Confirm & Deploy. Otherwise, click Cancel, then Discard Changes to cancel activating Monitoring.
Note
Only one host can monitor a deployment at a time. On the Servers tab, the host that is monitoring the deployment displays Monitoring - active. Any other host with Monitoring activated displays Monitoring - standby.
Multiple Monitoring Agents
You can activate Monitoring on multiple MongoDB Agents to distribute monitoring assignments and provide failover. Cloud Manager distributes monitoring assignments among up to 100 running MongoDB Agents. Each MongoDB Agent running active Monitoring monitors a different set of MongoDB processes. One MongoDB Agent running active Monitoring per project is the primary Monitor. The primary Monitor reports the cluster’s status to Cloud Manager. As MongoDB Agents have Monitoring enabled or disabled, Cloud Manager redistributes assignments. If the primary Monitor fails, Cloud Manager assigns another MongoDB Agent running active Monitoring to be the primary Monitor.
If you run more than 100 MongoDB Agents with active Monitoring, the additional MongoDB Agents run as standby MongoDB Agents. A standby MongoDB Agent is idle, except to log its status as a standby and periodically ask Cloud Manager if it should begin monitoring.
If you install multiple Monitoring Agents, ensure that all the MongoDB Agents with active Monitoring can reach all the mongod processes in the deployment.
To activate Monitoring on multiple MongoDB Agents, repeat the activation process on each MongoDB Agent.
We recommend that you rotate the automation user’s password periodically. Cloud Manager provides an automated procedure for password rotation with no downtime.
To enable password rotation for the automation user, you must meet the following requirement:
The Data Explorer provides an aggregation pipeline builder to process your data. Aggregation pipelines transform your documents into aggregated results based on selected pipeline stages.
The MongoDB Atlas aggregation pipeline builder is primarily designed for building pipelines, rather than executing them. The pipeline builder provides an easy way to export your pipeline to execute in a driver.
To interact with data in the Cloud Manager UI:
To create and execute aggregation pipelines in the Data Explorer, you must have been granted at least the Project Data Access Read Only role.
To use the $out stage in your pipeline, you must have been granted at least the Project Data Access Read/Write role.
The main panel and Namespaces on the left side list the collections in the database.
The main panel displays the Find, Indexes, and Aggregation views.
When you first open the Aggregation view, the Data Explorer displays an empty aggregation pipeline.
Select an aggregation stage from the Select dropdown in the bottom-left panel.
The toggle to the right of the dropdown dictates whether the stage is enabled.
Fill in your stage with the appropriate values. If Comment Mode is enabled, the pipeline builder provides syntactic guidelines for your selected stage.
As you modify your stage, the Data Explorer updates the preview documents on the right based on the results of the current stage.
There are two ways to add additional stages to your pipeline:
To delete a pipeline stage, click the trash icon on the desired stage.
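For example, if you select the $match stage from the dropdown, the stage body you fill in is the filter document itself; the builder pairs it with the selected operator. The following is an illustrative sketch only, with hypothetical field names:

// Body of a $match stage as entered in the pipeline builder;
// the { $match: ... } wrapper comes from the selected stage.
{
  language: "English",
  pages: { $gte: 100 }
}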
Use collation to specify language-specific rules for string comparison, such as rules for lettercase and accent marks.
To specify a collation document, click Collation at the top of the pipeline builder.
A collation document has the following fields:
{
  locale: <string>,
  caseLevel: <boolean>,
  caseFirst: <string>,
  strength: <int>,
  numericOrdering: <boolean>,
  alternate: <string>,
  maxVariable: <string>,
  backwards: <boolean>
}
The locale field is mandatory; all other collation fields are optional. For descriptions of the fields, see Collation Document.
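If you later run the same pipeline outside the UI, the collation document is passed as an aggregation option. A minimal sketch in mongosh, assuming an illustrative books collection and English, case-insensitive comparison:

// Sort titles using case-insensitive, English-language collation
db.books.aggregate(
  [ { $sort: { title: 1 } } ],
  { collation: { locale: "en", strength: 2 } }
)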
You can import aggregation pipelines from plain text into the pipeline builder to easily modify and verify your pipelines.
To import a pipeline from plain text:
Click the arrow next to the plus icon at the top of the pipeline builder.
Click New Pipeline from Text.
Your pipeline must match the syntax of the pipeline parameter of the db.collection.aggregate() method.
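For example, the following plain text is valid input because it is an array of stages, exactly as you would pass to db.collection.aggregate(); the fields shown are illustrative:

[
  { $match: { language: "English" } },
  { $project: { title: 1, author: 1, _id: 0 } }
]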
To return your pipeline to the initial blank state, click the plus icon at the top of the pipeline builder.
You can use the aggregation pipeline builder to export your finished pipeline to one of the supported driver languages: Java, Node, C#, and Python 3. Use this feature to format and export pipelines for use in your applications.
To export your aggregation pipeline:
For instructions on creating an aggregation pipeline, see Create an Aggregation Pipeline.
In the Export Pipeline To dropdown, select your desired language.
The My Pipeline pane on the left displays your pipeline in mongosh syntax.
The pane on the right displays your pipeline in the selected language.
(Optional): Check the Include Import Statements option to include the required import statements for the language selected.
Click the Copy button at the top-right of the pipeline to copy the pipeline for the selected language to your clipboard. You can now integrate your pipeline into your application.
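For example, a pipeline exported for Node can be passed directly to the driver's aggregate() method. The following is a minimal sketch, assuming a local deployment and illustrative database, collection, and field names:

// Run an exported pipeline with the MongoDB Node driver
const { MongoClient } = require("mongodb");

async function run() {
  const client = new MongoClient("mongodb://localhost:27017");
  try {
    await client.connect();
    const books = client.db("publishing").collection("books");
    // Pipeline copied from the aggregation pipeline builder
    const pipeline = [
      { $match: { language: "English" } },
      { $group: { _id: "$publisher_id", totalBooks: { $sum: 1 } } }
    ];
    const results = await books.aggregate(pipeline).toArray();
    console.log(results);
  } finally {
    await client.close();
  }
}

run().catch(console.error);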
To modify the aggregation pipeline builder settings:
You can modify the following settings:
Setting | Description | Default
---|---|---
Comment Mode | When enabled, the Data Explorer adds helper comments to each stage. Note: Changing this setting only affects new stages and does not modify stages which have already been added to your pipeline. | On
Number of Preview Documents | Number of documents to show in the preview for each stage. | 20
Before the introduction of the MongoDB Agent, each function – Automation, Backup, and Monitoring – ran as a separate agent binary in your project.
The MongoDB Agent runs as a single binary that can perform any – or all – of the three functions depending upon what you need.