The `oci` exporter outputs the build result into an OCI image layout tarball. The `docker` exporter behaves the same way, except it exports a Docker image layout instead.

The `docker` driver doesn't support these exporters. You must use `docker-container` or some other driver if you want to generate these outputs.
Build a container image using the `oci` and `docker` exporters:
$ docker buildx build --output type=oci[,parameters] .
$ docker buildx build --output type=docker[,parameters] .
The following table describes the available parameters:

Parameter | Type | Default | Description |
---|---|---|---|
`name` | String |  | Specify image name(s) |
`dest` | String |  | Path |
`tar` | `true`,`false` | `true` | Bundle the output into a tarball layout |
`compression` | `uncompressed`,`gzip`,`estargz`,`zstd` | `gzip` | Compression type, see compression |
`compression-level` | `0..22` |  | Compression level, see compression |
`force-compression` | `true`,`false` | `false` | Forcefully apply compression, see compression |
`oci-mediatypes` | `true`,`false` |  | Use OCI media types in exporter manifests. Defaults to `true` for `type=oci`, and `false` for `type=docker`. See OCI media types |
`buildinfo` | `true`,`false` | `true` | Attach inline build info |
`buildinfo-attrs` | `true`,`false` | `false` | Attach inline build info attributes |
`annotation.<key>` | String |  | Attach an annotation with the respective `key` and `value` to the built image, see annotations |
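As a sketch of how these parameters combine in practice (the tarball name and image name below are arbitrary examples, not required values):

```shell
# Export the build result as an OCI image layout tarball named out.tar,
# compressed with zstd. dest, name, and compression are exporter parameters.
docker buildx build \
  --output type=oci,dest=out.tar,name=registry.example.com/myimage:latest,compression=zstd \
  .
```

The resulting `out.tar` can be inspected with standard tooling (for example `tar -tf out.tar`) to see the OCI layout structure.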
These exporters support adding OCI annotations using the `annotation.*` dot notation parameter. The following example sets the `org.opencontainers.image.title` annotation for a build:
$ docker buildx build \
--output "type=<type>,name=<registry>/<image>,annotation.org.opencontainers.image.title=<title>" .
For more information about annotations, see BuildKit documentation.
For more information on the `oci` or `docker` exporters, see the BuildKit README.
Docker Build is one of Docker Engine's most used features. Whenever you create an image, you are using Docker Build. Build is a key part of your software development life cycle, allowing you to package and bundle your code and ship it anywhere.
The Docker Engine uses a client-server architecture and is composed of multiple components and tools. The most common method of executing a build is by issuing a `docker build` command. The CLI sends the request to Docker Engine which, in turn, executes your build.
There are now two components in Engine that can be used to build an image. Starting with the 18.09 release, Engine ships with Moby BuildKit, the new component for executing your builds by default.
The new client, Docker Buildx, is a CLI plugin that extends the `docker` command with full support for the features provided by the BuildKit builder toolkit. The `docker buildx build` command provides the same user experience as `docker build`, with many new features like creating scoped builder instances, building against multiple nodes concurrently, outputs configuration, inline build caching, and specifying target platform. In addition, Buildx also supports new features that aren't yet available for regular `docker build`, like building manifest lists, distributed caching, and exporting build results to OCI image tarballs.
Docker Build is more than a simple build command, and it's not only about packaging your code. It's a whole ecosystem of tools and features that supports not only common workflow tasks but also more complex and advanced scenarios.
Build and package your application to run it anywhere: locally or in the cloud.
Keep your images small and secure with minimal dependencies.
Build, push, pull, and run images seamlessly on different computer architectures.
Configure where and how you run your builds.
Avoid unnecessary repetitions of costly operations, such as package installs.
Learn how to use Docker in your continuous integration pipelines.
Export any artifact you like, not just Docker images.
Orchestrate your builds with Bake.
Learn about the Dockerfile frontend for BuildKit.
Take a deep dive into the internals of BuildKit to get the most out of your builds.
Docker Buildx is included by default in Docker Desktop.
Docker Linux packages also include Docker Buildx when installed using the `.deb` or `.rpm` packages.

Here is how to install and use Buildx inside a Dockerfile through the `docker/buildx-bin` image:
# syntax=docker/dockerfile:1
FROM docker
COPY --from=docker/buildx-bin:latest /buildx /usr/libexec/docker/cli-plugins/docker-buildx
RUN docker buildx version
Important
This section is for unattended installation of the Buildx component. These instructions are mostly suitable for testing purposes. We do not recommend installing Buildx using manual download in production environments as it will not be updated automatically with security updates.
On Windows, macOS, and Linux workstations we recommend that you install Docker Desktop instead. For Linux servers, we recommend that you follow the instructions specific for your distribution.
You can also download the latest binary from the releases page on GitHub.
Rename the relevant binary and copy it to the destination matching your OS:
OS | Binary name | Destination folder |
---|---|---|
Linux | `docker-buildx` | `$HOME/.docker/cli-plugins` |
macOS | `docker-buildx` | `$HOME/.docker/cli-plugins` |
Windows | `docker-buildx.exe` | `%USERPROFILE%\.docker\cli-plugins` |
Or copy it into one of these folders to install it system-wide.

On Unix environments:

`/usr/local/lib/docker/cli-plugins` OR `/usr/local/libexec/docker/cli-plugins`
`/usr/lib/docker/cli-plugins` OR `/usr/libexec/docker/cli-plugins`

On Windows:

`C:\ProgramData\Docker\cli-plugins`
`C:\Program Files\Docker\cli-plugins`
Note

On Unix environments, it may also be necessary to make the binary executable with `chmod +x`:

$ chmod +x ~/.docker/cli-plugins/docker-buildx
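Putting the manual steps together, a download-and-install sketch for a Linux amd64 workstation might look like the following. The version number and release asset name are illustrative assumptions; check the GitHub releases page for the current version and the exact asset name for your platform.

```shell
# Illustrative version; pick the latest from the buildx releases page
VERSION="v0.9.1"

# Download the Linux amd64 binary (assumed release asset naming pattern)
curl -sSL -o docker-buildx \
  "https://github.com/docker/buildx/releases/download/${VERSION}/buildx-${VERSION}.linux-amd64"

# Install as a user-level CLI plugin and make it executable
mkdir -p "$HOME/.docker/cli-plugins"
mv docker-buildx "$HOME/.docker/cli-plugins/docker-buildx"
chmod +x "$HOME/.docker/cli-plugins/docker-buildx"

# Verify the plugin is picked up by the docker CLI
docker buildx version
```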
Running the command `docker buildx install` sets up the `docker build` command as an alias to `docker buildx`. This results in the ability to have `docker build` use the current Buildx builder. To remove this alias, run `docker buildx uninstall`.
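In practice the alias round-trip looks like this (the image tag is an arbitrary example):

```shell
# Make `docker build` an alias for `docker buildx build`
docker buildx install

# This now runs through Buildx and the current builder instance
docker build -t example/app:dev .

# Restore the default `docker build` behavior
docker buildx uninstall
```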
This page contains information about the new features, improvements, and bug fixes in Docker Buildx.
2022-08-18
- `inspect` command now displays the BuildKit version in use docker/buildx#1279
For more details, see the complete release notes in the Buildx GitHub repository.
2022-08-17
- New `remote` driver that you can use to connect to any already running BuildKit instance docker/buildx#1078 docker/buildx#1093 docker/buildx#1094 docker/buildx#1103 docker/buildx#1134 docker/buildx#1204
- `oci-layout://` for loading build context from local OCI layout directories. Note that this feature depends on an unreleased BuildKit feature, and a builder instance from `moby/buildkit:master` needs to be used until BuildKit v0.11 is released docker/buildx#1173
- `--print` flag to run helper functions supported by the BuildKit frontend performing the build and print their results. You can use this feature in a Dockerfile to show the build arguments and secrets that the current build supports with `--print=outline`, and list all available Dockerfile stages with `--print=targets`. This feature is experimental for gathering early feedback and requires enabling the `BUILDX_EXPERIMENTAL=1` environment variable. We plan to update/extend this feature in the future without keeping backward compatibility docker/buildx#1100 docker/buildx#1272
- `--invoke` flag to launch interactive containers from build results for an interactive debugging cycle. You can reload these containers with code changes or restore them to an initial state from the special monitor mode. This feature is experimental for gathering early feedback and requires enabling the `BUILDX_EXPERIMENTAL=1` environment variable. We plan to update/extend this feature in the future without keeping backward compatibility docker/buildx#1168 docker/buildx#1257 docker/buildx#1259
- `BUILDKIT_COLORS` and `NO_COLOR` to customize/disable the colors of the interactive build progress bar docker/buildx#1230 docker/buildx#1226
- `buildx ls` command now shows the current BuildKit version of each builder instance docker/buildx#998
- `bake` command now loads the `.env` file automatically when building Compose files, for compatibility docker/buildx#1261
- `cache_to` definition docker/buildx#1155
- `timestamp()` to access current time docker/buildx#1214
- `x-bake` docker/buildx#1256
- `buildx ls` command output has been updated with better access to errors from different builders docker/buildx#1109
- `buildx create` command now performs additional validation of builder parameters to avoid creating a builder instance with invalid configuration docker/buildx#1206
- `buildx imagetools create` command can now create new multi-platform images even if the source subimages are located on different repositories or registries docker/buildx#1137
- `--config` value docker/buildx#1111
- `dockerd` instance supports initially disabled BuildKit features like multi-platform images docker/buildx#1260 docker/buildx#1262
- Targets with `.` in the name are now converted to use `_` so the selector keys can still be used in such targets docker/buildx#1011
- `remove` command now displays the removed builder and forbids removing context builders docker/buildx#1128
- `securityContext` in the `kubernetes` driver docker/buildx#1052
- `prune` command docker/buildx#1252
- `--builder` flag correctly docker/buildx#1067
For more details, see the complete release notes in the Buildx GitHub repository.
2022-04-04
- `buildx bake` to v1.2.1 to fix parsing ports definition docker/buildx#1033
- `buildx bake` when already loaded by a parent group docker/buildx#1021
For more details, see the complete release notes in the Buildx GitHub repository.
2022-03-21
- `.` on Compose target names in `buildx bake` for backward compatibility docker/buildx#1018
For more details, see the complete release notes in the Buildx GitHub repository.
2022-03-09
- New `--build-context` flag to define additional named build contexts for your builds docker/buildx#904
- `imagetools inspect` now accepts a `--format` flag allowing access to config and buildinfo for specific images docker/buildx#854 docker/buildx#972
- `--no-cache-filter` allows configuring the build so it ignores cache only for specified Dockerfile stages docker/buildx#860
- `BUILDKIT_INLINE_BUILDINFO_ATTRS` allows opting in to embedding build attributes in the resulting image docker/buildx#908
- `--keep-buildkitd` allows keeping the BuildKit daemon running when removing a builder
- `--metadata-file` output now supports embedded structure types docker/buildx#946
- `buildx rm` now accepts a new flag `--all-inactive` for removing all builders that are not currently running docker/buildx#885
- `-f -` docker/buildx#864
- `--iidfile` now always writes the image config digest independently of the driver being used (use `--metadata-file` for digest) docker/buildx#980
- `docker` driver docker/buildx#989
- `du` command docker/buildx#867
- `UsernsMode` when using rootless container docker/buildx#887
For more details, see the complete release notes in the Buildx GitHub repository.
2021-08-25
- `.dockerignore` docker/buildx#858
- `bake --print` JSON output for current group docker/buildx#857
For more details, see the complete release notes in the Buildx GitHub repository.
2021-11-10
- `docker-container` and `kubernetes` drivers docker/buildx#787
- `--ulimit` flag for feature parity docker/buildx#800
- `--shm-size` flag for feature parity docker/buildx#790
- `--quiet` for feature parity docker/buildx#740
- `--cgroup-parent` flag for feature parity docker/buildx#814
- `BAKE_LOCAL_PLATFORM` docker/buildx#748
- `x-bake` extension field in Compose files docker/buildx#721
- `kubernetes` driver now supports colon-separated `KUBECONFIG` docker/buildx#761
- `kubernetes` driver now supports setting the BuildKit config file with `--config` docker/buildx#682
- `kubernetes` driver now supports installing QEMU emulators with driver-opt docker/buildx#682
- `buildx imagetools` command docker/buildx#825
- `buildx create --bootstrap` docker/buildx#692
- `registry:insecure` output option for multi-node pushes docker/buildx#825
- `--print` docker/buildx#720
- `docker` driver now dials the build session over HTTP for better performance docker/buildx#804
- `--iidfile` together with a multi-node push docker/buildx#826
- `--push` in Bake does not clear other image export options in the file docker/buildx#773
- `buildx bake` when the `https` protocol was used docker/buildx#822
- `--builder` flags for commands that don't use it docker/buildx#818
For more details, see the complete release notes in the Buildx GitHub repository.
2021-08-30
For more details, see the complete release notes in the Buildx GitHub repository.
2021-08-21
For more details, see the complete release notes in the Buildx GitHub repository.
2021-07-30
- `ConfigFile` to parse compose files with Bake docker/buildx#704
For more details, see the complete release notes in the Buildx GitHub repository.
2021-07-16
- `--cache-to type=gha` and `--cache-from type=gha` docker/buildx#535
- `--metadata-file` flag has been added to the build and Bake commands, allowing build result metadata to be saved in JSON format docker/buildx#605
- `kubernetes` driver now supports defining resources/limits docker/buildx#618
- `docker-container` driver now keeps BuildKit state in a volume, enabling updates while keeping state docker/buildx#672
- `moby/buildkit:buildx-stable-1-rootless` docker/buildx#480
- `imagetools create` command now correctly merges the JSON descriptor with the old one docker/buildx#592
- `--network=none` not requiring extra security entitlements docker/buildx#531
For more details, see the complete release notes in the Buildx GitHub repository.
2020-12-15
- `--platform` on `buildx create` outside the `kubernetes` driver docker/buildx#475
For more details, see the complete release notes in the Buildx GitHub repository.
2020-12-15
- `docker` driver now supports the `--push` flag docker/buildx#442
- `BUILDX_CONFIG` env var allows users to have separate buildx state from the Docker config docker/buildx#385
- `BUILDKIT_MULTI_PLATFORM` build arg allows forcing building multi-platform return objects even if only one `--platform` is specified docker/buildx#467
- `--append` to be used with the `kubernetes` driver docker/buildx#370
- `--debug` docker/buildx#389
- `kubernetes` driver docker/buildx#368 docker/buildx#460
- `docker-container` driver docker/buildx#462
- `--builder` flag to switch to default instance docker/buildx#425
- `BUILDX_NO_DEFAULT_LOAD` config value docker/buildx#390
- `quiet` option by a warning docker/buildx#403
For more details, see the complete release notes in the Buildx GitHub repository.
2020-08-22
- `cacheonly` exporter docker/buildx#337
- `go-cty` to pull in more `stdlib` functions docker/buildx#277
- `--builder` is wired from root options docker/buildx#321
For more details, see the complete release notes in the Buildx GitHub repository.
2020-05-01
For more details, see the complete release notes in the Buildx GitHub repository.
2020-04-30
- `kubernetes` driver docker/buildx#167
- `--builder` flag to override builder instance for a single command docker/buildx#246
- `prune` and `du` commands for managing local builder cache docker/buildx#249
- `pull` and `no-cache` options for HCL targets docker/buildx#165
- `--load` and `--push` docker/buildx#164
- `driver-opt` docker/buildx#170
For more details, see the complete release notes in the Buildx GitHub repository.
2019-09-27
- `build -f -` docker/buildx#153
For more details, see the complete release notes in the Buildx GitHub repository.
2019-08-02
- `buildkitd` daemon flags docker/buildx#102
- `create` docker/buildx#122
- `--no-cache` and `--pull` docker/buildx#118
- `build --allow` docker/buildx#104
- `--build-arg foo` would not read `foo` from the environment docker/buildx#116
For more details, see the complete release notes in the Buildx GitHub repository.
2019-05-30
For more details, see the complete release notes in the Buildx GitHub repository.
2019-05-25
- `BUILDKIT_PROGRESS` env var docker/buildx#69
- `local` platform docker/buildx#70
For more details, see the complete release notes in the Buildx GitHub repository.
2019-04-25
For more details, see the complete release notes in the Buildx GitHub repository.
This document outlines the conversion of an application defined in a Compose file to ACI objects. At a high level, each Compose deployment is mapped to a single ACI container group. Each service is mapped to a container in the container group. The Docker ACI integration does not allow scaling of services.
The table below lists supported Compose file fields and their ACI counterparts.
Legend: ✓ = supported, x = not supported.

Keys | Map | Notes |
---|---|---|
Service | ✓ |  |
service.build | x | Ignored. No image build support on ACI. |
service.cap_add, cap_drop | x |  |
service.command | ✓ | Override container Command. On ACI, specifying `command` will override the image command and entrypoint, if the image has a command or entrypoint defined |
service.configs | x |  |
service.cgroup_parent | x |  |
service.container_name | x | Service name is used as container name on ACI. |
service.credential_spec | x |  |
service.deploy | ✓ |  |
service.deploy.endpoint_mode | x |  |
service.deploy.mode | x |  |
service.deploy.replicas | x | Only one replica is started for each service. |
service.deploy.placement | x |  |
service.deploy.update_config | x |  |
service.deploy.resources | ✓ | Restriction: ACI resource limits cannot be greater than the sum of resource reservations for all containers in the container group. Using container limits that are greater than container reservations will cause containers in the same container group to compete for resources. |
service.deploy.restart_policy | ✓ | One of: `any`, `none`, `on-failure`. Restriction: All services must have the same restart policy. The entire ACI container group will be restarted if needed. |
service.deploy.labels | x | ACI does not have container-level labels. |
service.devices | x |  |
service.depends_on | x |  |
service.dns | x |  |
service.dns_search | x |  |
service.domainname | ✓ | Mapped to ACI DNSLabelName. Restriction: all services must specify the same `domainname`, if specified. `domainname` must be unique globally in `<REGION>.azurecontainer.io` |
service.tmpfs | x |  |
service.entrypoint | x | ACI only supports overriding the container command. |
service.env_file | ✓ |  |
service.environment | ✓ |  |
service.expose | x |  |
service.extends | x |  |
service.external_links | x |  |
service.extra_hosts | x |  |
service.group_add | x |  |
service.healthcheck | ✓ |  |
service.hostname | x |  |
service.image | ✓ | Private images will be accessible if the user is logged into the corresponding registry at deploy time. Users will be automatically logged in to Azure Container Registry using their Azure login if possible. |
service.isolation | x |  |
service.labels | x | ACI does not have container-level labels. |
service.links | x |  |
service.logging | x |  |
service.network_mode | x |  |
service.networks | x | Communication between services is implemented by defining a mapping for each service in the shared `/etc/hosts` file of the container group. Each service can resolve names for other services, and the resulting network calls will be redirected to `localhost`. |
service.pid | x |  |
service.ports | ✓ | Only symmetrical port mapping is supported in ACI. See Exposing ports. |
service.secrets | ✓ | See Secrets. |
service.security_opt | x |  |
service.stop_grace_period | x |  |
service.stop_signal | x |  |
service.sysctls | x |  |
service.ulimits | x |  |
service.userns_mode | x |  |
service.volumes | ✓ | Mapped to Azure file shares. See Persistent volumes. |
service.restart | x | Replaced by service.deploy.restart_policy |
 |  |  |
Volume | x |  |
driver | ✓ | See Persistent volumes. |
driver_opts | ✓ |  |
external | x |  |
labels | x |  |
 |  |  |
Secret | x |  |
TBD | x |  |
 |  |  |
Config | x |  |
TBD | x |  |
Container logs can be obtained for each container with `docker logs <CONTAINER>`.

The Docker ACI integration does not currently support aggregated logs for containers in a Compose application, see https://github.com/docker/compose-cli/issues/803.
When one or more services expose ports, the entire ACI container group will be exposed and will get a public IP allocated. As all services are mapped to containers in the same container group, only one service can expose a given port number. ACI does not support port mapping, so the source and target ports defined in the Compose file must be the same.
When exposing ports, a service can also specify the `domainname` field to set a DNS hostname. `domainname` will be used to specify the ACI DNS Label Name, and the ACI container group will be reachable at `<DOMAINNAME>.<REGION>.azurecontainer.io`.
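A sketch of a Compose service exposing a port with a DNS label follows; the service name and domain label are arbitrary examples:

```yaml
services:
  web:
    image: nginx
    # ACI only supports symmetrical mapping: host and container port must match
    ports:
      - "80:80"
    # All services must share the same domainname, if one is set
    domainname: myapp-example
```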
Docker volumes are mapped to Azure file shares. Only the long Compose volume format is supported, meaning that volumes must be defined in the `volumes` section.
Volumes are defined with a name, the `driver` field must be set to `azure_file`, and `driver_opts` must define the storage account and file share to use for the volume.
A service can then reference the volume by its name, and specify the target path to be mounted in the container.
services:
  myservice:
    image: nginx
    volumes:
      - mydata:/mount/testvolumes

volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount
The short volume syntax is not allowed for ACI volumes, as it was designed for local path bind mounting when running local containers. A Compose file can define several volumes, with different Azure file shares or storage accounts.
Credentials for storage accounts will be automatically fetched at deployment time using the Azure login to retrieve the storage account key for each storage account used.
Secrets can be defined in Compose files, and will need secret files available at deploy time next to the Compose file.
The content of the secret file will be made available inside selected containers, by default under `/run/secrets/<SECRET_NAME>`.
External secrets are not supported with the ACI integration.
services:
  nginx:
    image: nginx
    secrets:
      - mysecret1
  db:
    image: mysql
    secrets:
      - mysecret2

secrets:
  mysecret1:
    file: ./my_secret1.txt
  mysecret2:
    file: ./my_secret2.txt
The nginx container will have mysecret1 mounted as `/run/secrets/mysecret1`, and the db container will have mysecret2 mounted as `/run/secrets/mysecret2`.

A target can also be specified, either to set the name of the mounted file or to give an absolute path where the secret file should be mounted:
services:
  nginx:
    image: nginx
    secrets:
      - source: mysecret1
        target: renamedsecret1.txt
  db:
    image: mysql
    secrets:
      - source: mysecret1
        target: /mnt/dbmount/mysecretonmount1.txt
      - source: mysecret2
        target: /mnt/dbmount/mysecretonmount2.txt

secrets:
  mysecret1:
    file: ./my_secret1.txt
  mysecret2:
    file: ./my_secret2.txt
In this example, the `nginx` service will have its secret mounted to `/run/secrets/renamedsecret1.txt`, and `db` will have two files (`mysecretonmount1.txt` and `mysecretonmount2.txt`), both mounted in the same folder (`/mnt/dbmount/`).
Note: Relative file paths are not allowed in the target.

Note: Secret files cannot be mounted in a folder next to other existing files.
CPU and memory reservations and limits can be set in Compose. Resource limits must be greater than reservations. In ACI, setting resource limits different from resource reservations will cause containers in the same container group to compete for resources. Resource limits cannot be greater than the total resource reservation for the container group. (Therefore single containers cannot have resource limits different from resource reservations.)
services:
  db:
    image: mysql
    deploy:
      resources:
        reservations:
          cpus: '2'
          memory: 2G
        limits:
          cpus: '3'
          memory: 3G
  web:
    image: nginx
    deploy:
      resources:
        reservations:
          cpus: '1.5'
          memory: 1.5G
In this example, the db container will be allocated 2 CPUs and 2G of memory. It will be allowed to use up to 3 CPUs and 3G of memory, using some of the resources allocated to the web container. The web container will have its limits set to the same values as reservations, by default.
A health check can be described in the `healthcheck` section of each service. This is translated to a `LivenessProbe` in ACI. If the health check fails, the container is considered unhealthy and terminated.

In order for the container to be restarted automatically, the service needs to have a restart policy other than `none` set. Note that the default restart policy, if one isn't set, is `any`.
services:
  web:
    image: nginx
    deploy:
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
Note: the `test` command can be a string or an array, optionally starting with `NONE`, `CMD`, or `CMD-SHELL`. In the ACI implementation, these prefixes are ignored.
Single containers can be executed on ACI with the `docker run` command. A single container is executed in its own ACI container group, which contains a single container.
Containers can be listed with the `docker ps` command, and stopped and removed with `docker stop <CONTAINER>` and `docker rm <CONTAINER>`.
The table below lists supported `docker run` flags and their ACI counterparts.

Legend: ✓ = supported, x = not supported.
Flag | Map | Notes |
---|---|---|
--cpus | ✓ | See Container Resources. |
-d, --detach | ✓ | Detach from container logs when the container starts. By default, the command line stays attached and follows container logs. |
--domainname | ✓ | See Exposing ports. |
-e, --env | ✓ | Sets environment variable. |
--env-file | ✓ | Sets environment variables from an external file. |
--health-cmd | ✓ | Specify healthcheck command. See Healthchecks. |
--health-interval | ✓ | Specify healthcheck interval |
--health-retries | ✓ | Specify healthcheck number of retries |
--health-start-period | ✓ | Specify healthcheck initial delay |
--health-timeout | ✓ | Specify healthcheck timeout |
-l, --label | x | Unsupported in Docker ACI integration, due to limitations of ACI Tags. |
-m, --memory | ✓ | See Container Resources. |
--name | ✓ | Provide a name for the container. The name must be unique within the ACI resource group. A name is generated by default. |
-p, --publish | ✓ | See Exposing ports. Only symmetrical port mapping is supported in ACI. |
--restart | ✓ | Restart policy, must be one of: `always`, `no`, `on-failure`. |
--rm | x | Not supported, as ACI does not support auto-delete containers. |
-v, --volume | ✓ | See Persistent Volumes. |
You can expose one or more ports of a container with `docker run -p <PORT>:<PORT>`. If ports are exposed when running a container, the corresponding ACI container group will be exposed with a public IP allocated and the required port(s) accessible.

Note: ACI does not support port mapping, so the same port number must be specified when using `-p <PORT>:<PORT>`.
When exposing ports, a container can also specify the `--domainname` flag to set a DNS hostname. `domainname` will be used to specify the ACI DNS Label Name, and the ACI container group will be reachable at `<DOMAINNAME>.<REGION>.azurecontainer.io`. `domainname` must be unique globally in `<REGION>.azurecontainer.io`.
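Combining these flags, a run sketch might look like the following. It assumes an ACI docker context is active; the container name and DNS label are arbitrary examples:

```shell
# Run nginx on ACI with a public port and a DNS label.
# Port mapping must be symmetrical (same host and container port).
docker run -d --name web \
  -p 80:80 \
  --domainname myapp-example \
  nginx
```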
Docker volumes are mapped to Azure file shares; each file share is part of an Azure Storage Account. One or more volumes can be specified with `docker run -v <STORAGE-ACCOUNT>/<FILESHARE>:<TARGET-PATH>`.

A run command can use the `--volume` or `-v` flag several times for different volumes. The volumes can use the same or different storage accounts. The target paths for different volume mounts must be different and must not overlap.
There is no support for mounting a single file, or mounting a subfolder from an Azure file share.

Credentials for storage accounts will be automatically fetched at deployment time using the Azure login to retrieve the storage account key for each storage account used.

CPU and memory reservations can be set when running containers with `docker run --cpus 1.5 --memory 2G`.
It is not possible to set resource limits that differ from resource reservation on single containers. ACI allows setting resource limits for containers in a container group but these limits must stay within the reserved resources for the entire group. In the case of a single container deployed in a container group, the resource limits must be equal to the resource reservation.
You can view container logs with the command `docker logs <CONTAINER-ID>`. You can follow logs with the `--follow` (`-f`) option.

When running a container with `docker run`, by default the command line stays attached to container logs when the container starts. Use `docker run --detach` to not follow logs once the container starts.
Note: Following ACI logs may have display issues especially when resizing a terminal that is following container logs. This is due to ACI providing raw log pulling but no streaming of logs. Logs are effectively pulled every 2 seconds when following logs.
A health check can be described using the flags prefixed by `--health-`. This is translated into a `LivenessProbe` for ACI. If the health check fails, the container is considered unhealthy and terminated.

In order for the container to be restarted automatically, the container needs to be run with a restart policy (set by the `--restart` flag) other than `no`. Note that the default restart policy, if one isn't set, is `no`.
In order to restart automatically, the container also needs a restart policy set with `--restart` (`docker run` defaults to no restart policy).
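Putting the health flags together, a run sketch might look like the following; the image, health command, and timings are arbitrary examples:

```shell
# The --health-* flags translate to an ACI LivenessProbe.
# --restart must be something other than "no" for automatic restarts.
docker run -d --name web \
  --health-cmd "curl -f http://localhost:80" \
  --health-interval 30s \
  --health-timeout 10s \
  --health-retries 3 \
  --restart on-failure \
  nginx
```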