Docker Buildx is included by default in Docker Desktop. Docker Linux packages also include Docker Buildx when installed using the .deb or .rpm packages.
Here is how to install and use Buildx inside a Dockerfile through the docker/buildx-bin image:
# syntax=docker/dockerfile:1
FROM docker
COPY --from=docker/buildx-bin:latest /buildx /usr/libexec/docker/cli-plugins/docker-buildx
RUN docker buildx version
Important
This section is for unattended installation of the Buildx component. These instructions are mostly suitable for testing purposes. We do not recommend installing Buildx using manual download in production environments, as manually installed binaries are not updated automatically with security updates.
On Windows, macOS, and Linux workstations we recommend that you install Docker Desktop instead. For Linux servers, we recommend that you follow the instructions specific to your distribution.
You can also download the latest binary from the releases page on GitHub.
Rename the relevant binary and copy it to the destination matching your OS:
OS | Binary name | Destination folder |
---|---|---|
Linux | docker-buildx | $HOME/.docker/cli-plugins |
macOS | docker-buildx | $HOME/.docker/cli-plugins |
Windows | docker-buildx.exe | %USERPROFILE%\.docker\cli-plugins |
Or copy it into one of these folders to install it system-wide.
On Unix environments:
- /usr/local/lib/docker/cli-plugins OR /usr/local/libexec/docker/cli-plugins
- /usr/lib/docker/cli-plugins OR /usr/libexec/docker/cli-plugins
On Windows:
- C:\ProgramData\Docker\cli-plugins
- C:\Program Files\Docker\cli-plugins
Note
On Unix environments, it may also be necessary to make it executable with chmod +x:
$ chmod +x ~/.docker/cli-plugins/docker-buildx
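Putting the manual steps together, a minimal sketch for Linux amd64, assuming a hypothetical release version v0.9.1 (check the releases page for the current version and the asset matching your platform):
$ mkdir -p ~/.docker/cli-plugins
$ curl -sSL -o ~/.docker/cli-plugins/docker-buildx \
    https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-amd64
$ chmod +x ~/.docker/cli-plugins/docker-buildx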
Running the command docker buildx install sets up the docker build command as an alias to docker buildx. This lets docker build use the current Buildx builder. To remove this alias, run docker buildx uninstall.
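As a quick sanity check, a sketch of enabling and then removing the alias (assuming a Dockerfile in the current directory):
$ docker buildx install
$ docker build .   # now runs through the current Buildx builder
$ docker buildx uninstall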
This page contains information about the new features, improvements, and bug fixes in Docker Buildx.
2022-08-18
- The inspect command now displays the BuildKit version in use. docker/buildx#1279
For more details, see the complete release notes in the Buildx GitHub repository.
2022-08-17
- New remote driver that you can use to connect to any already running BuildKit instance. docker/buildx#1078 docker/buildx#1093 docker/buildx#1094 docker/buildx#1103 docker/buildx#1134 docker/buildx#1204
- Support for oci-layout:// for loading build context from local OCI layout directories. Note that this feature depends on an unreleased BuildKit feature, and a builder instance from moby/buildkit:master needs to be used until BuildKit v0.11 is released. docker/buildx#1173
- New --print flag to run helper functions supported by the BuildKit frontend performing the build and print their results. You can use this feature in a Dockerfile to show the build arguments and secrets that the current build supports with --print=outline, and to list all available Dockerfile stages with --print=targets. This feature is experimental for gathering early feedback and requires setting the BUILDX_EXPERIMENTAL=1 environment variable. We plan to update/extend this feature in the future without keeping backward compatibility. docker/buildx#1100 docker/buildx#1272
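A hypothetical invocation of this experimental flag, assuming a Dockerfile in the current directory:
$ BUILDX_EXPERIMENTAL=1 docker buildx build --print=outline .
$ BUILDX_EXPERIMENTAL=1 docker buildx build --print=targets .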
- New --invoke flag to launch interactive containers from build results for an interactive debugging cycle. You can reload these containers with code changes or restore them to an initial state from the special monitor mode. This feature is experimental for gathering early feedback and requires setting the BUILDX_EXPERIMENTAL=1 environment variable. We plan to update/extend this feature in the future without keeping backward compatibility. docker/buildx#1168 docker/buildx#1257 docker/buildx#1259
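A sketch of this experimental flag, dropping into a shell on the build result (assuming a local Dockerfile):
$ BUILDX_EXPERIMENTAL=1 docker buildx build --invoke /bin/sh .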
- Support for the BUILDKIT_COLORS and NO_COLOR environment variables to customize/disable the colors of the interactive build progress bar. docker/buildx#1230 docker/buildx#1226
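For instance, to disable colors entirely, or to set a custom palette (the exact BUILDKIT_COLORS value here is illustrative):
$ NO_COLOR=1 docker buildx build .
$ BUILDKIT_COLORS=run=green:warning=yellow:error=red:cancel=cyan docker buildx build .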
- The buildx ls command now shows the current BuildKit version of each builder instance. docker/buildx#998
- The bake command now loads the .env file automatically when building Compose files, for compatibility. docker/buildx#1261
- Compose files in bake now support the cache_to definition. docker/buildx#1155
- Bake now supports a new timestamp() function to access the current time. docker/buildx#1214
- Updates to the x-bake extension field in Compose files. docker/buildx#1256
- The buildx ls command output has been updated with better access to errors from different builders. docker/buildx#1109
- The buildx create command now performs additional validation of builder parameters to avoid creating a builder instance with an invalid configuration. docker/buildx#1206
- The buildx imagetools create command can now create new multi-platform images even if the source subimages are located on different repositories or registries. docker/buildx#1137
- Improved handling of the --config value. docker/buildx#1111
- Detection of whether the dockerd instance supports initially disabled BuildKit features such as multi-platform images. docker/buildx#1260 docker/buildx#1262
- Target names with a . in the name are now converted to use _ so the selector keys can still be used in such targets. docker/buildx#1011
- The remove command now displays the removed builder and forbids removing context builders. docker/buildx#1128
- Fixes for securityContext in the kubernetes driver. docker/buildx#1052
- Fixes for the prune command. docker/buildx#1252
- Commands now handle the --builder flag correctly. docker/buildx#1067
For more details, see the complete release notes in the Buildx GitHub repository.
2022-04-04
- Updated the Compose specification used by buildx bake to v1.2.1 to fix parsing of the ports definition. docker/buildx#1033
- Fixes for buildx bake when a target is already loaded by a parent group. docker/buildx#1021
For more details, see the complete release notes in the Buildx GitHub repository.
2022-03-21
- Allow . in Compose target names in buildx bake for backward compatibility. docker/buildx#1018
For more details, see the complete release notes in the Buildx GitHub repository.
2022-03-09
- New --build-context flag to define additional named build contexts for your builds. docker/buildx#904
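For example, a sketch that exposes a sibling directory to the build under the hypothetical name mylib, which the Dockerfile can then reference with COPY --from=mylib:
$ docker buildx build --build-context mylib=../mylib .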
- imagetools inspect now accepts a --format flag, allowing access to config and buildinfo for specific images. docker/buildx#854 docker/buildx#972
- New --no-cache-filter flag allows configuring the build to ignore cache only for specified Dockerfile stages. docker/buildx#860
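A hypothetical invocation that ignores the cache only for a stage named build (the stage name is illustrative):
$ docker buildx build --no-cache-filter build .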
- New build argument BUILDKIT_INLINE_BUILDINFO_ATTRS allows opting in to embedding build attributes in the resulting image. docker/buildx#908
- New --keep-buildkitd flag allows keeping the BuildKit daemon running when removing a builder.
- --metadata-file output now supports embedded structure types. docker/buildx#946
- buildx rm now accepts a new --all-inactive flag for removing all builders that are not currently running. docker/buildx#885
- Fixes for reading the Dockerfile from stdin with -f -. docker/buildx#864
- --iidfile now always writes the image config digest independently of the driver being used (use --metadata-file for the digest). docker/buildx#980
- Fixes for the docker driver. docker/buildx#989
- Fixes for the du command. docker/buildx#867
- Fixed UsernsMode when using a rootless container. docker/buildx#887
For more details, see the complete release notes in the Buildx GitHub repository.
2021-08-25
- Fixes related to .dockerignore handling. docker/buildx#858
- Fixed bake --print JSON output for the current group. docker/buildx#857
For more details, see the complete release notes in the Buildx GitHub repository.
2021-11-10
- Updates to the docker-container and kubernetes drivers. docker/buildx#787
- Build command now supports the --ulimit flag for feature parity. docker/buildx#800
- Build command now supports the --shm-size flag for feature parity. docker/buildx#790
- Build command now supports --quiet for feature parity. docker/buildx#740
- Build command now supports the --cgroup-parent flag for feature parity. docker/buildx#814
- Bake now supports the BAKE_LOCAL_PLATFORM builtin variable. docker/buildx#748
- Bake now supports the x-bake extension field in Compose files. docker/buildx#721
- The kubernetes driver now supports colon-separated KUBECONFIG. docker/buildx#761
- The kubernetes driver now supports setting the BuildKit config file with --config. docker/buildx#682
- The kubernetes driver now supports installing QEMU emulators with driver-opt. docker/buildx#682
- Updates to the buildx imagetools command. docker/buildx#825
- New buildx create --bootstrap flag. docker/buildx#692
- New registry:insecure output option for multi-node pushes. docker/buildx#825
- Updates to --print. docker/buildx#720
- The docker driver now dials the build session over HTTP for better performance. docker/buildx#804
- Fixed using --iidfile together with a multi-node push. docker/buildx#826
- Using --push in Bake no longer clears other image export options in the file. docker/buildx#773
- Fixes for buildx bake when the https protocol was used. docker/buildx#822
- Fixed handling of the --builder flag for commands that don't use it. docker/buildx#818
For more details, see the complete release notes in the Buildx GitHub repository.
2021-08-30
For more details, see the complete release notes in the Buildx GitHub repository.
2021-08-21
For more details, see the complete release notes in the Buildx GitHub repository.
2021-07-30
- Fixes for ConfigFile parsing of Compose files with Bake. docker/buildx#704
For more details, see the complete release notes in the Buildx GitHub repository.
2021-07-16
- Support for building with the GitHub Actions cache backend: --cache-to type=gha and --cache-from type=gha. docker/buildx#535
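A sketch of how this might look inside a GitHub Actions job, where the gha backend picks up the runner's cache service token from the environment (mode=max is an optional assumption here):
$ docker buildx build --cache-from type=gha --cache-to type=gha,mode=max .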
- A new --metadata-file flag has been added to the build and Bake commands, allowing you to save build result metadata in JSON format. docker/buildx#605
- The kubernetes driver now supports defining resources/limits. docker/buildx#618
- The docker-container driver now keeps BuildKit state in a volume, enabling updates while keeping state. docker/buildx#672
- Rootless builders now use the moby/buildkit:buildx-stable-1-rootless image. docker/buildx#480
- The imagetools create command now correctly merges the JSON descriptor with the old one. docker/buildx#592
- --network=none no longer requires extra security entitlements. docker/buildx#531
For more details, see the complete release notes in the Buildx GitHub repository.
2020-12-15
- Support for --platform on buildx create outside the kubernetes driver. docker/buildx#475
For more details, see the complete release notes in the Buildx GitHub repository.
2020-12-15
- The docker driver now supports the --push flag. docker/buildx#442
- New BUILDX_CONFIG environment variable allows users to keep buildx state separate from the Docker config. docker/buildx#385
- New BUILDKIT_MULTI_PLATFORM build arg allows forcing the build to return multi-platform objects even if only one --platform is specified. docker/buildx#467
- Allow --append to be used with the kubernetes driver. docker/buildx#370
- Updates to --debug. docker/buildx#389
- Fixes for the kubernetes driver. docker/buildx#368 docker/buildx#460
- Fixes for the docker-container driver. docker/buildx#462
- Allow the --builder flag to switch to the default instance. docker/buildx#425
- New BUILDX_NO_DEFAULT_LOAD config value. docker/buildx#390
- Unsupported quiet option now produces a warning. docker/buildx#403
For more details, see the complete release notes in the Buildx GitHub repository.
2020-08-22
- Support for the cacheonly exporter. docker/buildx#337
- Updated go-cty to pull in more stdlib functions. docker/buildx#277
- --builder is now wired from root options. docker/buildx#321
For more details, see the complete release notes in the Buildx GitHub repository.
2020-05-01
For more details, see the complete release notes in the Buildx GitHub repository.
2020-04-30
- New kubernetes driver. docker/buildx#167
- New --builder flag to override the builder instance for a single command. docker/buildx#246
- New prune and du commands for managing local builder cache. docker/buildx#249
- New pull and no-cache options for HCL targets. docker/buildx#165
- Bake now supports --load and --push. docker/buildx#164
- Support for driver-opt. docker/buildx#170
For more details, see the complete release notes in the Buildx GitHub repository.
2019-09-27
- Fixed reading the Dockerfile from stdin (build -f -). docker/buildx#153
For more details, see the complete release notes in the Buildx GitHub repository.
2019-08-02
- Support for setting buildkitd daemon flags. docker/buildx#102
- Updates to create. docker/buildx#122
- Support for --no-cache and --pull. docker/buildx#118
- Support for build --allow. docker/buildx#104
- Fixed an issue where --build-arg foo would not read foo from the environment. docker/buildx#116
For more details, see the complete release notes in the Buildx GitHub repository.
2019-05-30
For more details, see the complete release notes in the Buildx GitHub repository.
2019-05-25
- Support for the BUILDKIT_PROGRESS environment variable. docker/buildx#69
- Fixes for the local platform. docker/buildx#70
For more details, see the complete release notes in the Buildx GitHub repository.
2019-04-25
For more details, see the complete release notes in the Buildx GitHub repository.
This document outlines the conversion of an application defined in a Compose file to ACI objects. At a high-level, each Compose deployment is mapped to a single ACI container group. Each service is mapped to a container in the container group. The Docker ACI integration does not allow scaling of services.
The table below lists supported Compose file fields and their ACI counterparts.
Legend: ✓ = supported, x = not supported.
Keys | Map | Notes |
---|---|---|
Service | ✓ |  |
service.build | x | Ignored. No image build support on ACI. |
service.cap_add, cap_drop | x |  |
service.command | ✓ | Overrides the container command. On ACI, specifying command overrides the image command and entrypoint, if the image has a command or entrypoint defined. |
service.configs | x |  |
service.cgroup_parent | x |  |
service.container_name | x | The service name is used as the container name on ACI. |
service.credential_spec | x |  |
service.deploy | ✓ |  |
service.deploy.endpoint_mode | x |  |
service.deploy.mode | x |  |
service.deploy.replicas | x | Only one replica is started for each service. |
service.deploy.placement | x |  |
service.deploy.update_config | x |  |
service.deploy.resources | ✓ | Restriction: ACI resource limits cannot be greater than the sum of resource reservations for all containers in the container group. Using container limits that are greater than container reservations will cause containers in the same container group to compete for resources. |
service.deploy.restart_policy | ✓ | One of: any, none, on-failure. Restriction: all services must have the same restart policy. The entire ACI container group will be restarted if needed. |
service.deploy.labels | x | ACI does not have container-level labels. |
service.devices | x |  |
service.depends_on | x |  |
service.dns | x |  |
service.dns_search | x |  |
service.domainname | ✓ | Mapped to the ACI DNSLabelName. Restriction: all services must specify the same domainname, if specified. domainname must be unique globally in <REGION>.azurecontainer.io. |
service.tmpfs | x |  |
service.entrypoint | x | ACI only supports overriding the container command. |
service.env_file | ✓ |  |
service.environment | ✓ |  |
service.expose | x |  |
service.extends | x |  |
service.external_links | x |  |
service.extra_hosts | x |  |
service.group_add | x |  |
service.healthcheck | ✓ |  |
service.hostname | x |  |
service.image | ✓ | Private images are accessible if the user is logged into the corresponding registry at deploy time. Users are automatically logged in to Azure Container Registry using their Azure login if possible. |
service.isolation | x |  |
service.labels | x | ACI does not have container-level labels. |
service.links | x |  |
service.logging | x |  |
service.network_mode | x |  |
service.networks | x | Communication between services is implemented by defining a mapping for each service in the shared /etc/hosts file of the container group. Each service can resolve names for other services, and the resulting network calls are redirected to localhost. |
service.pid | x |  |
service.ports | ✓ | Only symmetrical port mapping is supported in ACI. See Exposing ports. |
service.secrets | ✓ | See Secrets. |
service.security_opt | x |  |
service.stop_grace_period | x |  |
service.stop_signal | x |  |
service.sysctls | x |  |
service.ulimits | x |  |
service.userns_mode | x |  |
service.volumes | ✓ | Mapped to Azure file shares. See Persistent volumes. |
service.restart | x | Replaced by service.deploy.restart_policy. |
Volume | x |  |
driver | ✓ | See Persistent volumes. |
driver_opts | ✓ |  |
external | x |  |
labels | x |  |
Secret | x |  |
TBD | x |  |
Config | x |  |
TBD | x |  |
Container logs can be obtained for each container with docker logs <CONTAINER>.
The Docker ACI integration does not currently support aggregated logs for containers in a Compose application; see https://github.com/docker/compose-cli/issues/803.
When one or more services expose ports, the entire ACI container group will be exposed and will get a public IP allocated. As all services are mapped to containers in the same container group, a given port number can only be exposed by one service. ACI does not support port mapping, so the source and target ports defined in the Compose file must be the same.
When exposing ports, a service can also specify the domainname field to set a DNS hostname. domainname will be used to specify the ACI DNS Label Name, and the ACI container group will be reachable at <DOMAINNAME>.<REGION>.azurecontainer.io.
Docker volumes are mapped to Azure file shares. Only the long Compose volume format is supported, meaning that volumes must be defined in the volumes section.
Volumes are defined with a name, the driver field must be set to azure_file, and driver_opts must define the storage account and file share to use for the volume.
A service can then reference the volume by its name, and specify the target path to be mounted in the container.
services:
  myservice:
    image: nginx
    volumes:
      - mydata:/mount/testvolumes
volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount
The short volume syntax is not allowed for ACI volumes, as it was designed for local path bind mounting when running local containers. A Compose file can define several volumes, with different Azure file shares or storage accounts.
Credentials for storage accounts will be automatically fetched at deployment time using the Azure login to retrieve the storage account key for each storage account used.
Secrets can be defined in Compose files, and the secret files must be available at deploy time next to the Compose file.
The content of each secret file is made available inside selected containers, by default under /run/secrets/<SECRET_NAME>.
External secrets are not supported with the ACI integration.
services:
  nginx:
    image: nginx
    secrets:
      - mysecret1
  db:
    image: mysql
    secrets:
      - mysecret2
secrets:
  mysecret1:
    file: ./my_secret1.txt
  mysecret2:
    file: ./my_secret2.txt
The nginx container will have mysecret1 mounted as /run/secrets/mysecret1, and the db container will have mysecret2 mounted as /run/secrets/mysecret2.
A target can also be specified to set the name of the mounted file, or an absolute path at which to mount the secret file:
services:
  nginx:
    image: nginx
    secrets:
      - source: mysecret1
        target: renamedsecret1.txt
  db:
    image: mysql
    secrets:
      - source: mysecret1
        target: /mnt/dbmount/mysecretonmount1.txt
      - source: mysecret2
        target: /mnt/dbmount/mysecretonmount2.txt
secrets:
  mysecret1:
    file: ./my_secret1.txt
  mysecret2:
    file: ./my_secret2.txt
In this example, the nginx service will have its secret mounted to /run/secrets/renamedsecret1.txt, and db will have two files (mysecretonmount1.txt and mysecretonmount2.txt). Both of them will be mounted in the same folder (/mnt/dbmount/).
Note: Relative file paths are not allowed in the target.
Note: Secret files cannot be mounted in a folder next to other existing files.
CPU and memory reservations and limits can be set in the Compose file. Resource limits must be greater than reservations. In ACI, setting resource limits different from resource reservations will cause containers in the same container group to compete for resources. Resource limits cannot be greater than the total resource reservation for the container group. (Therefore, single containers cannot have resource limits different from resource reservations.)
services:
  db:
    image: mysql
    deploy:
      resources:
        reservations:
          cpus: '2'
          memory: 2G
        limits:
          cpus: '3'
          memory: 3G
  web:
    image: nginx
    deploy:
      resources:
        reservations:
          cpus: '1.5'
          memory: 1.5G
In this example, the db container will be allocated 2 CPUs and 2G of memory. It will be allowed to use up to 3 CPUs and 3G of memory, using some of the resources allocated to the web container. The web container will have its limits set to the same values as its reservations, by default.
A health check can be described in the healthcheck section of each service. This is translated to a LivenessProbe in ACI. If the health check fails, the container is considered unhealthy and is terminated.
For the container to be restarted automatically, the service needs a restart policy other than none. Note that the default restart policy, if one isn't set, is any.
services:
  web:
    image: nginx
    deploy:
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
Note: the test command can be a string or an array, optionally starting with NONE, CMD, or CMD-SHELL. In the ACI implementation, these prefixes are ignored.
Single containers can be executed on ACI with the docker run command.
A single container is executed in its own ACI container group, which contains a single container.
Containers can be listed with the docker ps command, and stopped and removed with docker stop <CONTAINER> and docker rm <CONTAINER>.
The table below lists supported docker run flags and their ACI counterparts.
Legend: ✓ = supported, x = not supported.
Flag | Map | Notes |
---|---|---|
--cpus | ✓ | See Container Resources. |
-d, --detach | ✓ | Detach from container logs when the container starts. By default, the command line stays attached and follows container logs. |
--domainname | ✓ | See Exposing ports. |
-e, --env | ✓ | Sets environment variables. |
--env-file | ✓ | Sets environment variables from an external file. |
--health-cmd | ✓ | Specify the healthcheck command. See Healthchecks. |
--health-interval | ✓ | Specify the healthcheck interval. |
--health-retries | ✓ | Specify the healthcheck number of retries. |
--health-start-period | ✓ | Specify the healthcheck initial delay. |
--health-timeout | ✓ | Specify the healthcheck timeout. |
-l, --label | x | Unsupported in the Docker ACI integration, due to limitations of ACI tags. |
-m, --memory | ✓ | See Container Resources. |
--name | ✓ | Provide a name for the container. The name must be unique within the ACI resource group. A name is generated by default. |
-p, --publish | ✓ | See Exposing ports. Only symmetrical port mapping is supported in ACI. |
--restart | ✓ | Restart policy. Must be one of: always, no, on-failure. |
--rm | x | Not supported, as ACI does not support auto-delete containers. |
-v, --volume | ✓ | See Persistent Volumes. |
You can expose one or more ports of a container with docker run -p <PORT>:<PORT>.
If ports are exposed when running a container, the corresponding ACI container group will be exposed with a public IP allocated and the required port(s) accessible.
Note: ACI does not support port mapping, so the same port number must be specified when using -p <PORT>:<PORT>.
When exposing ports, a container can also specify the --domainname flag to set a DNS hostname. domainname will be used to specify the ACI DNS Label Name, and the ACI container group will be reachable at <DOMAINNAME>.<REGION>.azurecontainer.io. domainname must be unique globally in <REGION>.azurecontainer.io.
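For example, a sketch using a hypothetical label myapp while an ACI context is active:
$ docker run -p 80:80 --domainname myapp nginx
The container would then be reachable at myapp.<REGION>.azurecontainer.io.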
Docker volumes are mapped to Azure file shares; each file share is part of an Azure storage account.
One or more volumes can be specified with docker run -v <STORAGE-ACCOUNT>/<FILESHARE>:<TARGET-PATH>.
A run command can use the --volume or -v flag several times for different volumes. The volumes can use the same or different storage accounts. The target paths for different volume mounts must be different and must not overlap.
There is no support for mounting a single file, or mounting a subfolder from an Azure file share.
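For example, a sketch mounting two file shares from a hypothetical storage account:
$ docker run \
    -v mystorageaccount/myfileshare:/mnt/data \
    -v mystorageaccount/othershare:/mnt/other \
    nginx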
Credentials for storage accounts will be automatically fetched at deployment time using the Azure login to retrieve the storage account key for each storage account used.
CPU and memory reservations can be set when running containers with docker run --cpus 1.5 --memory 2G.
It is not possible to set resource limits that differ from the resource reservation on single containers. ACI allows setting resource limits for containers in a container group, but these limits must stay within the reserved resources for the entire group. For a single container deployed in a container group, the resource limits must be equal to the resource reservation.
You can view container logs with the command docker logs <CONTAINER-ID>.
You can follow logs with the --follow (-f) option.
When running a container with docker run, by default the command line stays attached to container logs when the container starts. Use docker run --detach to not follow logs once the container starts.
Note: Following ACI logs may have display issues, especially when resizing a terminal that is following container logs. This is because ACI provides raw log pulling but no streaming of logs. Logs are effectively pulled every 2 seconds when following logs.
A health check can be described using the flags prefixed by --health-. This is translated into a LivenessProbe for ACI. If the health check fails, the container is considered unhealthy and is terminated.
For the container to be restarted automatically, it must be run with a restart policy (set by the --restart flag) other than no. Note that the default restart policy, if one isn't set, is no (docker run defaults to no restart policy).
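A sketch combining both, with an illustrative endpoint and interval:
$ docker run -p 80:80 --restart on-failure \
    --health-cmd "curl -f http://localhost:80" --health-interval 10s nginx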
The Docker Azure Integration enables developers to use native Docker commands to run applications in Azure Container Instances (ACI) when building cloud-native applications. The new experience provides a tight integration between Docker Desktop and Microsoft Azure allowing developers to quickly run applications using the Docker CLI or VS Code extension, to switch seamlessly from local development to cloud deployment.
In addition, the integration between Docker and Microsoft developer technologies allows developers to use the Docker CLI to:
Also see the full list of container features supported by ACI and full list of compose features supported by ACI.
To deploy Docker containers on Azure, you must meet the following requirements:
- Download and install the latest version of Docker Desktop. Alternatively, install the Docker Compose CLI for Linux.
- Ensure you have an Azure subscription. You can get started with an Azure free account.
Docker not only runs containers locally, but also enables developers to seamlessly deploy Docker containers on ACI using docker run, or deploy multi-container applications defined in a Compose file using the docker compose up command.
The following sections contain instructions on how to deploy your Docker containers on ACI. Also see the full list of container features supported by ACI.
Run the following commands to log into Azure:
$ docker login azure
This opens your web browser and prompts you to enter your Azure login credentials. If the Docker CLI cannot open a browser, it falls back to the Azure device code flow and lets you connect manually. Note that the Azure command line login is separate from the Docker CLI Azure login.
Alternatively, you can log in without interaction (typically in scripts or continuous integration scenarios), using an Azure Service Principal, with docker login azure --client-id xx --client-secret yy --tenant-id zz
Note
Logging in through an Azure Service Principal obtains an access token valid for a short period (typically 1h), and it does not allow you to automatically and transparently refresh this token. You must manually log in again when the access token has expired.
You can also use the --tenant-id option alone to specify a tenant, if you have several available in Azure.
After you have logged in, you need to create a Docker context associated with ACI to deploy containers in ACI. Creating an ACI context requires an Azure subscription, a resource group, and a region. For example, let us create a new context called myacicontext:
$ docker context create aci myacicontext
This command automatically uses your Azure login credentials to identify your subscription IDs and resource groups. You can then interactively select the subscription and group that you would like to use. If you prefer, you can specify these options in the CLI using the following flags: --subscription-id, --resource-group, and --location.
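For example, a non-interactive sketch with placeholder values:
$ docker context create aci myacicontext \
    --subscription-id 00000000-0000-0000-0000-000000000000 \
    --resource-group myresourcegroup \
    --location eastus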
If you don't have any existing resource groups in your Azure account, the docker context create aci myacicontext command creates one for you. You don't have to specify any additional options to do this.
After you have created an ACI context, you can list your Docker contexts by running the docker context ls command:
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
myacicontext aci myResourceGroupGTA@eastus
default * moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
Now that you've logged in and created an ACI context, you can start using Docker commands to deploy containers on ACI.
There are two ways to use your new ACI context. You can use the --context flag with the Docker command to specify that you would like to run the command using your newly created ACI context.
$ docker --context myacicontext run -p 80:80 nginx
Or, you can change context using docker context use to select the ACI context to be your focus for running Docker commands. For example, we can use the docker context use command to deploy an Nginx container:
$ docker context use myacicontext
$ docker run -p 80:80 nginx
After you've switched to the myacicontext context, you can use docker ps to list your containers running on ACI.
In the case of the demonstration Nginx container started above, the result of the ps command will display, in the "PORTS" column, the IP address and port on which the container is running. For example, it may show 52.154.202.35:80->80/tcp, and you can view the Nginx welcome page by browsing http://52.154.202.35.
To view logs from your container, run:
$ docker logs <CONTAINER_ID>
To execute a command in a running container, run:
$ docker exec -t <CONTAINER_ID> COMMAND
To stop and remove a container from ACI, run:
$ docker stop <CONTAINER_ID>
$ docker rm <CONTAINER_ID>
You can remove containers using docker rm. To remove a running container, you must use the --force flag, or stop the container using docker stop before removing it.
Note
The semantics of restarting a container on ACI are different from those when using a local Docker context for local development. On ACI, the container will be reset to its initial state and started on a new node. This includes the container's filesystem, so all state that is not stored in a volume will be lost on restart.
You can also deploy and manage multi-container applications defined in Compose files to ACI using the docker compose command.
All containers in the same Compose application are started in the same container group. Service discovery between the containers works using the service name specified in the Compose file.
Name resolution between containers is achieved by writing service names in the /etc/hosts file that is shared automatically by all containers in the container group.
Also see the full list of compose features supported by ACI.
Ensure you are using your ACI context. You can do this either by specifying the --context myacicontext flag or by setting the default context using the command docker context use myacicontext.
Run docker compose up and docker compose down to start and then stop a full Compose application.
By default, docker compose up uses the docker-compose.yaml file in the current folder. You can specify the working directory using the --workdir flag, or specify the Compose file directly using docker compose --file mycomposefile.yaml up.
You can also specify a name for the Compose application using the --project-name flag during deployment. If no name is specified, a name will be derived from the working directory.
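For example, with a hypothetical project name:
$ docker compose --project-name myapp up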
Containers started as part of Compose applications will be displayed along with single containers when using docker ps. Their container ID will be of the format: <COMPOSE-PROJECT>_<SERVICE>.
These containers cannot be stopped, started, or removed independently since they are all part of the same ACI container group.
You can view each container's logs with docker logs. You can list deployed Compose applications with docker compose ls. This will list only Compose applications, not single containers started with docker run. You can remove a Compose application with docker compose down.
Note
The current Docker Azure integration does not allow fetching a combined log stream from all the containers that make up the Compose application.
From a deployed Compose application, you can update the application by re-deploying it with the same project name: docker compose --project-name PROJECT up.
Updating an application means the ACI node will be reused, and the application will keep the same IP address that was previously allocated to expose ports, if any. ACI has some limitations on what can be updated in an existing application (for example, you will not be able to change CPU/memory reservations); in these cases, you need to deploy a new application from scratch.
Updating is the default behavior if you invoke docker compose up on an already deployed Compose file, as the Compose project name is derived by default from the directory where the Compose file is located. You need to explicitly execute docker compose down before running docker compose up again in order to totally reset a Compose application.
Single containers and Compose applications can be removed from ACI with the docker prune command. The docker prune command removes deployments that are not currently running. To remove running deployments, you can specify --force. The --dry-run option lists deployments that are planned for removal, but it doesn't actually remove them.
$ ./bin/docker --context acicontext prune --dry-run --force
Resources that would be deleted:
my-application
Total CPUs reclaimed: 2.01, total memory reclaimed: 2.30 GB
Single containers and Compose applications can optionally expose ports.
For single containers, this is done using the --publish (-p) flag of the docker run command: docker run -p 80:80 nginx.
For Compose applications, you must specify exposed ports in the Compose file service definition:
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
Note
ACI does not allow port mapping (that is, changing port number while exposing port). Therefore, the source and target ports must be the same when deploying to ACI.
All containers in the same Compose application are deployed in the same ACI container group. Different containers in the same Compose application cannot expose the same port when deployed to ACI.
By default, when exposing ports for your application, a random public IP address is associated with the container group supporting the deployed application (single container or Compose application). This IP address can be obtained when listing containers with docker ps or using docker inspect.
In addition to exposing ports on a random IP address, you can specify a DNS label name to expose your application on an FQDN of the form <NAME>.region.azurecontainer.io.
You can set this name with the --domainname flag when performing a docker run, or by using the domainname field in the Compose file when performing a docker compose up:
services:
  nginx:
    image: nginx
    domainname: "myapp"
    ports:
      - "80:80"
Note
The domain of a Compose application can only be set once. If you specify the domainname for several services, the value must be identical. The FQDN <DOMAINNAME>.region.azurecontainer.io must be available.
You can deploy containers or Compose applications that use persistent data stored in volumes. Azure File Share can be used to support volumes for ACI containers.
Using an existing Azure file share with storage account name mystorageaccount and file share name myfileshare, you can specify a volume in your deployment run command as follows:
$ docker run -v mystorageaccount/myfileshare:/target/path myimage
The runtime container will see the file share content in /target/path.
In a Compose application, the volume specification must use the following syntax in the Compose file:
services:
  myservice:
    image: nginx
    volumes:
      - mydata:/mount/testvolumes
volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount
Note
The volume short syntax in Compose files cannot be used, as it is aimed at volume definitions for local bind mounts. Using the volume driver and driver option syntax in Compose files makes the volume definition much clearer.
In single or multi-container deployments, the Docker CLI will use your Azure login to fetch the key to the storage account, and provide this key with the container deployment information, so that the container can access the volume.
Volumes can be used from any file share in any storage account you have access to with your Azure login. You can specify rw (read/write) or ro (read only) when mounting the volume (rw is the default).
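For example, a sketch of a read-only mount, assuming the usual :ro suffix on the -v flag applies here:
$ docker run -v mystorageaccount/myfileshare:/target/path:ro myimage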
To create a volume that you can use in containers or Compose applications when using your ACI Docker context, use the docker volume create command, and specify an Azure storage account name and the file share name:
$ docker --context aci volume create test-volume --storage-account mystorageaccount
[+] Running 2/2
 ⠿ mystorageaccount  Created  26.2s
 ⠿ test-volume       Created   0.9s
mystorageaccount/test-volume
By default, if the storage account does not already exist, this command creates a new storage account using Standard LRS as the default SKU, and the resource group and location associated with your Docker ACI context.
If you specify an existing storage account, the command creates a new file share in the existing account:
$ docker --context aci volume create test-volume2 --storage-account mystorageaccount
[+] Running 2/2
 ⠿ mystorageaccount  Use existing  0.7s
 ⠿ test-volume2      Created       0.7s
mystorageaccount/test-volume2
Alternatively, you can create an Azure storage account or a file share using the Azure portal or the az command line.
You can also list volumes that are available for use in containers or Compose applications:
$ docker --context aci volume ls
ID DESCRIPTION
mystorageaccount/test-volume Fileshare test-volume in mystorageaccount storage account
mystorageaccount/test-volume2 Fileshare test-volume2 in mystorageaccount storage account
To delete a volume and the corresponding Azure file share, use the volume rm command:
$ docker --context aci volume rm mystorageaccount/test-volume
mystorageaccount/test-volume
This permanently deletes the Azure file share and all its data.
When deleting a volume in Azure, the command checks whether the specified file share is the only file share available in the storage account. If the storage account was created with the docker volume create command, docker volume rm also deletes the storage account when it does not have any file shares remaining.
If you are using a storage account created without the docker volume create command (through the Azure portal or with the az command line, for example), docker volume rm does not delete the storage account, even when it has zero remaining file shares.
When using docker run, you can pass the environment variables to ACI containers using the --env flag.
For Compose applications, you can specify the environment variables in the Compose file with the environment or env_file service field, or with the --environment command line flag.
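For example (the variable name and value are illustrative):
$ docker run --env MYVAR=myvalue myimage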
You can specify container health checks using either the --health- prefixed flags with docker run, or in a Compose file with the healthcheck section of the service.
Health checks are converted to ACI LivenessProbes. ACI runs the health check command periodically, and if it fails, the container will be terminated.
Health checks must be used in addition to restart policies to ensure the container is then restarted on termination. The default restart policy for docker run is no, which will not restart the container. The default restart policy for Compose is any, which will always try restarting the service containers.
Example using docker run:
$ docker --context acicontext run -p 80:80 --restart always --health-cmd "curl http://localhost:80" --health-interval 3s nginx
Example using Compose files:
services:
  web:
    image: nginx
    deploy:
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80"]
      interval: 10s
You can deploy private images to ACI that are hosted by any container registry. You need to log into the relevant registry using docker login before running docker run or docker compose up. The Docker CLI will fetch your registry login for the deployed images and send the credentials along with the image deployment information to ACI.
In the case of the Azure Container Registry, the command line will try to automatically log you into ACR from your Azure login. You don't need to manually log in to the ACR registry first, if your Azure login has access to the ACR.
You can create several Docker contexts associated with ACI. Each context must be associated with a unique Azure resource group. This allows you to use Docker contexts as namespaces. You can switch between namespaces using docker context use <CONTEXT>.
When you run the docker ps command, it only lists containers in your current Docker context. There won't be any contention in container names or Compose application names between two Docker contexts.
The Docker Compose CLI adds support for running and managing containers on Azure Container Instances (ACI).
You can install the new CLI using the install script:
$ curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
You can download the Docker ACI Integration CLI from the latest release page. You will then need to make it executable:
$ chmod +x docker-aci
To enable using the local Docker Engine and to use existing Docker contexts, you must have the existing Docker CLI as com.docker.cli somewhere in your PATH. You can do this by creating a symbolic link from the existing Docker CLI:
$ ln -s /path/to/existing/docker /directory/in/PATH/com.docker.cli
Note
The PATH environment variable is a colon-separated list of directories with priority from left to right. You can view it using echo $PATH. You can find the path to the existing Docker CLI using which docker. You may need root permissions to make this link.
On a fresh install of Ubuntu 20.04 with Docker Engine already installed:
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ which docker
/usr/bin/docker
$ sudo ln -s /usr/bin/docker /usr/local/bin/com.docker.cli
You can verify that this is working by checking that the new CLI works with the default context:
$ ./docker-aci --context default ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ echo $?
0
To make this CLI with ACI integration your default Docker CLI, you must move it to a directory in your PATH with higher priority than the existing Docker CLI.
Again, on a fresh Ubuntu 20.04:
$ which docker
/usr/bin/docker
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ sudo mv docker-aci /usr/local/bin/docker
$ which docker
/usr/local/bin/docker
$ docker version
...
Azure integration 0.1.4
...
After you have installed the Docker ACI Integration CLI, run --help to see the current list of commands.
To remove the Docker Azure Integration CLI, you need to remove the binary you downloaded and com.docker.cli from your PATH. If you installed using the script, this can be done as follows:
$ sudo rm /usr/local/bin/docker /usr/local/bin/com.docker.cli
Thank you for trying out Docker Azure Integration. Your feedback is very important to us. Let us know your feedback by creating an issue in the compose-cli GitHub repository.