The buildx Docker container driver allows creation of a managed and customizable BuildKit environment in a dedicated Docker container.
Using the Docker container driver has several advantages over the default Docker driver, such as support for cache export, multi-platform builds, and a configurable BuildKit version.
Run the following command to create a new builder, named `container`, that uses the Docker container driver:

$ docker buildx create \
  --name container \
  --driver=docker-container \
  --driver-opt=[key=value,...]
container
The following table describes the available driver-specific options that you can pass to `--driver-opt`:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `image` | String | | Sets the image to use for running BuildKit. |
| `network` | String | | Sets the network mode for running the BuildKit container. |
| `cgroup-parent` | String | `/docker/buildx` | Sets the cgroup parent of the BuildKit container if Docker is using the `cgroupfs` driver. |
| `env.<key>` | String | | Sets the environment variable `key` to the specified `value` in the BuildKit container. |
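Several of these options can be combined in a single `--driver-opt` value. The following sketch composes such a value in shell; the BuildKit image tag, network mode, and environment variable shown are illustrative choices, not required settings:

```shell
# Compose a comma-separated --driver-opt value for the docker-container
# driver. All option values below are illustrative.
opts="image=moby/buildkit:buildx-stable-1"
opts="$opts,network=host"
opts="$opts,env.BUILDKITD_FLAGS=--debug"
echo "docker buildx create --name container --driver docker-container --driver-opt $opts"
```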
When you run a build, Buildx pulls the specified `image` (by default, `moby/buildkit`). When the container has started, Buildx submits the build to the containerized build server.
$ docker buildx build -t <image> --builder=container .
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 1.9s done
#1 creating container buildx_buildkit_container0
#1 creating container buildx_buildkit_container0 0.5s done
#1 DONE 2.4s
...
Unlike when using the default `docker` driver, images built with the `docker-container` driver must be explicitly loaded into the local image store. Use the `--load` flag:
$ docker buildx build --load -t <image> --builder=container .
...
=> exporting to oci image format 7.7s
=> => exporting layers 4.9s
=> => exporting manifest sha256:4e4ca161fa338be2c303445411900ebbc5fc086153a0b846ac12996960b479d3 0.0s
=> => exporting config sha256:adf3eec768a14b6e183a1010cb96d91155a82fd722a1091440c88f3747f1f53f 0.0s
=> => sending tarball 2.8s
=> importing to docker
The image becomes available in the image store when the build finishes:
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<image> latest adf3eec768a1 2 minutes ago 197MB
The `docker-container` driver supports cache persistence, as it stores all the BuildKit state and related cache in a dedicated Docker volume.

To persist the `docker-container` driver's cache, even after recreating the driver using `docker buildx rm` and `docker buildx create`, you can destroy the builder using the `--keep-state` flag:
For example, to create a builder named `container` and then remove it while persisting state:
# setup a builder
$ docker buildx create --name=container --driver=docker-container --use --bootstrap
container
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
container * docker-container
container0 desktop-linux running v0.10.5 linux/amd64
$ docker volume ls
DRIVER VOLUME NAME
local buildx_buildkit_container0_state
# remove the builder while persisting state
$ docker buildx rm --keep-state container
$ docker volume ls
DRIVER VOLUME NAME
local buildx_buildkit_container0_state
# the newly created driver with the same name will have all the state of the previous one!
$ docker buildx create --name=container --driver=docker-container --use --bootstrap
container
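The volume shown above follows a predictable naming scheme, `buildx_buildkit_<node>_state`, where the default node name is the builder name with a numeric suffix. A small sketch, assuming that default naming convention holds for your Buildx version:

```shell
# Derive the expected state volume name for a builder's first node.
# Assumes the default <builder>0 node naming used by buildx.
builder="container"
node="${builder}0"
volume="buildx_buildkit_${node}_state"
echo "$volume"
```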
The `docker-container` driver supports using QEMU (user mode) to build non-native platforms. Use the `--platform` flag to specify which architectures you want to build for.
For example, to build a Linux image for `amd64` and `arm64`:
$ docker buildx build \
--builder=container \
--platform=linux/amd64,linux/arm64 \
-t <registry>/<image> \
--push .
Warning
QEMU performs full-system emulation of non-native platforms, which is much slower than native builds. Compute-heavy tasks like compilation and compression/decompression will likely take a large performance hit.
You can customize the network that the builder container uses. This is useful if you need to use a specific network for your builds.
For example, let's create a network named `foonet`:
$ docker network create foonet
Now create a `docker-container` builder that will use this network:
$ docker buildx create --use \
--name mybuilder \
--driver docker-container \
--driver-opt "network=foonet"
Boot and inspect `mybuilder`:
$ docker buildx inspect --bootstrap
Inspect the builder container and see what network is being used:
$ docker inspect buildx_buildkit_mybuilder0 --format={{.NetworkSettings.Networks}}
map[foonet:0xc00018c0c0]
For more information on the Docker container driver, see the buildx reference.
The Buildx Docker driver is the default driver. It uses the BuildKit server components built directly into the Docker engine. The Docker driver requires no configuration.
Unlike the other drivers, builders using the Docker driver can't be manually created. They're only created automatically from the Docker context.
Images built with the Docker driver are automatically loaded to the local image store.
# The Docker driver is used by buildx by default
docker buildx build .
It's not possible to configure which BuildKit version to use, or to pass any additional BuildKit parameters to a builder using the Docker driver. The BuildKit version and parameters are preset by the Docker engine internally.
If you need additional configuration and flexibility, consider using the Docker container driver.
For more information on the Docker driver, see the buildx reference.
Buildx drivers are configurations for how and where the BuildKit backend runs. Driver settings are customizable and allow fine-grained control of the builder. Buildx supports the following drivers:
- `docker`: uses the BuildKit library bundled into the Docker daemon.
- `docker-container`: creates a dedicated BuildKit container using Docker.
- `kubernetes`: creates BuildKit pods in a Kubernetes cluster.
- `remote`: connects directly to a manually managed BuildKit daemon.
Different drivers support different use cases. The default `docker` driver prioritizes simplicity and ease of use. It has limited support for advanced features like caching and output formats, and isn't configurable. Other drivers provide more flexibility and are better at handling advanced scenarios.
The following table outlines some differences between drivers.
| Feature | `docker` | `docker-container` | `kubernetes` | `remote` |
|---|---|---|---|---|
| Automatically load image | ✓ | | | |
| Cache export | Inline only | ✓ | ✓ | ✓ |
| Tarball output | | ✓ | ✓ | ✓ |
| Multi-arch images | | ✓ | ✓ | ✓ |
| BuildKit configuration | | ✓ | ✓ | Managed externally |
Use `docker buildx ls` to see builder instances available on your system, and the drivers they're using.
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
Depending on your setup, you may find multiple builders in your list that use the Docker driver. For example, on a system that runs both a manually installed version of dockerd and Docker Desktop, you might see the following output from `docker buildx ls`:
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
desktop-linux * docker
desktop-linux desktop-linux running 20.10.17 linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
This is because the Docker driver builders are automatically pulled from the available Docker contexts. When you add new contexts using `docker context create`, these will appear in your list of buildx builders.
The asterisk (`*`) next to the builder name indicates that this is the selected builder, which gets used by default unless you specify a builder using the `--builder` option.
Use the `docker buildx create` command to create a builder, and specify the driver using the `--driver` option.
$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>
This creates a new builder instance with a single build node. After creating a new builder you can also append new nodes to it.
To use a remote node for your builders, you can set the `DOCKER_HOST` environment variable or provide a remote context name when creating the builder.
To switch between different builders, use the `docker buildx use <name>` command. After running this command, the build commands will automatically use this builder.
Read about each of the Buildx drivers to learn about how they work and how to use them:
The Buildx Kubernetes driver allows connecting your local development or CI environments to your Kubernetes cluster to allow access to more powerful and varied compute resources.
Run the following command to create a new builder, named `kube`, that uses the Kubernetes driver:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=[key=value,...]
The following table describes the available driver-specific options that you can pass to `--driver-opt`:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `image` | String | | Sets the image to use for running BuildKit. |
| `namespace` | String | Namespace in current Kubernetes context | Sets the Kubernetes namespace. |
| `replicas` | Integer | 1 | Sets the number of Pod replicas to create. See scaling BuildKit. |
| `requests.cpu` | CPU units | | Sets the request CPU value specified in units of Kubernetes CPU. For example `requests.cpu=100m` or `requests.cpu=2`. |
| `requests.memory` | Memory size | | Sets the request memory value specified in bytes or with a valid suffix. For example `requests.memory=500Mi` or `requests.memory=4G`. |
| `limits.cpu` | CPU units | | Sets the limit CPU value specified in units of Kubernetes CPU. For example `limits.cpu=100m` or `limits.cpu=2`. |
| `limits.memory` | Memory size | | Sets the limit memory value specified in bytes or with a valid suffix. For example `limits.memory=500Mi` or `limits.memory=4G`. |
| `nodeselector` | CSV string | | Sets the pod's `nodeSelector` label(s). See node assignment. |
| `tolerations` | CSV string | | Configures the pod's taint toleration. See node assignment. |
| `rootless` | `true`, `false` | `false` | Run the container as a non-root user. See rootless mode. |
| `loadbalance` | `sticky`, `random` | `sticky` | Load-balancing strategy. If set to `sticky`, the pod is chosen using the hash of the context path. |
| `qemu.install` | `true`, `false` | | Install QEMU emulation for multi-platform support. See QEMU. |
| `qemu.image` | String | `tonistiigi/binfmt:latest` | Sets the QEMU emulation image. See QEMU. |
One of the main advantages of the Kubernetes driver is that you can scale the number of builder replicas up and down to handle increased build load. Scaling is configurable using the following driver options:
- `replicas=N`: scales the number of BuildKit pods to the desired size. By default, only a single pod is created. Increasing the number of replicas lets you take advantage of multiple nodes in your cluster.
- `requests.cpu`, `requests.memory`, `limits.cpu`, `limits.memory`: these options allow requesting and limiting the resources available to each BuildKit pod, as described in the official Kubernetes documentation.
For example, to create 4 replica BuildKit pods:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,replicas=4
Listing the pods, you get this:
$ kubectl -n buildkit get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kube0 4/4 4 4 8s
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-48ld2 1/1 Running 0 8s
kube0-6977cdcb75-rkc6b 1/1 Running 0 8s
kube0-6977cdcb75-vb4ks 1/1 Running 0 8s
kube0-6977cdcb75-z4fzs 1/1 Running 0 8s
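The resource options can be combined with `replicas` into one `--driver-opt` value. A sketch assembling such a value; the replica count and resource figures are illustrative:

```shell
# Build a --driver-opt value combining replicas with resource
# requests and limits. All values below are illustrative.
opts="namespace=buildkit,replicas=4"
opts="$opts,requests.cpu=500m,requests.memory=500Mi"
opts="$opts,limits.cpu=2,limits.memory=4G"
echo "docker buildx create --driver=kubernetes --driver-opt $opts"
```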
Additionally, you can use the `loadbalance=(sticky|random)` option to control the load-balancing behavior when there are multiple replicas. `random` selects random nodes from the node pool, providing an even workload distribution across replicas. `sticky` (the default) attempts to connect the same build performed multiple times to the same node each time, ensuring better use of local cache.
For more information on scalability, see the options for buildx create.
The Kubernetes driver allows you to control the scheduling of BuildKit pods using the `nodeSelector` and `tolerations` driver options.
The value of the `nodeSelector` parameter is a comma-separated string of key-value pairs, where the key is the node label and the value is the label text. For example:
"nodeselector=kubernetes.io/arch=arm64"
The `tolerations` parameter is a semicolon-separated list of taints. It accepts the same values as the Kubernetes manifest. Each `tolerations` entry specifies a taint key and the value, operator, or effect. For example:
"tolerations=key=foo,value=bar;key=foo2,operator=exists;key=foo3,effect=NoSchedule"
Due to quoting rules for shell commands, you must wrap the `nodeselector` and `tolerations` parameters in single quotes. You can even wrap all of `--driver-opt` in single quotes, for example:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
'--driver-opt="nodeselector=label1=value1,label2=value2","tolerations=key=key1,value=value1"'
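The separator rules for the `tolerations` value can be sketched as follows; the taint keys and values are illustrative:

```shell
# Assemble a tolerations value: entries are separated by semicolons,
# fields within an entry by commas. Keys and values are illustrative.
t1="key=foo,value=bar"
t2="key=foo2,operator=exists"
tolerations="tolerations=${t1};${t2}"
echo "$tolerations"
```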
The Buildx Kubernetes driver has support for creating multi-platform images, either using QEMU or by leveraging the native architecture of nodes.
Like the `docker-container` driver, the Kubernetes driver also supports using QEMU (user mode) to build images for non-native platforms. Include the `--platform` flag and specify which platforms you want to output to.
For example, to build a Linux image for `amd64` and `arm64`:
$ docker buildx build \
--builder=kube \
--platform=linux/amd64,linux/arm64 \
-t <user>/<image> \
--push .
Warning
QEMU performs full-system emulation of non-native platforms, which is much slower than native builds. Compute-heavy tasks like compilation and compression/decompression will likely take a large performance hit.
Using a custom BuildKit image or invoking non-native binaries in builds may require that you explicitly turn on QEMU using the `qemu.install` option when creating the builder:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,qemu.install=true
If you have access to cluster nodes of different architectures, the Kubernetes driver can take advantage of these for native builds. To do this, use the `--append` flag of `docker buildx create`.
First, create your builder with explicit support for a single architecture, for example `amd64`:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--platform=linux/amd64 \
--node=builder-amd64 \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=amd64"
This creates a Buildx builder named `kube`, containing a single builder node `builder-amd64`. Note that the Buildx concept of a node isn't the same as the Kubernetes concept of a node. A Buildx node in this case could connect multiple Kubernetes nodes of the same architecture together.
With the `kube` builder created, you can now introduce another architecture into the mix using `--append`. For example, to add `arm64`:
$ docker buildx create \
--append \
--bootstrap \
--name=kube \
--driver=kubernetes \
--platform=linux/arm64 \
--node=builder-arm64 \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=arm64"
If you list builders now, you should be able to see both nodes present:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
builder-amd64 kubernetes:///kube?deployment=builder-amd64&kubeconfig= running linux/amd64*, linux/amd64/v2, linux/amd64/v3, linux/386
builder-arm64 kubernetes:///kube?deployment=builder-arm64&kubeconfig= running linux/arm64*
You should now be able to build multi-arch images with `amd64` and `arm64` combined, by specifying those platforms together in your buildx command:
$ docker buildx build --builder=kube --platform=linux/amd64,linux/arm64 -t <user>/<image> --push .
You can repeat the `buildx create --append` command for as many architectures as you want to support.
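If you maintain several extra architectures, the repeated `--append` invocations can be generated in a loop. A sketch that only prints the commands; the architecture list and node-name pattern are illustrative:

```shell
# Print one `buildx create --append` command per extra architecture.
# The architectures and node names below are illustrative.
for arch in arm64 s390x; do
  echo "docker buildx create --append --name=kube --driver=kubernetes --platform=linux/${arch} --node=builder-${arch}"
done
```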
The Kubernetes driver supports rootless mode. For more information on how rootless mode works, and its requirements, see the rootless mode documentation.
To turn it on in your cluster, you can use the `rootless=true` driver option:
$ docker buildx create \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,rootless=true
This will create your pods without `securityContext.privileged`.
Requires Kubernetes version 1.19 or later. Using an Ubuntu host kernel is recommended.
This guide shows you how to set up a Kubernetes builder from scratch.

Prerequisites: access to your cluster through the `kubectl` command, with the `KUBECONFIG` environment variable set appropriately if necessary.
Create a `buildkit` namespace.
Creating a separate namespace helps keep your Buildx resources separate from other resources in the cluster.
$ kubectl create namespace buildkit
namespace/buildkit created
Create a new Buildx builder with the Kubernetes driver:
# Remember to specify the namespace in driver options
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit
List available Buildx builders using `docker buildx ls`:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
kube0-6977cdcb75-k9h9m running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default * docker
default default running linux/amd64, linux/386
Inspect the running pods created by the Buildx driver with `kubectl`:
$ kubectl -n buildkit get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kube0 1/1 1 1 32s
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-k9h9m 1/1 Running 0 32s
The buildx driver creates the necessary resources on your cluster in the specified namespace (in this case, `buildkit`), while keeping your driver configuration locally.
Use your new builder by including the `--builder` flag when running buildx commands. For example:
# Replace <registry> with your Docker username
# and <image> with the name of the image you want to build
docker buildx build \
--builder=kube \
-t <registry>/<image> \
--push .
That's it! You've now built an image from a Kubernetes pod, using Buildx!
For more information on the Kubernetes driver, see the buildx reference.
The Buildx remote driver allows for more complex custom build workloads, allowing you to connect to externally managed BuildKit instances. This is useful for scenarios that require manual management of the BuildKit daemon, or where a BuildKit daemon is exposed from another source.
$ docker buildx create \
--name remote \
--driver remote \
tcp://localhost:1234
The following table describes the available driver-specific options that you can pass to `--driver-opt`:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `key` | String | | Sets the TLS client key. |
| `cert` | String | | Absolute path to the TLS client certificate to present to `buildkitd`. |
| `cacert` | String | | Absolute path to the TLS certificate authority used for validation. |
| `servername` | String | Endpoint hostname | TLS server name used in requests. |
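The TLS options are typically passed together. A sketch assembling them from a certificate directory; the `.certs/client` layout follows the create-certs.sh convention used later in this guide, so adjust the paths to your setup:

```shell
# Assemble the TLS-related --driver-opt value for the remote driver.
# The .certs/client directory layout is an assumption.
certs=".certs/client"
opts="cacert=${certs}/ca.pem,cert=${certs}/cert.pem,key=${certs}/key.pem"
echo "docker buildx create --driver remote --driver-opt $opts tcp://localhost:1234"
```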
This guide shows you how to create a setup with a BuildKit daemon listening on a Unix socket, and have Buildx connect through it.
Ensure that BuildKit is installed.
For example, you can launch an instance of buildkitd with:
$ sudo ./buildkitd --group $(id -gn) --addr unix://$HOME/buildkitd.sock
Alternatively, see here for running buildkitd in rootless mode or here for examples of running it as a systemd service.
Check that you have a Unix socket that you can connect to.
$ ls -lh /home/user/buildkitd.sock
srw-rw---- 1 root user 0 May 5 11:04 /home/user/buildkitd.sock
Connect Buildx to it using the remote driver:
$ docker buildx create \
--name remote-unix \
--driver remote \
unix://$HOME/buildkitd.sock
List available builders with `docker buildx ls`. You should then see `remote-unix` among them:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
remote-unix remote
remote-unix0 unix:///home/.../buildkitd.sock running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default * docker
default default running linux/amd64, linux/386
You can switch to this new builder as the default using `docker buildx use remote-unix`, or specify it per build using `--builder`:
$ docker buildx build --builder=remote-unix -t test --load .
Remember that you need to use the `--load` flag if you want to load the build result into the Docker daemon.
This guide will show you how to create a setup similar to the `docker-container` driver, by manually booting a BuildKit Docker container and connecting to it using the Buildx remote driver. This procedure manually creates a container and accesses it via its exposed port. (You'd probably be better off just using the `docker-container` driver, which connects to BuildKit through the Docker daemon, but this is for illustration purposes.)
Generate certificates for BuildKit.
You can use the create-certs.sh script as a starting point. Note that while it's possible to expose BuildKit over TCP without using TLS, it's not recommended. Doing so allows arbitrary access to BuildKit without credentials.
With certificates generated in `.certs/`, start the container:
$ docker run -d --rm \
--name=remote-buildkitd \
--privileged \
-p 1234:1234 \
-v $PWD/.certs:/etc/buildkit/certs \
moby/buildkit:latest \
--addr tcp://0.0.0.0:1234 \
--tlscacert /etc/buildkit/certs/daemon/ca.pem \
--tlscert /etc/buildkit/certs/daemon/cert.pem \
--tlskey /etc/buildkit/certs/daemon/key.pem
This command starts a BuildKit container and exposes the daemon's port 1234 to localhost.
Connect to this running container using Buildx:
$ docker buildx create \
--name remote-container \
--driver remote \
--driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem,servername=<TLS_SERVER_NAME> \
tcp://localhost:1234
Alternatively, use the `docker-container://` URL scheme to connect to the BuildKit container without specifying a port:
$ docker buildx create \
--name remote-container \
--driver remote \
docker-container://remote-container
This guide will show you how to create a setup similar to the `kubernetes` driver by manually creating a BuildKit `Deployment`. While the `kubernetes` driver will do this under the hood, it might sometimes be desirable to scale BuildKit manually. Additionally, when executing builds from inside Kubernetes pods, the Buildx builder will need to be recreated from within each pod or copied between them.
Create a Kubernetes deployment of `buildkitd`, as per the instructions here.
Following the guide, create certificates for the BuildKit daemon and client using create-certs.sh, and create a deployment of BuildKit pods with a service that connects to them.
Assuming that the service is called `buildkitd`, create a remote builder in Buildx, ensuring that the listed certificate files are present:
$ docker buildx create \
--name remote-kubernetes \
--driver remote \
--driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem \
tcp://buildkitd.default.svc:1234
Note that this will only work internally, within the cluster, since the BuildKit setup guide only creates a ClusterIP service. To configure the builder to be accessible remotely, you can use an appropriately configured ingress, which is outside the scope of this guide.
To access the service remotely, use the port forwarding mechanism of `kubectl`:
$ kubectl port-forward svc/buildkitd 1234:1234
Then you can point the remote driver at `tcp://localhost:1234`.
Alternatively, you can use the `kube-pod://` URL scheme to connect directly to a BuildKit pod through the Kubernetes API. Note that this method only connects to a single pod in the deployment:
$ kubectl get pods --selector=app=buildkitd -o json | jq -r '.items[].metadata.name'
buildkitd-XXXXXXXXXX-xxxxx
$ docker buildx create \
--name remote-container \
--driver remote \
kube-pod://buildkitd-XXXXXXXXXX-xxxxx
The `image` exporter outputs the build result into a container image format. The `registry` exporter is identical, but it automatically pushes the result by setting `push=true`.
Build a container image using the `image` and `registry` exporters:
$ docker buildx build --output type=image[,parameters] .
$ docker buildx build --output type=registry[,parameters] .
The following table describes the available parameters that you can pass to `--output` for `type=image`:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | String | | Specify image name(s). |
| `push` | `true`, `false` | `false` | Push after creating the image. |
| `push-by-digest` | `true`, `false` | `false` | Push image without name. |
| `registry.insecure` | `true`, `false` | `false` | Allow pushing to insecure registry. |
| `dangling-name-prefix` | `<value>` | | Name image with `prefix@<digest>`, used for anonymous images. |
| `name-canonical` | `true`, `false` | | Add additional canonical name `name@<digest>`. |
| `compression` | `uncompressed`, `gzip`, `estargz`, `zstd` | `gzip` | Compression type, see compression. |
| `compression-level` | `0..22` | | Compression level, see compression. |
| `force-compression` | `true`, `false` | `false` | Forcefully apply compression, see compression. |
| `oci-mediatypes` | `true`, `false` | `false` | Use OCI media types in exporter manifests, see OCI media types. |
| `buildinfo` | `true`, `false` | `true` | Attach inline build info. |
| `buildinfo-attrs` | `true`, `false` | `false` | Attach inline build info attributes. |
| `unpack` | `true`, `false` | `false` | Unpack image after creation (for use with containerd). |
| `store` | `true`, `false` | `true` | Store the result images to the worker's image store (for example, containerd), and ensure that the image has all blobs in the content store. Ignored if the worker doesn't have an image store (when using OCI workers, for example). |
| `annotation.<key>` | String | | Attach an annotation with the respective `key` and `value` to the built image, see annotations. |
These exporters support adding OCI annotations using the `annotation.*` dot notation parameter. The following example sets the `org.opencontainers.image.title` annotation for a build:
$ docker buildx build \
--output "type=<type>,name=<registry>/<image>,annotation.org.opencontainers.image.title=<title>" .
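Multiple annotations can be chained in one `--output` value. A sketch composing such a value; the registry name and annotation values are illustrative:

```shell
# Compose an --output value with several annotation.* parameters.
# The image name and annotation values below are illustrative.
out="type=image,name=registry.example.com/app"
out="$out,annotation.org.opencontainers.image.title=MyApp"
out="$out,annotation.org.opencontainers.image.licenses=MIT"
echo "docker buildx build --output \"$out\" ."
```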
For more information about annotations, see BuildKit documentation.
For more information on the `image` or `registry` exporters, see the BuildKit README.