Buildx drivers are configurations for how and where the BuildKit backend runs. Driver settings are customizable and allow fine-grained control of the builder. Buildx supports the following drivers:
docker: uses the BuildKit library bundled into the Docker daemon.
docker-container: creates a dedicated BuildKit container using Docker.
kubernetes: creates BuildKit pods in a Kubernetes cluster.
remote: connects directly to a manually managed BuildKit daemon.
Different drivers support different use cases. The default docker driver prioritizes simplicity and ease of use. It has limited support for advanced features like caching and output formats, and isn't configurable. Other drivers provide more flexibility and are better at handling advanced scenarios.
The following table outlines some differences between drivers.
| Feature | docker | docker-container | kubernetes | remote |
|---|---|---|---|---|
| Automatically load image | ✅ |  |  |  |
| Cache export | Inline only | ✅ | ✅ | ✅ |
| Tarball output |  | ✅ | ✅ | ✅ |
| Multi-arch images |  | ✅ | ✅ | ✅ |
| BuildKit configuration |  | ✅ | ✅ | Managed externally |
Use docker buildx ls to see builder instances available on your system, and the drivers they're using.
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
Depending on your setup, you may find multiple builders in your list that use the Docker driver. For example, on a system that runs both a manually installed version of dockerd, as well as Docker Desktop, you might see the following output from docker buildx ls:
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
desktop-linux * docker
desktop-linux desktop-linux running 20.10.17 linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
This is because the Docker driver builders are automatically pulled from the available Docker Contexts. When you add new contexts using docker context create, these will appear in your list of buildx builders.
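For example, creating a context that points at a remote daemon over SSH adds a corresponding docker-driver builder (a sketch; the user and host names are hypothetical):
$ docker context create remote-host --docker "host=ssh://user@remote-host"
$ docker buildx ls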
The asterisk (*) next to the builder name indicates that this is the selected builder which gets used by default, unless you specify a builder using the --builder option.
Use the docker buildx create command to create a builder, and specify the driver using the --driver option.
$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>
This creates a new builder instance with a single build node. After creating a new builder you can also append new nodes to it.
To use a remote node for your builders, you can set the DOCKER_HOST environment variable or provide a remote context name when creating the builder.
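A minimal sketch of that pattern, creating a docker-container builder whose BuildKit container runs on a remote daemon reached over SSH (the user and host are hypothetical):
$ DOCKER_HOST=ssh://user@remote-host docker buildx create --name=remote-builder --driver=docker-container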
To switch between different builders, use the docker buildx use <name> command. After running this command, the build commands will automatically use this builder.
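For example, to switch to a builder named kube (like the one created later on this page):
$ docker buildx use kube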
Read about each of the Buildx drivers to learn about how they work and how to use them:
The Buildx Kubernetes driver lets you connect your local development or CI environments to your Kubernetes cluster, giving you access to more powerful and varied compute resources.
Run the following command to create a new builder, named kube, that uses the Kubernetes driver:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=[key=value,...]
The following table describes the available driver-specific options that you can pass to --driver-opt:
| Parameter | Type | Default | Description |
|---|---|---|---|
| image | String |  | Sets the image to use for running BuildKit. |
| namespace | String | Namespace in current Kubernetes context | Sets the Kubernetes namespace. |
| replicas | Integer | 1 | Sets the number of Pod replicas to create. See scaling BuildKit. |
| requests.cpu | CPU units |  | Sets the request CPU value specified in units of Kubernetes CPU. For example requests.cpu=100m or requests.cpu=2. |
| requests.memory | Memory size |  | Sets the request memory value specified in bytes or with a valid suffix. For example requests.memory=500Mi or requests.memory=4G. |
| limits.cpu | CPU units |  | Sets the limit CPU value specified in units of Kubernetes CPU. For example limits.cpu=100m or limits.cpu=2. |
| limits.memory | Memory size |  | Sets the limit memory value specified in bytes or with a valid suffix. For example limits.memory=500Mi or limits.memory=4G. |
| nodeselector | CSV string |  | Sets the pod's nodeSelector label(s). See node assignment. |
| tolerations | CSV string |  | Configures the pod's taint toleration. See node assignment. |
| rootless | true, false | false | Run the container as a non-root user. See rootless mode. |
| loadbalance | sticky, random | sticky | Load-balancing strategy. If set to sticky, the pod is chosen using the hash of the context path. |
| qemu.install | true, false |  | Install QEMU emulation for multi-platform support. See QEMU. |
| qemu.image | String | tonistiigi/binfmt:latest | Sets the QEMU emulation image. See QEMU. |
One of the main advantages of the Kubernetes driver is that you can scale the number of builder replicas up and down to handle increased build load. Scaling is configurable using the following driver options:
replicas=N: This scales the number of BuildKit pods to the desired size. By default, it only creates a single pod. Increasing the number of replicas lets you take advantage of multiple nodes in your cluster.
requests.cpu, requests.memory, limits.cpu, limits.memory: These options allow requesting and limiting the resources available to each BuildKit pod according to the official Kubernetes documentation here.
For example, to create 4 replica BuildKit pods:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,replicas=4
Listing the pods, you get this:
$ kubectl -n buildkit get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kube0 4/4 4 4 8s
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-48ld2 1/1 Running 0 8s
kube0-6977cdcb75-rkc6b 1/1 Running 0 8s
kube0-6977cdcb75-vb4ks 1/1 Running 0 8s
kube0-6977cdcb75-z4fzs 1/1 Running 0 8s
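You can combine replicas with resource requests and limits using the same driver options. The following is a sketch; the CPU and memory values are illustrative only:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,replicas=4,requests.cpu=1,requests.memory=2Gi,limits.cpu=2,limits.memory=4Gi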
Additionally, you can use the loadbalance=(sticky|random) option to control the load-balancing behavior when there are multiple replicas. random selects random nodes from the node pool, providing an even workload distribution across replicas. sticky (the default) attempts to connect the same build performed multiple times to the same node each time, ensuring better use of local cache.
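For example, to spread builds evenly across the four replicas created above (a sketch using the options already described):
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,replicas=4,loadbalance=random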
For more information on scalability, see the options for buildx create.
The Kubernetes driver allows you to control the scheduling of BuildKit pods using the nodeSelector and tolerations driver options.
The value of the nodeSelector parameter is a comma-separated string of key-value pairs, where the key is the node label and the value is the label text. For example:
"nodeselector=kubernetes.io/arch=arm64"
The tolerations parameter is a semicolon-separated list of taints. It accepts the same values as the Kubernetes manifest. Each tolerations entry specifies a taint key and the value, operator, or effect. For example:
"tolerations=key=foo,value=bar;key=foo2,operator=exists;key=foo3,effect=NoSchedule"
Due to quoting rules for shell commands, you must wrap the nodeselector and tolerations parameters in single quotes. You can even wrap all of --driver-opt in single quotes, for example:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
'--driver-opt="nodeselector=label1=value1,label2=value2","tolerations=key=key1,value=value1"'
The Buildx Kubernetes driver has support for creating multi-platform images, either using QEMU or by leveraging the native architecture of nodes.
Like the docker-container driver, the Kubernetes driver also supports using QEMU (user mode) to build images for non-native platforms. Include the --platform flag and specify which platforms you want to output to.
For example, to build a Linux image for amd64 and arm64:
$ docker buildx build \
--builder=kube \
--platform=linux/amd64,linux/arm64 \
-t <user>/<image> \
--push .
Warning
QEMU performs full-CPU emulation of non-native platforms, which is much slower than native builds. Compute-heavy tasks like compilation and compression/decompression will likely take a large performance hit.
Using a custom BuildKit image or invoking non-native binaries in builds may require that you explicitly turn on QEMU using the qemu.install option when creating the builder:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,qemu.install=true
If you have access to cluster nodes of different architectures, the Kubernetes driver can take advantage of these for native builds. To do this, use the --append flag of docker buildx create.
First, create your builder with explicit support for a single architecture, for example amd64:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--platform=linux/amd64 \
--node=builder-amd64 \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=amd64"
This creates a Buildx builder named kube, containing a single builder node builder-amd64. Note that the Buildx concept of a node isn't the same as the Kubernetes concept of a node. A Buildx node in this case could connect multiple Kubernetes nodes of the same architecture together.
With the kube builder created, you can now introduce another architecture into the mix using --append. For example, to add arm64:
$ docker buildx create \
--append \
--bootstrap \
--name=kube \
--driver=kubernetes \
--platform=linux/arm64 \
--node=builder-arm64 \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=arm64"
If you list builders now, you should be able to see both nodes present:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
builder-amd64 kubernetes:///kube?deployment=builder-amd64&kubeconfig= running linux/amd64*, linux/amd64/v2, linux/amd64/v3, linux/386
builder-arm64 kubernetes:///kube?deployment=builder-arm64&kubeconfig= running linux/arm64*
You should now be able to build multi-arch images with amd64 and arm64 combined, by specifying those platforms together in your buildx command:
$ docker buildx build --builder=kube --platform=linux/amd64,linux/arm64 -t <user>/<image> --push .
You can repeat the buildx create --append command for as many architectures as you want to support.
The Kubernetes driver supports rootless mode. For more information on how rootless mode works, and its requirements, see here.
To turn it on in your cluster, you can use the rootless=true driver option:
$ docker buildx create \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,rootless=true
This will create your pods without securityContext.privileged.
Requires Kubernetes version 1.19 or later. Using Ubuntu as the host kernel is recommended.
This guide shows you how to create a Kubernetes builder and use it to build an image.
Prerequisites: a Kubernetes cluster accessible using the kubectl command, with the KUBECONFIG environment variable set appropriately if necessary.
Create a buildkit namespace. Creating a separate namespace helps keep your Buildx resources separate from other resources in the cluster.
$ kubectl create namespace buildkit
namespace/buildkit created
Create a new Buildx builder with the Kubernetes driver:
# Remember to specify the namespace in driver options
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit
List available Buildx builders using docker buildx ls:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
kube0-6977cdcb75-k9h9m running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default * docker
default default running linux/amd64, linux/386
Inspect the running pods created by the Buildx driver with kubectl.
$ kubectl -n buildkit get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kube0 1/1 1 1 32s
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-k9h9m 1/1 Running 0 32s
The buildx driver creates the necessary resources on your cluster in the specified namespace (in this case, buildkit), while keeping your driver configuration locally.
Use your new builder by including the --builder flag when running buildx commands. For example:
# Replace <registry> with your Docker username
# and <image> with the name of the image you want to build
docker buildx build \
--builder=kube \
-t <registry>/<image> \
--push .
That's it! You've now built an image from a Kubernetes pod, using Buildx!
For more information on the Kubernetes driver, see the buildx reference.
The Buildx remote driver enables more complex custom build workloads by letting you connect to externally managed BuildKit instances. This is useful for scenarios that require manual management of the BuildKit daemon, or where a BuildKit daemon is exposed from another source.
$ docker buildx create \
--name remote \
--driver remote \
tcp://localhost:1234
The following table describes the available driver-specific options that you can pass to --driver-opt:
| Parameter | Type | Default | Description |
|---|---|---|---|
| key | String |  | Sets the TLS client key. |
| cert | String |  | Absolute path to the TLS client certificate to present to buildkitd. |
| cacert | String |  | Absolute path to the TLS certificate authority used for validation. |
| servername | String | Endpoint hostname | TLS server name used in requests. |
This guide shows you how to create a setup with a BuildKit daemon listening on a Unix socket, and have Buildx connect through it.
Ensure that BuildKit is installed.
For example, you can launch an instance of buildkitd with:
$ sudo ./buildkitd --group $(id -gn) --addr unix://$HOME/buildkitd.sock
Alternatively, see here for running buildkitd in rootless mode or here for examples of running it as a systemd service.
Check that you have a Unix socket that you can connect to.
$ ls -lh /home/user/buildkitd.sock
srw-rw---- 1 root user 0 May 5 11:04 /home/user/buildkitd.sock
Connect Buildx to it using the remote driver:
$ docker buildx create \
--name remote-unix \
--driver remote \
unix://$HOME/buildkitd.sock
List available builders with docker buildx ls. You should then see remote-unix among them:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
remote-unix remote
remote-unix0 unix:///home/.../buildkitd.sock running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default * docker
default default running linux/amd64, linux/386
You can switch to this new builder as the default using docker buildx use remote-unix, or specify it per build using --builder:
$ docker buildx build --builder=remote-unix -t test --load .
Remember that you need to use the --load flag if you want to load the build result into the Docker daemon.
This guide will show you how to create a setup similar to the docker-container driver, by manually booting a BuildKit Docker container and connecting to it using the Buildx remote driver. This procedure will manually create a container and access it via its exposed port. (You'd probably be better off just using the docker-container driver that connects to BuildKit through the Docker daemon, but this is for illustration purposes.)
Generate certificates for BuildKit.
You can use the create-certs.sh script as a starting point. Note that while it's possible to expose BuildKit over TCP without using TLS, it's not recommended. Doing so allows arbitrary access to BuildKit without credentials.
With certificates generated in .certs/, start up the container:
$ docker run -d --rm \
--name=remote-buildkitd \
--privileged \
-p 1234:1234 \
-v $PWD/.certs:/etc/buildkit/certs \
moby/buildkit:latest \
--addr tcp://0.0.0.0:1234 \
--tlscacert /etc/buildkit/certs/daemon/ca.pem \
--tlscert /etc/buildkit/certs/daemon/cert.pem \
--tlskey /etc/buildkit/certs/daemon/key.pem
This command starts a BuildKit container and exposes the daemon's port 1234 to localhost.
Connect to this running container using Buildx:
$ docker buildx create \
--name remote-container \
--driver remote \
--driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem,servername=<TLS_SERVER_NAME> \
tcp://localhost:1234
Alternatively, use the docker-container:// URL scheme to connect to the BuildKit container without specifying a port:
$ docker buildx create \
--name remote-container \
--driver remote \
docker-container://remote-container
This guide will show you how to create a setup similar to the kubernetes driver by manually creating a BuildKit Deployment. While the kubernetes driver will do this under the hood, it might sometimes be desirable to scale BuildKit manually. Additionally, when executing builds from inside Kubernetes pods, the Buildx builder will need to be recreated from within each pod or copied between them.
Create a Kubernetes deployment of buildkitd, as per the instructions here.
Following the guide, create certificates for the BuildKit daemon and client using create-certs.sh, and create a deployment of BuildKit pods with a service that connects to them.
Assuming that the service is called buildkitd, create a remote builder in Buildx, ensuring that the listed certificate files are present:
$ docker buildx create \
--name remote-kubernetes \
--driver remote \
--driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem \
tcp://buildkitd.default.svc:1234
Note that this will only work internally, within the cluster, since the BuildKit setup guide only creates a ClusterIP service. To configure the builder to be accessible remotely, you can use an appropriately configured ingress, which is outside the scope of this guide.
To access the service remotely, use the port forwarding mechanism of kubectl:
$ kubectl port-forward svc/buildkitd 1234:1234
Then you can point the remote driver at tcp://localhost:1234.
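For example, you could recreate the builder against the forwarded port (a sketch; the builder name is arbitrary, and the certificate paths are the same ones used earlier):
$ docker buildx create \
--name remote-kubernetes-local \
--driver remote \
--driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem \
tcp://localhost:1234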
Alternatively, you can use the kube-pod:// URL scheme to connect directly to a BuildKit pod through the Kubernetes API. Note that this method only connects to a single pod in the deployment:
$ kubectl get pods --selector=app=buildkitd -o json | jq -r '.items[].metadata.name'
buildkitd-XXXXXXXXXX-xxxxx
$ docker buildx create \
--name remote-container \
--driver remote \
kube-pod://buildkitd-XXXXXXXXXX-xxxxx
The image exporter outputs the build result into a container image format. The registry exporter is identical, but it automatically pushes the result by setting push=true.
Build a container image using the image and registry exporters:
$ docker buildx build --output type=image[,parameters] .
$ docker buildx build --output type=registry[,parameters] .
The following table describes the available parameters that you can pass to --output for type=image:
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | String |  | Specify image name(s) |
| push | true, false | false | Push after creating the image. |
| push-by-digest | true, false | false | Push image without name. |
| registry.insecure | true, false | false | Allow pushing to insecure registry. |
| dangling-name-prefix | <value> |  | Name image with prefix@<digest>, used for anonymous images |
| name-canonical | true, false |  | Add additional canonical name name@<digest> |
| compression | uncompressed, gzip, estargz, zstd | gzip | Compression type, see compression |
| compression-level | 0..22 |  | Compression level, see compression |
| force-compression | true, false | false | Forcefully apply compression, see compression |
| oci-mediatypes | true, false | false | Use OCI media types in exporter manifests, see OCI Media types |
| buildinfo | true, false | true | Attach inline build info |
| buildinfo-attrs | true, false | false | Attach inline build info attributes |
| unpack | true, false | false | Unpack image after creation (for use with containerd) |
| store | true, false | true | Store the result images to the worker's (for example, containerd) image store, and ensure that the image has all blobs in the content store. Ignored if the worker doesn't have an image store (when using OCI workers, for example). |
| annotation.<key> | String |  | Attach an annotation with the respective key and value to the built image, see annotations |
These exporters support adding OCI annotations using the annotation.* dot notation parameter. The following example sets the org.opencontainers.image.title annotation for a build:
$ docker buildx build \
--output "type=<type>,name=<registry>/<image>,annotation.org.opencontainers.image.title=<title>" .
For more information about annotations, see BuildKit documentation.
For more information on the image or registry exporters, see the BuildKit README.
Exporters save your build results to a specified output type. You specify the exporter to use with the --output CLI option.
Buildx supports the following exporters:
image: exports the build result to a container image.
registry: exports the build result into a container image, and pushes it to the specified registry.
local: exports the build root filesystem into a local directory.
tar: packs the build root filesystem into a local tarball.
oci: exports the build result to the local filesystem in the OCI image layout format.
docker: exports the build result to the local filesystem in the Docker image format.
cacheonly: doesn't export a build output, but runs the build and creates a cache.
To specify an exporter, use the following command syntax:
$ docker buildx build --tag <registry>/<image> \
--output type=<TYPE> .
Most common use cases don't require you to specify which exporter to use explicitly. You only need to specify the exporter if you intend to customize the output somehow, or if you want to save it to disk. The --load and --push options allow Buildx to infer the exporter settings to use.
For example, if you use the --push option in combination with --tag, Buildx automatically uses the image exporter, and configures the exporter to push the results to the specified registry.
To get the full flexibility out of the various exporters BuildKit has to offer, you use the --output flag that lets you configure exporter options.
Each exporter type is designed for different use cases. The following sections describe some common scenarios, and how you can use exporters to generate the output that you need.
Buildx is often used to build container images that can be loaded to an image store. That's where the docker exporter comes in. The following example shows how to build an image using the docker exporter, and have that image loaded to the local image store, using the --output option:
$ docker buildx build \
--output type=docker,name=<registry>/<image> .
The Buildx CLI will automatically use the docker exporter and load it to the image store if you supply the --tag and --load options:
$ docker buildx build --tag <registry>/<image> --load .
Images built using the docker driver are automatically loaded to the local image store.
Images loaded to the image store are available for docker run immediately after the build finishes, and you'll see them in the list of images when you run the docker images command.
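For example, using the image name from the build above:
$ docker run --rm <registry>/<image>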
To push a built image to a container registry, you can use the registry or image exporters.
When you pass the --push option to the Buildx CLI, you instruct BuildKit to push the built image to the specified registry:
$ docker buildx build --tag <registry>/<image> --push .
Under the hood, this uses the image exporter, and sets the push parameter. It's the same as using the following long-form command using the --output option:
$ docker buildx build \
--output type=image,name=<registry>/<image>,push=true .
You can also use the registry exporter, which does the same thing:
$ docker buildx build \
--output type=registry,name=<registry>/<image> .
You can use either the oci or docker exporters to save the build results to an image layout on your local filesystem. Both of these exporters generate a tar archive file containing the corresponding image layout. The dest parameter defines the target output path for the tarball.
$ docker buildx build --output type=oci,dest=./image.tar .
[+] Building 0.8s (7/7) FINISHED
...
=> exporting to oci image format 0.0s
=> exporting layers 0.0s
=> exporting manifest sha256:c1ef01a0a0ef94a7064d5cbce408075730410060e253ff8525d1e5f7e27bc900 0.0s
=> exporting config sha256:eadab326c1866dd247efb52cb715ba742bd0f05b6a205439f107cf91b3abc853 0.0s
=> sending tarball 0.0s
$ mkdir -p out && tar -C out -xf ./image.tar
$ tree out
out
├── blobs
│   └── sha256
│       ├── 9b18e9b68314027565b90ff6189d65942c0f7986da80df008b8431276885218e
│       ├── c78795f3c329dbbbfb14d0d32288dea25c3cd12f31bd0213be694332a70c7f13
│       ├── d1cf38078fa218d15715e2afcf71588ee482352d697532cf316626164699a0e2
│       ├── e84fa1df52d2abdfac52165755d5d1c7621d74eda8e12881f6b0d38a36e01775
│       └── fe9e23793a27fe30374308988283d40047628c73f91f577432a0d05ab0160de7
├── index.json
├── manifest.json
└── oci-layout
If you don't want to build an image from your build results, but instead export the filesystem that was built, you can use the local and tar exporters.
The local exporter unpacks the filesystem into a directory structure in the specified location. The tar exporter creates a tarball archive file.
$ docker buildx build --output type=tar,dest=<path/to/output> .
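The equivalent command using the local exporter writes the files to a directory instead of a tarball:
$ docker buildx build --output type=local,dest=<path/to/output> .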
The local exporter is useful in multi-stage builds, since it allows you to export only a minimal number of build artifacts, such as self-contained binaries.
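A minimal sketch of that pattern, assuming a Dockerfile with a hypothetical stage named binaries that contains only the compiled output:
$ docker buildx build \
--target=binaries \
--output type=local,dest=./out .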
The cacheonly exporter can be used if you just want to run a build, without exporting any output. This can be useful if, for example, you want to run a test build, or if you want to run the build first and create exports using subsequent commands. The cacheonly exporter creates a build cache, so any successive builds are instant.
$ docker buildx build --output type=cacheonly .
If you don't specify an exporter, and you don't provide short-hand options like --load that automatically select the appropriate exporter, Buildx defaults to using the cacheonly exporter. The exception is when you build using the docker driver, in which case the docker exporter is used.
Buildx logs a warning message when using cacheonly as a default:
$ docker buildx build .
WARNING: No output specified with docker-container driver.
Build result will only remain in the build cache.
To push result image into registry use --push or
to load image into docker use --load
You can only specify a single exporter for any given build (see this pull request for details). But you can perform multiple builds one after another to export the same content twice. BuildKit caches the build, so unless any of the layers change, all successive builds following the first are instant.
The following example shows how to run the same build twice, first using the image exporter, followed by the local exporter.
$ docker buildx build --output type=image,name=<registry>/<image> .
$ docker buildx build --output type=local,dest=<path/to/output> .
This section describes some configuration options available for exporters.
The options described here are common for at least two or more exporter types. Additionally, the different exporter types support specific parameters as well. See the detailed page about each exporter for more information about which configuration parameters apply.
The common parameters described here are compression, OCI media types, and build info.
When you export a compressed output, you can configure the exact compression algorithm and level to use. While the default values provide a good out-of-the-box experience, you may wish to tweak the parameters to optimize for storage vs compute costs. Changing the compression parameters can reduce storage space required, and improve image download times, but will increase build times.
To select the compression algorithm, you can use the compression option. For example, to build an image with compression=zstd:
$ docker buildx build \
--output type=image,name=<registry>/<image>,push=true,compression=zstd .
Use the compression-level=<value> option alongside the compression parameter to choose a compression level for the algorithms which support it: 0-9 for gzip and estargz, 0-22 for zstd.
As a general rule, the higher the number, the smaller the resulting file will be, and the longer the compression will take to run.
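For example, to push an image compressed with zstd at a high compression level (the level shown is illustrative):
$ docker buildx build \
--output type=image,name=<registry>/<image>,push=true,compression=zstd,compression-level=19 .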
Use the force-compression=true option to force re-compressing layers imported from a previous image, if the requested compression algorithm is different from the previous compression algorithm.
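For example, to recompress all layers with zstd even if the base image used gzip:
$ docker buildx build \
--output type=image,name=<registry>/<image>,push=true,compression=zstd,force-compression=true .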
Note
The gzip and estargz compression methods use the compress/gzip package, while zstd uses the github.com/klauspost/compress/zstd package.
Exporters that output container images support creating images with either Docker media types (the default) or with OCI media types. This is supported by the image, registry, oci, and docker exporters.
To export images with OCI media types set, use the oci-mediatypes property. For example, with the image exporter:
$ docker buildx build \
--output type=image,name=<registry>/<image>,push=true,oci-mediatypes=true .
Exporters that output container images allow embedding information about the build, including information on the original build request and sources used during the build. This is supported by the image, registry, oci, and docker exporters.
This build info is attached to the image configuration:
{
"moby.buildkit.buildinfo.v0": "<base64>"
}
By default, build dependencies are attached to the image configuration. You can turn off this behavior by setting buildinfo=false.
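For example, with the image exporter:
$ docker buildx build \
--output type=image,name=<registry>/<image>,push=true,buildinfo=false .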
Read about each of the exporters to learn about how they work and how to use them:
The local and tar exporters output the root filesystem of the build result into a local directory. They're useful for producing artifacts that aren't container images.
local: exports files and directories.
tar: exports the same, but bundles the export into a tarball.
Export build results using the local or tar exporter:
$ docker buildx build --output type=local[,parameters] .
$ docker buildx build --output type=tar[,parameters] .
The following table describes the available parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| dest | String |  | Path to copy files to |
For more information on the local or tar exporters, see the BuildKit README.