Kubernetes driver

The Buildx Kubernetes driver connects your local development or CI environment to your Kubernetes cluster, giving you access to more powerful and varied compute resources.

Synopsis

Run the following command to create a new builder, named kube, that uses the Kubernetes driver:

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=[key=value,...]

The following driver-specific options are available for --driver-opt :

  • image (String): Sets the image to use for running BuildKit.
  • namespace (String; default: the namespace in the current Kubernetes context): Sets the Kubernetes namespace.
  • replicas (Integer; default: 1): Sets the number of Pod replicas to create. See scaling BuildKit.
  • requests.cpu (CPU units): Sets the CPU request, in Kubernetes CPU units. For example, requests.cpu=100m or requests.cpu=2.
  • requests.memory (Memory size): Sets the memory request, in bytes or with a valid suffix. For example, requests.memory=500Mi or requests.memory=4G.
  • limits.cpu (CPU units): Sets the CPU limit, in Kubernetes CPU units. For example, limits.cpu=100m or limits.cpu=2.
  • limits.memory (Memory size): Sets the memory limit, in bytes or with a valid suffix. For example, limits.memory=500Mi or limits.memory=4G.
  • nodeselector (CSV string): Sets the pod's nodeSelector label(s). See node assignment.
  • tolerations (CSV string): Configures the pod's taint tolerations. See node assignment.
  • rootless (true or false; default: false): Run the container as a non-root user. See rootless mode.
  • loadbalance (sticky or random; default: sticky): Load-balancing strategy. If set to sticky, the pod is chosen using the hash of the context path.
  • qemu.install (true or false): Install QEMU emulation for multi-platform support. See QEMU.
  • qemu.image (String; default: tonistiigi/binfmt:latest): Sets the QEMU emulation image. See QEMU.

Scaling BuildKit

One of the main advantages of the Kubernetes driver is that you can scale the number of builder replicas up and down to handle increased build load. Scaling is configurable using the following driver options:

  • replicas=N

    This scales the number of BuildKit pods to the desired size. By default, it only creates a single pod. Increasing the number of replicas lets you take advantage of multiple nodes in your cluster.

  • requests.cpu , requests.memory , limits.cpu , limits.memory

    These options let you request and limit the resources available to each BuildKit pod, using standard Kubernetes resource units; a combined example follows this list.
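
For instance, to request and cap resources for each BuildKit pod, you can combine these options in a single --driver-opt. This is only a sketch; the namespace and the specific values are illustrative, not recommendations:

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,requests.cpu=500m,requests.memory=512Mi,limits.cpu=2,limits.memory=4G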

For example, to create 4 replica BuildKit pods:

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4

Listing the pods, you get this:

$ kubectl -n buildkit get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
kube0   4/4     4            4           8s

$ kubectl -n buildkit get pods
NAME                     READY   STATUS    RESTARTS   AGE
kube0-6977cdcb75-48ld2   1/1     Running   0          8s
kube0-6977cdcb75-rkc6b   1/1     Running   0          8s
kube0-6977cdcb75-vb4ks   1/1     Running   0          8s
kube0-6977cdcb75-z4fzs   1/1     Running   0          8s

Additionally, you can use the loadbalance=(sticky|random) option to control the load-balancing behavior when there are multiple replicas. random selects random nodes from the node pool, providing an even workload distribution across replicas. sticky (the default) attempts to connect the same build performed multiple times to the same node each time, ensuring better use of local cache.
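
For example, to create a builder whose builds are distributed randomly across four replicas (a sketch; the buildkit namespace is illustrative):

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4,loadbalance=random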

For more information on scalability, see the options for buildx create.

Node assignment

The Kubernetes driver allows you to control the scheduling of BuildKit pods using the nodeSelector and tolerations driver options.

The value of the nodeselector parameter is a comma-separated string of key-value pairs, where the key is the node label and the value is the label value. For example: "nodeselector=kubernetes.io/arch=arm64"

The tolerations parameter is a semicolon-separated list of taints. It accepts the same values as the Kubernetes manifest. Each tolerations entry specifies a taint key and the value, operator, or effect. For example: "tolerations=key=foo,value=bar;key=foo2,operator=exists;key=foo3,effect=NoSchedule"

Due to quoting rules for shell commands, you must wrap the nodeselector and tolerations parameters in single quotes. You can even wrap all of --driver-opt in single quotes, for example:

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  '--driver-opt="nodeselector=label1=value1,label2=value2","tolerations=key=key1,value=value1"'

Multi-platform builds

The Buildx Kubernetes driver has support for creating multi-platform images, either using QEMU or by leveraging the native architecture of nodes.

QEMU

Like the docker-container driver, the Kubernetes driver also supports using QEMU (user mode) to build images for non-native platforms. Include the --platform flag and specify which platforms you want to output to.

For example, to build a Linux image for amd64 and arm64 :

$ docker buildx build \
  --builder=kube \
  --platform=linux/amd64,linux/arm64 \
  -t <user>/<image> \
  --push .

Warning

QEMU emulates non-native platforms in software, which is much slower than native builds. Compute-heavy tasks like compilation and compression/decompression will likely take a large performance hit.

Using a custom BuildKit image or invoking non-native binaries in builds may require that you explicitly turn on QEMU using the qemu.install option when creating the builder:

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,qemu.install=true

Native

If you have access to cluster nodes of different architectures, the Kubernetes driver can take advantage of these for native builds. To do this, use the --append flag of docker buildx create .

First, create your builder with explicit support for a single architecture, for example amd64 :

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --platform=linux/amd64 \
  --node=builder-amd64 \
  --driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=amd64"

This creates a Buildx builder named kube , containing a single builder node builder-amd64 . Note that the Buildx concept of a node isn’t the same as the Kubernetes concept of a node. A Buildx node in this case could connect multiple Kubernetes nodes of the same architecture together.

With the kube builder created, you can now introduce another architecture into the mix using --append . For example, to add arm64 :

$ docker buildx create \
  --append \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --platform=linux/arm64 \
  --node=builder-arm64 \
  --driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=arm64"

If you list builders now, you should be able to see both nodes present:

$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT                                         STATUS   PLATFORMS
kube            kubernetes
  builder-amd64 kubernetes:///kube?deployment=builder-amd64&kubeconfig= running  linux/amd64*, linux/amd64/v2, linux/amd64/v3, linux/386
  builder-arm64 kubernetes:///kube?deployment=builder-arm64&kubeconfig= running  linux/arm64*

You should now be able to build multi-arch images with amd64 and arm64 combined, by specifying those platforms together in your buildx command:

$ docker buildx build --builder=kube --platform=linux/amd64,linux/arm64 -t <user>/<image> --push .

You can repeat the buildx create --append command for as many architectures as you want to support.

Rootless mode

The Kubernetes driver supports rootless mode. For more information on how rootless mode works, and its requirements, see the BuildKit rootless mode documentation.

To turn it on in your cluster, you can use the rootless=true driver option:

$ docker buildx create \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,rootless=true

This will create your pods without securityContext.privileged .

Requires Kubernetes version 1.19 or later. Using Ubuntu as the host kernel is recommended.
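
To double-check that rootless mode took effect, you can print the security context of the BuildKit pods; the output should not show privileged: true. This assumes the buildkit namespace used above and that it contains only the builder's pods:

# Inspect the security context of all containers in the buildkit namespace
$ kubectl -n buildkit get pods -o jsonpath='{.items[*].spec.containers[*].securityContext}'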

Example: Creating a Buildx builder in Kubernetes

This guide shows you how to:

  • Create a namespace for your Buildx resources
  • Create a Kubernetes builder
  • List the available builders
  • Build an image using your Kubernetes builders

Prerequisites:

  • You have an existing Kubernetes cluster. If you don’t already have one, you can follow along by installing minikube.
  • The cluster you want to connect to is accessible via the kubectl command, with the KUBECONFIG environment variable set appropriately if necessary.
  1. Create a buildkit namespace.

    Creating a separate namespace helps keep your Buildx resources separate from other resources in the cluster.

    $ kubectl create namespace buildkit
    namespace/buildkit created
    
  2. Create a new Buildx builder with the Kubernetes driver:

    # Remember to specify the namespace in driver options
    $ docker buildx create \
      --bootstrap \
      --name=kube \
      --driver=kubernetes \
      --driver-opt=namespace=buildkit
    
  3. List available Buildx builders using docker buildx ls

    $ docker buildx ls
    NAME/NODE                DRIVER/ENDPOINT STATUS  PLATFORMS
    kube                     kubernetes
      kube0-6977cdcb75-k9h9m                 running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
    default *                docker
      default                default         running linux/amd64, linux/386
    
  4. Inspect the running pods created by the Buildx driver with kubectl .

    $ kubectl -n buildkit get deployments
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    kube0   1/1     1            1           32s
    
    $ kubectl -n buildkit get pods
    NAME                     READY   STATUS    RESTARTS   AGE
    kube0-6977cdcb75-k9h9m   1/1     Running   0          32s
    

    The buildx driver creates the necessary resources on your cluster in the specified namespace (in this case, buildkit ), while keeping your driver configuration locally.

  5. Use your new builder by including the --builder flag when running buildx commands. For example:

    # Replace <registry> with your Docker username
    # and <image> with the name of the image you want to build
    docker buildx build \
      --builder=kube \
      -t <registry>/<image> \
      --push .
    

That’s it! You’ve now built an image from a Kubernetes pod, using Buildx!
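
If you no longer need the builder, docker buildx rm removes it, and for the Kubernetes driver this also cleans up the Deployment and pods it created. A quick sketch using the names from this guide:

$ docker buildx rm kube
# The deployment should no longer be listed afterwards
$ kubectl -n buildkit get deployments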

Further reading

For more information on the Kubernetes driver, see the buildx reference.

Remote driver

The Buildx remote driver supports more complex custom build workloads by letting you connect to externally managed BuildKit instances. This is useful for scenarios that require manual management of the BuildKit daemon, or where a BuildKit daemon is exposed from another source.

Synopsis

$ docker buildx create \
  --name remote \
  --driver remote \
  tcp://localhost:1234

The following driver-specific options are available for --driver-opt :

  • key (String): Sets the TLS client key.
  • cert (String): Absolute path to the TLS client certificate to present to buildkitd.
  • cacert (String): Absolute path to the TLS certificate authority used for validation.
  • servername (String; default: the endpoint hostname): TLS server name used in requests.

Example: Remote BuildKit over Unix sockets

This guide shows you how to create a setup with a BuildKit daemon listening on a Unix socket, and have Buildx connect through it.

  1. Ensure that BuildKit is installed.

    For example, you can launch an instance of buildkitd with:

    $ sudo ./buildkitd --group $(id -gn) --addr unix://$HOME/buildkitd.sock
    

    Alternatively, see here for running buildkitd in rootless mode or here for examples of running it as a systemd service.

  2. Check that you have a Unix socket that you can connect to.

    $ ls -lh /home/user/buildkitd.sock
    srw-rw---- 1 root user 0 May  5 11:04 /home/user/buildkitd.sock
    
  3. Connect Buildx to it using the remote driver:

    $ docker buildx create \
      --name remote-unix \
      --driver remote \
      unix://$HOME/buildkitd.sock
    
  4. List available builders with docker buildx ls . You should then see remote-unix among them:

    $ docker buildx ls
    NAME/NODE           DRIVER/ENDPOINT                        STATUS  PLATFORMS
    remote-unix         remote
      remote-unix0      unix:///home/.../buildkitd.sock        running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
    default *           docker
      default           default                                running linux/amd64, linux/386
    

You can switch to this new builder as the default using docker buildx use remote-unix , or specify it per build using --builder :

$ docker buildx build --builder=remote-unix -t test --load .

Remember that you need to use the --load flag if you want to load the build result into the Docker daemon.

Example: Remote BuildKit in Docker container

This guide will show you how to create a setup similar to the docker-container driver by manually booting a BuildKit Docker container and connecting to it using the Buildx remote driver. This procedure manually creates a container and accesses it via its exposed port. (You'd probably be better off just using the docker-container driver, which connects to BuildKit through the Docker daemon, but this example is for illustration purposes.)

  1. Generate certificates for BuildKit.

    You can use the create-certs.sh script as a starting point. Note that while it’s possible to expose BuildKit over TCP without using TLS, it’s not recommended. Doing so allows arbitrary access to BuildKit without credentials.

  2. With certificates generated in .certs/, start the container:

    $ docker run -d --rm \
      --name=remote-buildkitd \
      --privileged \
      -p 1234:1234 \
      -v $PWD/.certs:/etc/buildkit/certs \
      moby/buildkit:latest \
      --addr tcp://0.0.0.0:1234 \
      --tlscacert /etc/buildkit/certs/daemon/ca.pem \
      --tlscert /etc/buildkit/certs/daemon/cert.pem \
      --tlskey /etc/buildkit/certs/daemon/key.pem
    

    This command starts a BuildKit container and exposes the daemon’s port 1234 to localhost.
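
    Optionally, before connecting, you can check that the daemon started and is listening by looking at the container logs (using the container name chosen above):

    $ docker logs remote-buildkitd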

  3. Connect to this running container using Buildx:

    $ docker buildx create \
      --name remote-container \
      --driver remote \
      --driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem,servername=<TLS_SERVER_NAME> \
      tcp://localhost:1234
    

    Alternatively, use the docker-container:// URL scheme to connect to the BuildKit container without specifying a port:

    $ docker buildx create \
      --name remote-container \
      --driver remote \
      docker-container://remote-container
    

Example: Remote BuildKit in Kubernetes

This guide will show you how to create a setup similar to the kubernetes driver by manually creating a BuildKit Deployment . While the kubernetes driver will do this under-the-hood, it might sometimes be desirable to scale BuildKit manually. Additionally, when executing builds from inside Kubernetes pods, the Buildx builder will need to be recreated from within each pod or copied between them.

  1. Create a Kubernetes deployment of buildkitd , as per the instructions here.

    Following the guide, create certificates for the BuildKit daemon and client using create-certs.sh, and create a deployment of BuildKit pods with a service that connects to them.

  2. Assuming that the service is called buildkitd , create a remote builder in Buildx, ensuring that the listed certificate files are present:

    $ docker buildx create \
      --name remote-kubernetes \
      --driver remote \
      --driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem \
      tcp://buildkitd.default.svc:1234
    

Note that this will only work internally, within the cluster, since the BuildKit setup guide only creates a ClusterIP service. To configure the builder to be accessible remotely, you can use an appropriately configured ingress, which is outside the scope of this guide.

To access the service remotely, use the port forwarding mechanism of kubectl :

$ kubectl port-forward svc/buildkitd 1234:1234

Then you can point the remote driver at tcp://localhost:1234 .
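
With the port forward active, creating the builder looks much like the in-cluster example, just pointed at the forwarded local endpoint. This is a sketch: the builder name remote-kubernetes-local is arbitrary, and the servername value must match the name your certificate was actually issued for:

$ docker buildx create \
  --name remote-kubernetes-local \
  --driver remote \
  --driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem,servername=buildkitd.default.svc \
  tcp://localhost:1234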

Alternatively, you can use the kube-pod:// URL scheme to connect directly to a BuildKit pod through the Kubernetes API. Note that this method only connects to a single pod in the deployment:

$ kubectl get pods --selector=app=buildkitd -o json | jq -r '.items[].metadata.name'
buildkitd-XXXXXXXXXX-xxxxx
$ docker buildx create \
  --name remote-container \
  --driver remote \
  kube-pod://buildkitd-XXXXXXXXXX-xxxxx

Image and registry exporters

The image exporter outputs the build result into a container image format. The registry exporter is identical, but it automatically pushes the result by setting push=true .

Synopsis

Build a container image using the image and registry exporters:

$ docker buildx build --output type=image[,parameters] .
$ docker buildx build --output type=registry[,parameters] .

The following parameters are available for --output when using type=image :

  • name (String): Specify image name(s).
  • push (true or false; default: false): Push after creating the image.
  • push-by-digest (true or false; default: false): Push the image without a name.
  • registry.insecure (true or false; default: false): Allow pushing to an insecure registry.
  • dangling-name-prefix (<value>): Name the image with prefix@<digest>, used for anonymous images.
  • name-canonical (true or false): Add an additional canonical name, name@<digest>.
  • compression (uncompressed, gzip, estargz, or zstd; default: gzip): Compression type. See compression.
  • compression-level (0..22): Compression level. See compression.
  • force-compression (true or false; default: false): Forcefully apply compression. See compression.
  • oci-mediatypes (true or false; default: false): Use OCI media types in exporter manifests. See OCI media types.
  • buildinfo (true or false; default: true): Attach inline build info.
  • buildinfo-attrs (true or false; default: false): Attach inline build info attributes.
  • unpack (true or false; default: false): Unpack the image after creation (for use with containerd).
  • store (true or false; default: true): Store the resulting images in the worker's (for example, containerd) image store, and ensure that the image has all blobs in the content store. Ignored if the worker doesn't have an image store (for example, when using OCI workers).
  • annotation.<key> (String): Attach an annotation with the respective key and value to the built image. See annotations.
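
As a sketch of how several of these parameters combine on a single build, the following pushes an image using zstd compression and OCI media types (the registry and image name are placeholders):

$ docker buildx build \
    --output "type=image,name=<registry>/<image>,push=true,compression=zstd,oci-mediatypes=true" .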

Annotations

These exporters support adding OCI annotations using the annotation.* dot notation parameter. The following example sets the org.opencontainers.image.title annotation for a build:

$ docker buildx build \
    --output "type=<type>,name=<registry>/<image>,annotation.org.opencontainers.image.title=<title>" .

For more information about annotations, see BuildKit documentation.

Further reading

For more information on the image or registry exporters, see the BuildKit README.

Exporters overview

Exporters save your build results to a specified output type. You specify the exporter to use with the --output CLI option. Buildx supports the following exporters:

  • image : exports the build result to a container image.
  • registry : exports the build result into a container image, and pushes it to the specified registry.
  • local : exports the build root filesystem into a local directory.
  • tar : packs the build root filesystem into a local tarball.
  • oci : exports the build result to the local filesystem in the OCI image layout format.
  • docker : exports the build result to the local filesystem in the Docker image format.
  • cacheonly : doesn’t export a build output, but runs the build and creates a cache.

Using exporters

To specify an exporter, use the following command syntax:

$ docker buildx build --tag <registry>/<image> \
  --output type=<TYPE> .

For most common use cases, you don't need to specify which exporter to use explicitly. You only need to specify the exporter if you intend to customize the output, or if you want to save it to disk. The --load and --push options allow Buildx to infer the exporter settings to use.

For example, if you use the --push option in combination with --tag , Buildx automatically uses the image exporter, and configures the exporter to push the results to the specified registry.

To get the full flexibility out of the various exporters BuildKit has to offer, you use the --output flag that lets you configure exporter options.

Use cases

Each exporter type is designed for different use cases. The following sections describe some common scenarios, and how you can use exporters to generate the output that you need.

Load to image store

Buildx is often used to build container images that can be loaded to an image store. That’s where the docker exporter comes in. The following example shows how to build an image using the docker exporter, and have that image loaded to the local image store, using the --output option:

$ docker buildx build \
  --output type=docker,name=<registry>/<image> .

Buildx CLI will automatically use the docker exporter and load it to the image store if you supply the --tag and --load options:

$ docker buildx build --tag <registry>/<image> --load .

Images built using the docker driver are automatically loaded to the local image store.

Images loaded to the image store are available for docker run immediately after the build finishes, and you'll see them in the list of images when you run the docker images command.

Push to registry

To push a built image to a container registry, you can use the registry or image exporters.

When you pass the --push option to the Buildx CLI, you instruct BuildKit to push the built image to the specified registry:

$ docker buildx build --tag <registry>/<image> --push .

Under the hood, this uses the image exporter, and sets the push parameter. It’s the same as using the following long-form command using the --output option:

$ docker buildx build \
  --output type=image,name=<registry>/<image>,push=true .

You can also use the registry exporter, which does the same thing:

$ docker buildx build \
  --output type=registry,name=<registry>/<image> .

Export image layout to file

You can use either the oci or docker exporters to save the build results to image layout on your local filesystem. Both of these exporters generate a tar archive file containing the corresponding image layout. The dest parameter defines the target output path for the tarball.

$ docker buildx build --output type=oci,dest=./image.tar .
[+] Building 0.8s (7/7) FINISHED
 ...
 => exporting to oci image format                                                                     0.0s
 => exporting layers                                                                                  0.0s
 => exporting manifest sha256:c1ef01a0a0ef94a7064d5cbce408075730410060e253ff8525d1e5f7e27bc900        0.0s
 => exporting config sha256:eadab326c1866dd247efb52cb715ba742bd0f05b6a205439f107cf91b3abc853          0.0s
 => sending tarball                                                                                   0.0s
$ mkdir -p out && tar -C out -xf ./image.tar
$ tree out
out
├── blobs
│   └── sha256
│       ├── 9b18e9b68314027565b90ff6189d65942c0f7986da80df008b8431276885218e
│       ├── c78795f3c329dbbbfb14d0d32288dea25c3cd12f31bd0213be694332a70c7f13
│       ├── d1cf38078fa218d15715e2afcf71588ee482352d697532cf316626164699a0e2
│       ├── e84fa1df52d2abdfac52165755d5d1c7621d74eda8e12881f6b0d38a36e01775
│       └── fe9e23793a27fe30374308988283d40047628c73f91f577432a0d05ab0160de7
├── index.json
├── manifest.json
└── oci-layout

Export filesystem

If you don’t want to build an image from your build results, but instead export the filesystem that was built, you can use the local and tar exporters.

The local exporter unpacks the filesystem into a directory structure in the specified location. The tar exporter creates a tarball archive file.

$ docker buildx build --output type=tar,dest=<path/to/output> .

The local exporter is useful in multi-stage builds since it allows you to export only a minimal number of build artifacts. For example, self-contained binaries.
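
For instance, if your Dockerfile defines a stage that contains only the final binaries (the stage name binaries here is hypothetical), you can export just that stage into a local directory:

$ docker buildx build \
  --target=binaries \
  --output type=local,dest=./bin .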

Cache-only export

The cacheonly exporter can be used if you just want to run a build, without exporting any output. This can be useful if, for example, you want to run a test build. Or, if you want to run the build first, and create exports using subsequent commands. The cacheonly exporter creates a build cache, so any successive builds are instant.

$ docker buildx build --output type=cacheonly

If you don't specify an exporter, and you don't provide a short-hand option like --load that automatically selects the appropriate exporter, Buildx defaults to the cacheonly exporter. The exception is when you build using the docker driver, in which case the docker exporter is used.

Buildx logs a warning message when using cacheonly as a default:

$ docker buildx build .
WARNING: No output specified with docker-container driver.
         Build result will only remain in the build cache.
         To push result image into registry use --push or
         to load image into docker use --load

Multiple exporters

You can only specify a single exporter for any given build (see this pull request for details). However, you can perform multiple builds one after another to export the same content twice. BuildKit caches the build, so unless any of the layers change, all successive builds following the first are instant.

The following example shows how to run the same build twice, first using the image exporter, followed by the local exporter.

$ docker buildx build --output type=image,name=<registry>/<image> .
$ docker buildx build --output type=local,dest=<path/to/output> .

Configuration options

This section describes some configuration options available for exporters.

The options described here are common for at least two or more exporter types. Additionally, the different exporter types support specific parameters as well. See the detailed page about each exporter for more information about which configuration parameters apply.

The common parameters described here are:

  • Compression
  • OCI media type

Compression

When you export a compressed output, you can configure the exact compression algorithm and level to use. While the default values provide a good out-of-the-box experience, you may wish to tweak the parameters to optimize for storage vs compute costs. Changing the compression parameters can reduce storage space required, and improve image download times, but will increase build times.

To select the compression algorithm, you can use the compression option. For example, to build an image with compression=zstd :

$ docker buildx build \
  --output type=image,name=<registry>/<image>,push=true,compression=zstd .

Use the compression-level=<value> option alongside the compression parameter to choose a compression level for the algorithms which support it:

  • 0-9 for gzip and estargz
  • 0-22 for zstd

As a general rule, the higher the number, the smaller the resulting file will be, and the longer the compression will take to run.

Use the force-compression=true option to force re-compressing layers imported from a previous image, if the requested compression algorithm is different from the previous compression algorithm.
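
Putting these options together, a build that pushes with zstd at a higher compression level and re-compresses inherited layers might look like this (the level is only an illustration of the storage-versus-build-time trade-off):

$ docker buildx build \
  --output type=image,name=<registry>/<image>,push=true,compression=zstd,compression-level=19,force-compression=true .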

Note

The gzip and estargz compression methods use the compress/gzip package, while zstd uses the github.com/klauspost/compress/zstd package.

OCI media types

Exporters that output container images support creating images with either Docker media types (the default) or OCI media types. This is supported by the image, registry, oci, and docker exporters.

To export images with OCI media types set, use the oci-mediatypes property. For example, with the image exporter:

$ docker buildx build \
  --output type=image,name=<registry>/<image>,push=true,oci-mediatypes=true .

Build info

Exporters that output container images allow embedding information about the build, including information on the original build request and the sources used during the build. This is supported by the image, registry, oci, and docker exporters.

This build info is attached to the image configuration:

{
  "moby.buildkit.buildinfo.v0": "<base64>"
}

By default, build dependencies are attached to the image configuration. You can turn off this behavior by setting buildinfo=false .
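
For example, to push an image without the embedded build info (the registry and image name are placeholders):

$ docker buildx build \
  --output type=image,name=<registry>/<image>,push=true,buildinfo=false .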

What’s next

Read about each of the exporters to learn about how they work and how to use them:

  • Image and registry exporters
  • OCI and Docker exporters
  • Local and tar exporters

Local and tar exporters

The local and tar exporters output the root filesystem of the build result into a local directory. They’re useful for producing artifacts that aren’t container images.

  • local exports files and directories.
  • tar exports the same, but bundles the export into a tarball.

Synopsis

Export the root filesystem of the build result using the local or tar exporter:

$ docker buildx build --output type=local[,parameters] .
$ docker buildx build --output type=tar[,parameters] .

The following parameter is available:

  • dest (String): Path to copy files to.
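
For example, to unpack the build result into a local out/ directory, or to write it as a single archive (the paths are illustrative):

$ docker buildx build --output type=local,dest=./out .
$ docker buildx build --output type=tar,dest=./out.tar .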

Further reading

For more information on the local or tar exporters, see the BuildKit README.

OCI and Docker exporters

The oci exporter outputs the build result into an OCI image layout tarball. The docker exporter behaves the same way, except it exports a Docker image layout instead.

The docker driver doesn’t support these exporters. You must use docker-container or some other driver if you want to generate these outputs.

Synopsis

Build a container image using the oci and docker exporters:

$ docker buildx build --output type=oci[,parameters] .
$ docker buildx build --output type=docker[,parameters] .

The following parameters are available:

  • name (String): Specify image name(s).
  • dest (String): Path to the output.
  • tar (true or false; default: true): Bundle the output into a tarball layout.
  • compression (uncompressed, gzip, estargz, or zstd; default: gzip): Compression type. See compression.
  • compression-level (0..22): Compression level. See compression.
  • force-compression (true or false; default: false): Forcefully apply compression. See compression.
  • oci-mediatypes (true or false): Use OCI media types in exporter manifests. Defaults to true for type=oci, and false for type=docker. See OCI media types.
  • buildinfo (true or false; default: true): Attach inline build info.
  • buildinfo-attrs (true or false; default: false): Attach inline build info attributes.
  • annotation.<key> (String): Attach an annotation with the respective key and value to the built image. See annotations.
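
As an illustration of the dest and tar parameters, the following writes an OCI image layout directly into a directory instead of a tarball (the output path is a placeholder):

$ docker buildx build \
  --output type=oci,dest=./image-layout,tar=false .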

Annotations

These exporters support adding OCI annotations using the annotation.* dot notation parameter. The following example sets the org.opencontainers.image.title annotation for a build:

$ docker buildx build \
    --output "type=<type>,name=<registry>/<image>,annotation.org.opencontainers.image.title=<title>" .

For more information about annotations, see BuildKit documentation.

Further reading

For more information on the oci or docker exporters, see the BuildKit README.
