Continuous integration with Docker

Continuous Integration (CI) is the part of the development process where you’re looking to get your code changes merged with the main branch of the project. At this point, development teams run tests and builds to vet that the code changes don’t cause any unwanted or unexpected behaviors.

There are several uses for Docker at this stage of development, even if you don’t end up packaging your application as a container image.

Docker as a build environment

Containers are reproducible, isolated environments that yield predictable results. Building and testing your application in a Docker container makes it easier to prevent unexpected behaviors from occurring. Using a Dockerfile, you define the exact requirements for the build environment, including programming runtimes, operating system, binaries, and more.
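
For example, here's a minimal sketch (the image tag, project layout, and test command are illustrative and assume a Node.js project) of running a build and its tests inside a pinned container, so every CI run uses the same environment:

$ docker run --rm \
    -v "$PWD":/src \
    -w /src \
    node:20-bookworm \
    sh -c "npm ci && npm test"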

Using Docker to manage your build environment also eases maintenance. For example, updating to a new version of a programming runtime can be as simple as changing a tag or digest in a Dockerfile. No need to SSH into a pet VM to manually reinstall a newer version and update the related configuration files.

Additionally, just as you expect third-party open source packages to be secure, the same should go for your build environment. You can scan and index a builder image, just like you would for any other containerized application.
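
For instance, assuming the Docker Scout CLI plugin is installed, you could scan the builder image the same way you would scan an application image (the image tag below is illustrative):

$ docker scout cves node:20-bookworm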

The following links provide instructions for how you can get started using Docker for building your applications in CI:

  • GitHub Actions
  • GitLab
  • Circle CI
  • Render

Docker in Docker

You can also use a Dockerized build environment to build container images using Docker. That is, your build environment runs inside a container which itself is equipped to run Docker builds. This method is referred to as “Docker in Docker”.

Docker provides an official Docker image that you can use for this purpose.
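
A minimal sketch of running that image follows (the container and network names are illustrative; the dind image requires --privileged, and setting DOCKER_TLS_CERTDIR to an empty string disables TLS so the inner daemon listens on plain TCP port 2375):

$ docker network create dind-net
$ docker run -d --privileged --name dind \
    --network dind-net --network-alias docker \
    -e DOCKER_TLS_CERTDIR="" \
    docker:dind
# Talk to the inner daemon from a client container on the same network
$ docker run --rm --network dind-net \
    -e DOCKER_HOST=tcp://docker:2375 \
    docker:cli version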

What’s next

Docker maintains a set of official GitHub Actions that you can use to build, annotate, and push container images on the GitHub Actions platform. See Introduction to GitHub Actions to learn more and get started.

Docker container driver

The buildx Docker container driver allows creation of a managed and customizable BuildKit environment in a dedicated Docker container.

Using the Docker container driver has a couple of advantages over the default Docker driver. For example:

  • Specify custom BuildKit versions to use.
  • Build multi-arch images (see QEMU).
  • Use advanced options for cache import and export.

Synopsis

Run the following command to create a new builder, named container , that uses the Docker container driver:

$ docker buildx create \
  --name container \
  --driver=docker-container \
  --driver-opt=[key=value,...]
container

The following table describes the available driver-specific options that you can pass to --driver-opt :

Parameter Type Default Description
image String  Sets the image to use for running BuildKit.
network String  Sets the network mode for running the BuildKit container.
cgroup-parent String /docker/buildx Sets the cgroup parent of the BuildKit container if Docker is using the cgroupfs driver.
env.<key> String  Sets the environment variable key to the specified value in the BuildKit container.
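
For example, you might pin a specific BuildKit image and run the builder container on the host network (the version tag below is illustrative):

$ docker buildx create \
  --name container \
  --driver=docker-container \
  --driver-opt=image=moby/buildkit:v0.12.5,network=host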

Usage

When you run a build, Buildx pulls the specified image (by default, moby/buildkit ). When the container has started, Buildx submits the build to the containerized build server.

$ docker buildx build -t <image> --builder=container .
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 1.9s done
#1 creating container buildx_buildkit_container0
#1 creating container buildx_buildkit_container0 0.5s done
#1 DONE 2.4s
...

Loading to local image store

Unlike when using the default docker driver, images built with the docker-container driver must be explicitly loaded into the local image store. Use the --load flag:

$ docker buildx build --load -t <image> --builder=container .
...
 => exporting to oci image format                                                                                                      7.7s
 => => exporting layers                                                                                                                4.9s
 => => exporting manifest sha256:4e4ca161fa338be2c303445411900ebbc5fc086153a0b846ac12996960b479d3                                      0.0s
 => => exporting config sha256:adf3eec768a14b6e183a1010cb96d91155a82fd722a1091440c88f3747f1f53f                                        0.0s
 => => sending tarball                                                                                                                 2.8s
 => importing to docker

The image becomes available in the image store when the build finishes:

$ docker image ls
REPOSITORY                       TAG               IMAGE ID       CREATED             SIZE
<image>                          latest            adf3eec768a1   2 minutes ago       197MB

Cache persistence

The docker-container driver supports cache persistence, as it stores all the BuildKit state and related cache into a dedicated Docker volume.

The docker-container driver's cache persists even if you remove and recreate the builder with docker buildx rm and docker buildx create , as long as you remove the builder using the --keep-state flag.

For example, to create a builder named container , remove it while persisting its state, and then recreate it:

# setup a builder
$ docker buildx create --name=container --driver=docker-container --use --bootstrap
container
$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT              STATUS   BUILDKIT PLATFORMS
container *     docker-container
  container0    desktop-linux                running  v0.10.5  linux/amd64
$ docker volume ls
DRIVER    VOLUME NAME
local     buildx_buildkit_container0_state

# remove the builder while persisting state
$ docker buildx rm --keep-state container
$ docker volume ls
DRIVER    VOLUME NAME
local     buildx_buildkit_container0_state

# the newly created driver with the same name will have all the state of the previous one!
$ docker buildx create --name=container --driver=docker-container --use --bootstrap
container

QEMU

The docker-container driver supports using QEMU (user mode) to build non-native platforms. Use the --platform flag to specify which architectures you want to build for.

For example, to build a Linux image for amd64 and arm64 :

$ docker buildx build \
  --builder=container \
  --platform=linux/amd64,linux/arm64 \
  -t <registry>/<image> \
  --push .

Warning

QEMU performs full-system emulation of non-native platforms, which is much slower than native builds. Compute-heavy tasks like compilation and compression/decompression will likely take a large performance hit.

Custom network

You can customize the network that the builder container uses. This is useful if you need to use a specific network for your builds.

For example, let’s create a network named foonet :

$ docker network create foonet

Now create a docker-container builder that will use this network:

$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --driver-opt "network=foonet"

Boot and inspect mybuilder :

$ docker buildx inspect --bootstrap

Inspect the builder container and see what network is being used:

$ docker inspect buildx_buildkit_mybuilder0 --format={{.NetworkSettings.Networks}}
map[foonet:0xc00018c0c0]

Further reading

For more information on the Docker container driver, see the buildx reference.

Docker driver

The Buildx Docker driver is the default driver. It uses the BuildKit server components built directly into the Docker engine. The Docker driver requires no configuration.

Unlike the other drivers, builders using the Docker driver can’t be manually created. They’re only created automatically from the Docker context.

Images built with the Docker driver are automatically loaded to the local image store.

Synopsis

# The Docker driver is used by buildx by default
docker buildx build .
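
Because build results with this driver are loaded into the local image store automatically, the built image is immediately visible to docker image ls (the image name below is illustrative):

$ docker buildx build -t myapp:latest .
$ docker image ls myapp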

It’s not possible to configure which BuildKit version to use, or to pass any additional BuildKit parameters to a builder using the Docker driver. The BuildKit version and parameters are preset by the Docker engine internally.

If you need additional configuration and flexibility, consider using the Docker container driver.

Further reading

For more information on the Docker driver, see the buildx reference.

Drivers overview

Buildx drivers are configurations for how and where the BuildKit backend runs. Driver settings are customizable and allow fine-grained control of the builder. Buildx supports the following drivers:

  • docker : uses the BuildKit library bundled into the Docker daemon.
  • docker-container : creates a dedicated BuildKit container using Docker.
  • kubernetes : creates BuildKit pods in a Kubernetes cluster.
  • remote : connects directly to a manually managed BuildKit daemon.

Different drivers support different use cases. The default docker driver prioritizes simplicity and ease of use. It has limited support for advanced features like caching and output formats, and isn’t configurable. Other drivers provide more flexibility and are better at handling advanced scenarios.

The following table outlines some differences between drivers.

Feature                    docker        docker-container   kubernetes   remote
Automatically load image   ✓             –                  –            –
Cache export               Inline only   ✓                  ✓            ✓
Tarball output             –             ✓                  ✓            ✓
Multi-arch images          –             ✓                  ✓            ✓
BuildKit configuration     –             ✓                  ✓            Managed externally

List available builders

Use docker buildx ls to see builder instances available on your system, and the drivers they’re using.

$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT      STATUS   BUILDKIT PLATFORMS
default         docker
  default       default              running  20.10.17 linux/amd64, linux/386

Depending on your setup, you may find multiple builders in your list that use the Docker driver. For example, on a system that runs both a manually installed version of dockerd, as well as Docker Desktop, you might see the following output from docker buildx ls :

NAME/NODE       DRIVER/ENDPOINT STATUS  BUILDKIT PLATFORMS
default         docker
  default       default         running 20.10.17 linux/amd64, linux/386
desktop-linux * docker
  desktop-linux desktop-linux   running 20.10.17 linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6

This is because the Docker driver builders are automatically pulled from the available Docker Contexts. When you add new contexts using docker context create , these will appear in your list of buildx builders.
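
For example, assuming a remote engine reachable over SSH (the context name and host are illustrative), creating a context adds a matching Docker-driver builder:

$ docker context create remote-engine \
    --docker "host=ssh://user@remote-host"
$ docker buildx ls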

The asterisk (*) next to the builder name indicates that it's the selected builder, which is used by default unless you specify a builder using the --builder option.

Create a new builder

Use the docker buildx create command to create a builder, and specify the driver using the --driver option.

$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>

This creates a new builder instance with a single build node. After creating a new builder you can also append new nodes to it.

To use a remote node for your builders, you can set the DOCKER_HOST environment variable or provide a remote context name when creating the builder.
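
For example, here's a sketch of appending a second node, reachable over SSH, to an existing docker-container builder (the builder name and hosts are illustrative):

$ docker buildx create --name=mybuilder --driver=docker-container ssh://user@host1
$ docker buildx create --name=mybuilder --append ssh://user@host2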

Switch between builders

To switch between different builders, use the docker buildx use <name> command. After running this command, the build commands will automatically use this builder.
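
For example (the builder name is illustrative):

$ docker buildx use mybuilder

Alternatively, pass --builder=mybuilder to a single build command without changing the default.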

What’s next

Read about each of the Buildx drivers to learn about how they work and how to use them:

  • Docker driver
  • Docker container driver
  • Kubernetes driver
  • Remote driver
Kubernetes driver

The Buildx Kubernetes driver allows connecting your local development or CI environments to your Kubernetes cluster to allow access to more powerful and varied compute resources.

Synopsis

Run the following command to create a new builder, named kube , that uses the Kubernetes driver:

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=[key=value,...]

The following table describes the available driver-specific options that you can pass to --driver-opt :

Parameter Type Default Description
image String  Sets the image to use for running BuildKit.
namespace String Namespace in current Kubernetes context Sets the Kubernetes namespace.
replicas Integer 1 Sets the number of Pod replicas to create. See scaling BuildKit
requests.cpu CPU units  Sets the request CPU value specified in units of Kubernetes CPU. For example requests.cpu=100m or requests.cpu=2
requests.memory Memory size  Sets the request memory value specified in bytes or with a valid suffix. For example requests.memory=500Mi or requests.memory=4G
limits.cpu CPU units  Sets the limit CPU value specified in units of Kubernetes CPU. For example limits.cpu=100m or limits.cpu=2
limits.memory Memory size  Sets the limit memory value specified in bytes or with a valid suffix. For example limits.memory=500Mi or limits.memory=4G
nodeselector CSV string  Sets the pod’s nodeSelector label(s). See node assignment.
tolerations CSV string  Configures the pod’s taint toleration. See node assignment.
rootless true , false false Run the container as a non-root user. See rootless mode.
loadbalance sticky , random sticky Load-balancing strategy. If set to sticky , the pod is chosen using the hash of the context path.
qemu.install true , false  Install QEMU emulation for multi platforms support. See QEMU.
qemu.image String tonistiigi/binfmt:latest Sets the QEMU emulation image. See QEMU.
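
For example, here's a sketch that combines several of these options to request and cap resources for each BuildKit pod (the values are illustrative):

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=2,requests.cpu=1,requests.memory=2Gi,limits.cpu=2,limits.memory=4Gi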

Scaling BuildKit

One of the main advantages of the Kubernetes driver is that you can scale the number of builder replicas up and down to handle increased build load. Scaling is configurable using the following driver options:

  • replicas=N

    This scales the number of BuildKit pods to the desired size. By default, it only creates a single pod. Increasing the number of replicas lets you take advantage of multiple nodes in your cluster.

  • requests.cpu , requests.memory , limits.cpu , limits.memory

    These options allow requesting and limiting the resources available to each BuildKit pod according to the official Kubernetes documentation here.

For example, to create 4 replica BuildKit pods:

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4

Listing the deployments and pods, you get this:

$ kubectl -n buildkit get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
kube0   4/4     4            4           8s

$ kubectl -n buildkit get pods
NAME                     READY   STATUS    RESTARTS   AGE
kube0-6977cdcb75-48ld2   1/1     Running   0          8s
kube0-6977cdcb75-rkc6b   1/1     Running   0          8s
kube0-6977cdcb75-vb4ks   1/1     Running   0          8s
kube0-6977cdcb75-z4fzs   1/1     Running   0          8s

Additionally, you can use the loadbalance=(sticky|random) option to control the load-balancing behavior when there are multiple replicas. random selects random nodes from the node pool, providing an even workload distribution across replicas. sticky (the default) attempts to connect the same build performed multiple times to the same node each time, ensuring better use of local cache.
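
For example, to spread builds evenly across the replicas created above (a sketch; the option values are illustrative):

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4,loadbalance=random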

For more information on scalability, see the options for buildx create.

Node assignment

The Kubernetes driver allows you to control the scheduling of BuildKit pods using the nodeSelector and tolerations driver options.

The value of the nodeSelector parameter is a comma-separated string of key-value pairs, where the key is the node label and the value is the label text. For example: "nodeselector=kubernetes.io/arch=arm64"

The tolerations parameter is a semicolon-separated list of taints. It accepts the same values as the Kubernetes manifest. Each tolerations entry specifies a taint key and the value, operator, or effect. For example: "tolerations=key=foo,value=bar;key=foo2,operator=exists;key=foo3,effect=NoSchedule"

Due to quoting rules for shell commands, you must wrap the nodeselector and tolerations parameters in single quotes. You can even wrap all of --driver-opt in single quotes, for example:

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  '--driver-opt="nodeselector=label1=value1,label2=value2","tolerations=key=key1,value=value1"'

Multi-platform builds

The Buildx Kubernetes driver has support for creating multi-platform images, either using QEMU or by leveraging the native architecture of nodes.

QEMU

Like the docker-container driver, the Kubernetes driver also supports using QEMU (user mode) to build images for non-native platforms. Include the --platform flag and specify which platforms you want to output to.

For example, to build a Linux image for amd64 and arm64 :

$ docker buildx build \
  --builder=kube \
  --platform=linux/amd64,linux/arm64 \
  -t <user>/<image> \
  --push .

Warning

QEMU performs full-system emulation of non-native platforms, which is much slower than native builds. Compute-heavy tasks like compilation and compression/decompression will likely take a large performance hit.

Using a custom BuildKit image or invoking non-native binaries in builds may require that you explicitly turn on QEMU using the qemu.install option when creating the builder:

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,qemu.install=true

Native

If you have access to cluster nodes of different architectures, the Kubernetes driver can take advantage of these for native builds. To do this, use the --append flag of docker buildx create .

First, create your builder with explicit support for a single architecture, for example amd64 :

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --platform=linux/amd64 \
  --node=builder-amd64 \
  --driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=amd64"

This creates a Buildx builder named kube , containing a single builder node builder-amd64 . Note that the Buildx concept of a node isn’t the same as the Kubernetes concept of a node. A Buildx node in this case could connect multiple Kubernetes nodes of the same architecture together.

With the kube builder created, you can now introduce another architecture into the mix using --append . For example, to add arm64 :

$ docker buildx create \
  --append \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --platform=linux/arm64 \
  --node=builder-arm64 \
  --driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=arm64"

If you list builders now, you should be able to see both nodes present:

$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT                                         STATUS   PLATFORMS
kube            kubernetes
  builder-amd64 kubernetes:///kube?deployment=builder-amd64&kubeconfig= running  linux/amd64*, linux/amd64/v2, linux/amd64/v3, linux/386
  builder-arm64 kubernetes:///kube?deployment=builder-arm64&kubeconfig= running  linux/arm64*

You should now be able to build multi-arch images with amd64 and arm64 combined, by specifying those platforms together in your buildx command:

$ docker buildx build --builder=kube --platform=linux/amd64,linux/arm64 -t <user>/<image> --push .

You can repeat the buildx create --append command for as many architectures as you want to support.

Rootless mode

The Kubernetes driver supports rootless mode. For more information on how rootless mode works, and its requirements, see here.

To turn it on in your cluster, you can use the rootless=true driver option:

$ docker buildx create \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,rootless=true

This will create your pods without securityContext.privileged .

Rootless mode requires Kubernetes version 1.19 or later. Using an Ubuntu host kernel is recommended.

Example: Creating a Buildx builder in Kubernetes

This guide shows you how to:

  • Create a namespace for your Buildx resources
  • Create a Kubernetes builder
  • List the available builders
  • Build an image using your Kubernetes builder

Prerequisites:

  • You have an existing Kubernetes cluster. If you don’t already have one, you can follow along by installing minikube.
  • The cluster you want to connect to is accessible via the kubectl command, with the KUBECONFIG environment variable set appropriately if necessary.
  1. Create a buildkit namespace.

    Creating a separate namespace helps keep your Buildx resources separate from other resources in the cluster.

    $ kubectl create namespace buildkit
    namespace/buildkit created
    
  2. Create a new Buildx builder with the Kubernetes driver:

    # Remember to specify the namespace in driver options
    $ docker buildx create \
      --bootstrap \
      --name=kube \
      --driver=kubernetes \
      --driver-opt=namespace=buildkit
    
  3. List available Buildx builders using docker buildx ls

    $ docker buildx ls
    NAME/NODE                DRIVER/ENDPOINT STATUS  PLATFORMS
    kube                     kubernetes
      kube0-6977cdcb75-k9h9m                 running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
    default *                docker
      default                default         running linux/amd64, linux/386
    
  4. Inspect the running pods created by the Buildx driver with kubectl .

    $ kubectl -n buildkit get deployments
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    kube0   1/1     1            1           32s
    
    $ kubectl -n buildkit get pods
    NAME                     READY   STATUS    RESTARTS   AGE
    kube0-6977cdcb75-k9h9m   1/1     Running   0          32s
    

    The buildx driver creates the necessary resources on your cluster in the specified namespace (in this case, buildkit ), while keeping your driver configuration locally.

  5. Use your new builder by including the --builder flag when running buildx commands. For example:

    # Replace <registry> with your Docker username
    # and <image> with the name of the image you want to build
    docker buildx build \
      --builder=kube \
      -t <registry>/<image> \
      --push .
    

That’s it! You’ve now built an image from a Kubernetes pod, using Buildx!

Further reading

For more information on the Kubernetes driver, see the buildx reference.

Remote driver

The Buildx remote driver allows for more complex custom build workloads, allowing you to connect to externally managed BuildKit instances. This is useful for scenarios that require manual management of the BuildKit daemon, or where a BuildKit daemon is exposed from another source.

Synopsis

$ docker buildx create \
  --name remote \
  --driver remote \
  tcp://localhost:1234

The following table describes the available driver-specific options that you can pass to --driver-opt :

Parameter Type Default Description
key String  Sets the TLS client key.
cert String  Absolute path to the TLS client certificate to present to buildkitd .
cacert String  Absolute path to the TLS certificate authority used for validation.
servername String Endpoint hostname. TLS server name used in requests.

Example: Remote BuildKit over Unix sockets

This guide shows you how to create a setup with a BuildKit daemon listening on a Unix socket, and have Buildx connect through it.

  1. Ensure that BuildKit is installed.

    For example, you can launch an instance of buildkitd with:

    $ sudo ./buildkitd --group $(id -gn) --addr unix://$HOME/buildkitd.sock
    

    Alternatively, see here for running buildkitd in rootless mode or here for examples of running it as a systemd service.

  2. Check that you have a Unix socket that you can connect to.

    $ ls -lh /home/user/buildkitd.sock
    srw-rw---- 1 root user 0 May  5 11:04 /home/user/buildkitd.sock
    
  3. Connect Buildx to it using the remote driver:

    $ docker buildx create \
      --name remote-unix \
      --driver remote \
      unix://$HOME/buildkitd.sock
    
  4. List available builders with docker buildx ls . You should then see remote-unix among them:

    $ docker buildx ls
    NAME/NODE           DRIVER/ENDPOINT                        STATUS  PLATFORMS
    remote-unix         remote
      remote-unix0      unix:///home/.../buildkitd.sock        running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
    default *           docker
      default           default                                running linux/amd64, linux/386
    

You can switch to this new builder as the default using docker buildx use remote-unix , or specify it per build using --builder :

$ docker buildx build --builder=remote-unix -t test --load .

Remember that you need to use the --load flag if you want to load the build result into the Docker daemon.

Example: Remote BuildKit in Docker container

This guide will show you how to create a setup similar to the docker-container driver, by manually booting a BuildKit Docker container and connecting to it using the Buildx remote driver. This procedure manually creates a container and accesses it via its exposed port. (You'd probably be better off just using the docker-container driver, which connects to BuildKit through the Docker daemon, but this example is useful for illustration.)

  1. Generate certificates for BuildKit.

    You can use the create-certs.sh script as a starting point. Note that while it’s possible to expose BuildKit over TCP without using TLS, it’s not recommended. Doing so allows arbitrary access to BuildKit without credentials.

  2. With certificates generated in .certs/ , startup the container:

    $ docker run -d --rm \
      --name=remote-buildkitd \
      --privileged \
      -p 1234:1234 \
      -v $PWD/.certs:/etc/buildkit/certs \
      moby/buildkit:latest \
      --addr tcp://0.0.0.0:1234 \
      --tlscacert /etc/buildkit/certs/daemon/ca.pem \
      --tlscert /etc/buildkit/certs/daemon/cert.pem \
      --tlskey /etc/buildkit/certs/daemon/key.pem
    

    This command starts a BuildKit container and exposes the daemon’s port 1234 to localhost.

  3. Connect to this running container using Buildx:

    $ docker buildx create \
      --name remote-container \
      --driver remote \
      --driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem,servername=<TLS_SERVER_NAME> \
      tcp://localhost:1234
    

    Alternatively, use the docker-container:// URL scheme to connect to the BuildKit container without specifying a port:

    $ docker buildx create \
      --name remote-container \
      --driver remote \
      docker-container://remote-container
    

Example: Remote BuildKit in Kubernetes

This guide will show you how to create a setup similar to the kubernetes driver by manually creating a BuildKit Deployment . While the kubernetes driver does this under the hood, it might sometimes be desirable to scale BuildKit manually. Additionally, when executing builds from inside Kubernetes pods, the Buildx builder will need to be recreated from within each pod or copied between them.

  1. Create a Kubernetes deployment of buildkitd , as per the instructions here.

    Following the guide, create certificates for the BuildKit daemon and client using create-certs.sh, and create a deployment of BuildKit pods with a service that connects to them.

  2. Assuming that the service is called buildkitd , create a remote builder in Buildx, ensuring that the listed certificate files are present:

    $ docker buildx create \
      --name remote-kubernetes \
      --driver remote \
      --driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem \
      tcp://buildkitd.default.svc:1234
    

Note that this will only work internally, within the cluster, since the BuildKit setup guide only creates a ClusterIP service. To configure the builder to be accessible remotely, you can use an appropriately configured ingress, which is outside the scope of this guide.

To access the service remotely, use the port forwarding mechanism of kubectl :

$ kubectl port-forward svc/buildkitd 1234:1234

Then you can point the remote driver at tcp://localhost:1234 .
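
For example, reusing the client certificate paths from the earlier step, you could then create a builder against the forwarded port (the builder name is illustrative):

$ docker buildx create \
  --name remote-kubernetes-local \
  --driver remote \
  --driver-opt cacert=${PWD}/.certs/client/ca.pem,cert=${PWD}/.certs/client/cert.pem,key=${PWD}/.certs/client/key.pem \
  tcp://localhost:1234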

Alternatively, you can use the kube-pod:// URL scheme to connect directly to a BuildKit pod through the Kubernetes API. Note that this method only connects to a single pod in the deployment:

$ kubectl get pods --selector=app=buildkitd -o json | jq -r '.items[].metadata.name'
buildkitd-XXXXXXXXXX-xxxxx
$ docker buildx create \
  --name remote-container \
  --driver remote \
  kube-pod://buildkitd-XXXXXXXXXX-xxxxx