BuildKit TOML configuration

The TOML file used to configure the buildkitd daemon settings has a short list of global settings followed by a series of sections for specific areas of daemon configuration.

The file path is /etc/buildkit/buildkitd.toml for rootful mode, and ~/.config/buildkit/buildkitd.toml for rootless mode.
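
If you run the daemon directly, you can also point it at an explicit configuration path rather than the default location. A minimal sketch, assuming buildkitd is on your PATH:

$ buildkitd --config /etc/buildkit/buildkitd.toml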

The following is a complete buildkitd.toml configuration example. Note that some of these settings are only useful for edge cases; apply them with care.

debug = true
# root is where all buildkit state is stored.
root = "/var/lib/buildkit"
# insecure-entitlements allows insecure entitlements, disabled by default.
insecure-entitlements = [ "network.host", "security.insecure" ]

[grpc]
  address = [ "tcp://0.0.0.0:1234" ]
  # debugAddress is the address for attaching Go profiles and debuggers.
  debugAddress = "0.0.0.0:6060"
  uid = 0
  gid = 0
  [grpc.tls]
    cert = "/etc/buildkit/tls.crt"
    key = "/etc/buildkit/tls.key"
    ca = "/etc/buildkit/tlsca.crt"

# config for build history API that stores information about completed build commands
[history]
  # maxAge is the maximum age of history entries to keep, in seconds.
  maxAge = 172800
  # maxEntries is the maximum number of history entries to keep.
  maxEntries = 50

[worker.oci]
  enabled = true
  # platforms manually configures the supported platforms; detected automatically if unset.
  platforms = [ "linux/amd64", "linux/arm64" ]
  snapshotter = "auto" # overlayfs or native, default value is "auto".
  rootless = false # see docs/rootless.md for the details on rootless mode.
  # Whether to run subprocesses in the main PID namespace or not; this is
  # useful for running rootless BuildKit inside a container.
  noProcessSandbox = false
  gc = true
  # gckeepstorage sets storage limit for default gc profile, in MB.
  gckeepstorage = 9000
  # alternate OCI worker binary name (for example, 'crun'); by default either
  # buildkit-runc or runc binary is used
  binary = ""
  # name of the apparmor profile that should be used to constrain build containers.
  # the profile should already be loaded (by a higher level system) before creating a worker.
  apparmor-profile = ""
  # limit the number of parallel build steps that can run at the same time
  max-parallelism = 4
  # maintain a pool of reusable CNI network namespaces to amortize the overhead
  # of allocating and releasing the namespaces
  cniPoolSize = 16

  [worker.oci.labels]
    "foo" = "bar"

  [[worker.oci.gcpolicy]]
    keepBytes = 512000000
    keepDuration = 172800 # in seconds
    filters = [ "type==source.local", "type==exec.cachemount", "type==source.git.checkout"]
  [[worker.oci.gcpolicy]]
    all = true
    keepBytes = 1024000000

[worker.containerd]
  address = "/run/containerd/containerd.sock"
  enabled = true
  platforms = [ "linux/amd64", "linux/arm64" ]
  namespace = "buildkit"
  gc = true
  # gckeepstorage sets storage limit for default gc profile, in MB.
  gckeepstorage = 9000
  # maintain a pool of reusable CNI network namespaces to amortize the overhead
  # of allocating and releasing the namespaces
  cniPoolSize = 16

  [worker.containerd.labels]
    "foo" = "bar"

  [[worker.containerd.gcpolicy]]
    keepBytes = 512000000
    keepDuration = 172800 # in seconds
    filters = [ "type==source.local", "type==exec.cachemount", "type==source.git.checkout"]
  [[worker.containerd.gcpolicy]]
    all = true
    keepBytes = 1024000000

# registry configures a new Docker registry used for cache import or output.
[registry."docker.io"]
  # mirrors can include a path, for cases where a mirror registry requires a /project path rather than just a host:port
  mirrors = ["yourmirror.local:5000", "core.harbor.domain/proxy.docker.io"]
  http = true
  insecure = true
  ca=["/etc/config/myca.pem"]
  [[registry."docker.io".keypair]]
    key="/etc/config/key.pem"
    cert="/etc/config/cert.pem"

# optionally mirror configuration can be done by defining it as a registry.
[registry."yourmirror.local:5000"]
  http = true
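
With a configuration file in place, one way to apply it when using the docker-container driver is to pass the file when creating the builder. A minimal sketch (the builder name mybuilder is arbitrary):

$ docker buildx create --use --name mybuilder \
  --driver docker-container \
  --config /etc/buildkit/buildkitd.toml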

Azure Blob Storage cache

Warning

This cache backend is unreleased. You can use it today by using the moby/buildkit:master image in your Buildx driver.

The azblob cache store uploads your resulting build cache to Azure’s blob storage service.

Note

This cache storage backend requires using a different driver than the default docker driver - see more information on selecting a driver here. To create a new driver (which can act as a simple drop-in replacement):

$ docker buildx create --use --driver=docker-container

Synopsis

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=azblob,name=<cache-image>[,parameters...] \
  --cache-from type=azblob,name=<cache-image>[,parameters...] .

The following table describes the available CSV parameters that you can pass to --cache-to and --cache-from.

| Name              | Option               | Type     | Default | Description                                   |
|-------------------|----------------------|----------|---------|-----------------------------------------------|
| name              | cache-to, cache-from | String   |         | Required. The name of the cache image.        |
| account_url       | cache-to, cache-from | String   |         | Base URL of the storage account.              |
| secret_access_key | cache-to, cache-from | String   |         | Blob storage account key, see authentication. |
| mode              | cache-to             | min, max | min     | Cache layers to export, see cache mode.       |
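
As an illustrative sketch with placeholder values (the account URL shown follows the standard Azure blob endpoint format; substitute your own account and cache image name):

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=azblob,name=<cache-image>,account_url=https://<account>.blob.core.windows.net,mode=max \
  --cache-from type=azblob,name=<cache-image>,account_url=https://<account>.blob.core.windows.net .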

Authentication

The secret_access_key, if left unspecified, is read from environment variables on the BuildKit server following the scheme for the Azure Go SDK. The environment variables are read from the server, not the Buildx client.

Further reading

For an introduction to caching see Optimizing builds with cache.

For more information on the azblob cache backend, see the BuildKit README.

GitHub Actions cache

Warning

The GitHub Actions cache is a beta feature. You can use it today, in current releases of Buildx and BuildKit. However, the interface and behavior are unstable and may change in future releases.

The GitHub Actions cache utilizes the GitHub-provided Actions cache available from within your CI execution environment. This is the recommended cache to use inside your GitHub Actions pipelines, as long as your use case falls within the size and usage limits set by GitHub.

Note

This cache storage backend requires using a different driver than the default docker driver - see more information on selecting a driver here. To create a new driver (which can act as a simple drop-in replacement):

$ docker buildx create --use --driver=docker-container

Synopsis

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=gha[,parameters...] \
  --cache-from type=gha[,parameters...] .

The following table describes the available CSV parameters that you can pass to --cache-to and --cache-from.

| Name  | Option               | Type     | Default                         | Description                             |
|-------|----------------------|----------|---------------------------------|-----------------------------------------|
| url   | cache-to, cache-from | String   | $ACTIONS_CACHE_URL              | Cache server URL, see authentication.   |
| token | cache-to, cache-from | String   | $ACTIONS_RUNTIME_TOKEN          | Access token, see authentication.       |
| scope | cache-to, cache-from | String   | Name of the current Git branch. | Cache scope, see scope.                 |
| mode  | cache-to             | min, max | min                             | Cache layers to export, see cache mode. |

Authentication

If the url or token parameters are left unspecified, the gha cache backend will fall back to using environment variables. If you invoke the docker buildx command manually from an inline step, then the variables must be manually exposed (using crazy-max/ghaction-github-runtime, for example).
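
For illustration, a hedged sketch that passes the two environment variables through explicitly, assuming they have already been exposed to the step:

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=gha,url=$ACTIONS_CACHE_URL,token=$ACTIONS_RUNTIME_TOKEN \
  --cache-from type=gha,url=$ACTIONS_CACHE_URL,token=$ACTIONS_RUNTIME_TOKEN .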

Scope

By default, cache is scoped per Git branch. This ensures a separate cache environment for the main branch and each feature branch. If you build multiple images on the same branch, each build will overwrite the cache of the previous, leaving only the final cache.

To preserve the cache for multiple builds on the same branch, you can manually specify a cache scope name using the scope parameter. In the following example, the cache is set to a combination of the branch name and the image name, to ensure each image gets its own cache:

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image \
  --cache-from type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image .
$ docker buildx build --push -t <registry>/<image2> \
  --cache-to type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image2 \
  --cache-from type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image2 .

GitHub’s cache access restrictions still apply. Only the cache for the current branch, the base branch, and the default branch is accessible by a workflow.

Using docker/build-push-action

When using the docker/build-push-action, the url and token parameters are automatically populated. There’s no need to specify them manually or to include any additional workarounds.

For example:

- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: .
    push: true
    tags: "<registry>/<image>:latest"
    cache-from: type=gha
    cache-to: type=gha,mode=max

Further reading

For an introduction to caching see Optimizing builds with cache.

For more information on the gha cache backend, see the BuildKit README.

For more information about using GitHub Actions with Docker, see Introduction to GitHub Actions

Cache storage backends

To ensure fast builds, BuildKit automatically caches the build result in its own internal cache. Additionally, BuildKit also supports exporting build cache to an external location, making it possible to import in future builds.

An external cache becomes almost essential in CI/CD build environments. Such environments usually have little-to-no persistence between runs, but it’s still important to keep the runtime of image builds as low as possible.

Warning

If you use secrets or credentials inside your build process, ensure you handle them using the dedicated --secret option. Manually managing secrets using COPY or ARG could result in leaked credentials.
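
As a minimal sketch of the --secret flow (the secret id mytoken, the file path, and the fetch-private-deps script are hypothetical): the secret is mounted for a single RUN step at /run/secrets/<id> and is never written into an image layer.

# syntax=docker/dockerfile:1
# Dockerfile: the secret is available at /run/secrets/mytoken for this step only
RUN --mount=type=secret,id=mytoken \
    TOKEN="$(cat /run/secrets/mytoken)" && \
    ./fetch-private-deps "$TOKEN"   # hypothetical script that consumes the token

$ docker buildx build --secret id=mytoken,src=$HOME/.secrets/mytoken .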

Backends

Buildx supports the following cache storage backends:

  • inline : embeds the build cache into the image.

    The inline cache gets pushed to the same location as the main output result. Note that this only works for the image exporter.

  • registry : embeds the build cache into a separate image, and pushes to a dedicated location separate from the main output.

  • local : writes the build cache to a local directory on the filesystem.

  • gha : uploads the build cache to GitHub Actions cache (beta).

  • s3 : uploads the build cache to an AWS S3 bucket (unreleased).

  • azblob : uploads the build cache to Azure Blob Storage (unreleased).

Command syntax

To use any of the cache backends, you first need to specify it on build with the --cache-to option to export the cache to your storage backend of choice. Then, use the --cache-from option to import the cache from the storage backend into the current build. Unlike the local BuildKit cache (which is always enabled), all of the cache storage backends must be explicitly exported to, and explicitly imported from. All cache exporters except for the inline cache require that you select an alternative Buildx driver.

Example buildx command using the registry backend, importing and exporting cache:

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
  --cache-from type=registry,ref=<registry>/<cache-image>[,parameters...] .

Warning

As a general rule, each cache writes to some location. No location can be written to twice without overwriting the previously cached data. If you want to maintain multiple scoped caches (for example, a cache per Git branch), then ensure that you use different locations for exported cache.

Multiple caches

BuildKit currently only supports a single cache exporter. But you can import from as many remote caches as you like. For example, a common pattern is to use the cache of both the current branch and the main branch. The following example shows importing cache from multiple locations using the registry cache backend:

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>:<branch> \
  --cache-from type=registry,ref=<registry>/<cache-image>:<branch> \
  --cache-from type=registry,ref=<registry>/<cache-image>:main .

Configuration options

This section describes some configuration options available when generating cache exports. The options described here are common for at least two or more backend types. Additionally, the different backend types support specific parameters as well. See the detailed page about each backend type for more information about which configuration parameters apply.

The common parameters described here are:

  • Cache mode
  • Cache compression
  • OCI media type

Cache mode

When generating a cache output, the --cache-to argument accepts a mode option for defining which layers to include in the exported cache. This is supported by all cache backends except for the inline cache.

Mode can be set to one of two options: mode=min or mode=max. For example, to build the cache with mode=max using the registry backend:

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>,mode=max \
  --cache-from type=registry,ref=<registry>/<cache-image> .

This option is only set when exporting a cache, using --cache-to. When importing a cache (--cache-from), the relevant parameters are automatically detected.

In min cache mode (the default), only layers that are exported into the resulting image are cached, while in max cache mode, all layers are cached, even those of intermediate steps.

While min cache is typically smaller (which speeds up import/export times, and reduces storage costs), max cache is more likely to get more cache hits. Depending on the complexity and location of your build, you should experiment with both parameters to find the results that work best for you.

Cache compression

The cache compression options are the same as the exporter compression options. This is supported by the local and registry cache backends.

For example, to compress the registry cache with zstd compression:

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>,compression=zstd \
  --cache-from type=registry,ref=<registry>/<cache-image> .
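
The compression level and forced re-compression can be tuned as well; a hedged sketch combining them (the level value here is illustrative):

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>,compression=zstd,compression-level=3,force-compression=true \
  --cache-from type=registry,ref=<registry>/<cache-image> .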

OCI media types

The cache OCI options are the same as the exporter OCI options. These are supported by the local and registry cache backends.

For example, to export OCI media type cache, use the oci-mediatypes property:

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>,oci-mediatypes=true \
  --cache-from type=registry,ref=<registry>/<cache-image> .

This property is only meaningful with the --cache-to flag. When fetching cache, BuildKit will auto-detect the correct media types to use.

Inline cache

The inline cache storage backend is the simplest way to get an external cache, and is easy to get started with if you’re already building and pushing an image. However, it doesn’t scale to multi-stage builds as well as the other drivers do. It also doesn’t offer separation between your output artifacts and your cache output. This means that if you’re using a particularly complex build flow, or not exporting your images directly to a registry, then you may want to consider the registry cache.

Synopsis

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=inline \
  --cache-from type=registry,ref=<registry>/<image> .

No additional parameters are supported for the inline cache.

To export cache using inline storage, pass type=inline to the --cache-to option:

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=inline .

Alternatively, you can also export inline cache by setting the build argument BUILDKIT_INLINE_CACHE=1, instead of using the --cache-to flag:

$ docker buildx build --push -t <registry>/<image> \
  --build-arg BUILDKIT_INLINE_CACHE=1 .

To import the resulting cache on a future build, pass type=registry to --cache-from, which lets you extract the cache from inside a Docker image in the specified registry:

$ docker buildx build --push -t <registry>/<image> \
  --cache-from type=registry,ref=<registry>/<image> .

Further reading

For an introduction to caching see Optimizing builds with cache.

For more information on the inline cache backend, see the BuildKit README.

Local cache

The local cache store is a simple cache option that stores your cache as files in a directory on your filesystem, using an OCI image layout for the underlying directory structure. Local cache is a good choice if you’re just testing, or if you want the flexibility to self-manage a shared storage solution.

Note

This cache storage backend requires using a different driver than the default docker driver - see more information on selecting a driver here. To create a new driver (which can act as a simple drop-in replacement):

$ docker buildx create --use --driver=docker-container

Synopsis

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=local,dest=path/to/local/dir[,parameters...] \
  --cache-from type=local,src=path/to/local/dir .

The following table describes the available CSV parameters that you can pass to --cache-to and --cache-from.

| Name              | Option     | Type                | Default | Description                                                     |
|-------------------|------------|---------------------|---------|-----------------------------------------------------------------|
| src               | cache-from | String              |         | Path of the local directory where cache gets imported from.     |
| digest            | cache-from | String              |         | Digest of manifest to import, see cache versioning.             |
| dest              | cache-to   | String              |         | Path of the local directory where cache gets exported to.       |
| mode              | cache-to   | min, max            | min     | Cache layers to export, see cache mode.                         |
| oci-mediatypes    | cache-to   | true, false         | true    | Use OCI media types in exported manifests, see OCI media types. |
| compression       | cache-to   | gzip, estargz, zstd | gzip    | Compression type, see cache compression.                        |
| compression-level | cache-to   | 0..22               |         | Compression level, see cache compression.                       |
| force-compression | cache-to   | true, false         | false   | Forcibly apply compression, see cache compression.              |

If the src cache doesn’t exist, then the cache import step will fail, but the build will continue.
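
As an end-to-end sketch (the /tmp/buildx-cache path is arbitrary): on the very first run the import step finds no cache and the build simply continues, while the second run imports what the first one exported.

$ mkdir -p /tmp/buildx-cache
$ docker buildx build -t <registry>/<image> \
  --cache-to type=local,dest=/tmp/buildx-cache,mode=max \
  --cache-from type=local,src=/tmp/buildx-cache .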

Cache versioning

This section describes how versioning works for caches on a local filesystem, and how you can use the digest parameter to use older versions of cache.

If you inspect the cache directory manually, you can see the resulting OCI image layout:

$ ls cache
blobs  index.json  ingest
$ cat cache/index.json | jq
{
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.index.v1+json",
      "digest": "sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707",
      "size": 1560,
      "annotations": {
        "org.opencontainers.image.ref.name": "latest"
      }
    }
  ]
}

Like other cache types, the local cache gets replaced on export, by overwriting the contents of the index.json file. However, previous caches will still be available in the blobs directory. These old caches are addressable by digest and are kept indefinitely, so the size of the local cache will continue to grow (see moby/buildkit#1896 for more information).

When importing cache using --cache-from, you can specify the digest parameter to force loading an older version of the cache, for example:

$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=local,dest=path/to/local/dir \
  --cache-from type=local,src=path/to/local/dir,digest=sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707 .

Further reading

For an introduction to caching see Optimizing builds with cache.

For more information on the local cache backend, see the BuildKit README.
