To ensure fast builds, BuildKit automatically caches the build result in its own internal cache. Additionally, BuildKit also supports exporting build cache to an external location, making it possible to import in future builds.
An external cache becomes almost essential in CI/CD build environments. Such environments usually have little-to-no persistence between runs, but it's still important to keep the runtime of image builds as low as possible.
Warning
If you use secrets or credentials inside your build process, ensure you manipulate them using the dedicated `--secret` option. Manually managing secrets using `COPY` or `ARG` could result in leaked credentials.
Buildx supports the following cache storage backends:
- `inline`: embeds the build cache into the image. The inline cache gets pushed to the same location as the main output result. Note that this only works for the `image` exporter.
- `registry`: embeds the build cache into a separate image, and pushes to a dedicated location separate from the main output.
- `local`: writes the build cache to a local directory on the filesystem.
- `gha`: uploads the build cache to GitHub Actions cache (beta).
- `s3`: uploads the build cache to an AWS S3 bucket (unreleased).
- `azblob`: uploads the build cache to Azure Blob Storage (unreleased).
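As a sketch of how one of these backends is wired up, the `gha` backend takes the same `--cache-to`/`--cache-from` shape as the others. It only works inside a GitHub Actions run, where BuildKit can reach the Actions cache API; `scope` is an optional parameter and `<registry>/<image>` is a placeholder:

```shell
# Hypothetical CI step: export and import build cache via the GitHub
# Actions cache service (beta). Must run inside a GitHub Actions job.
docker buildx build --push -t <registry>/<image> \
  --cache-to type=gha,scope=build-cache \
  --cache-from type=gha,scope=build-cache .
```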
To use any of the cache backends, you first need to specify it on build with the `--cache-to` option to export the cache to your storage backend of choice. Then, use the `--cache-from` option to import the cache from the storage backend into the current build. Unlike the local BuildKit cache (which is always enabled), all of the cache storage backends must be explicitly exported to, and explicitly imported from. All cache exporters except for the `inline` cache require that you select an alternative Buildx driver.
Example `buildx` command using the `registry` backend to import and export cache:
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
--cache-from type=registry,ref=<registry>/<cache-image>[,parameters...] .
Warning
As a general rule, each cache writes to some location, and no location can be written to twice without overwriting the previously cached data. If you want to maintain multiple scoped caches (for example, a cache per Git branch), then ensure that you use different locations for exported cache.
BuildKit currently only supports a single cache exporter. But you can import from as many remote caches as you like. For example, a common pattern is to use the cache of both the current branch and the main branch. The following example shows importing cache from multiple locations using the registry cache backend:
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>:<branch> \
--cache-from type=registry,ref=<registry>/<cache-image>:<branch> \
--cache-from type=registry,ref=<registry>/<cache-image>:main .
This section describes some configuration options available when generating cache exports. The options described here are common for at least two or more backend types. Additionally, the different backend types support specific parameters as well. See the detailed page about each backend type for more information about which configuration parameters apply.
The common parameters described here are cache mode, OCI media types, and cache compression.
When generating a cache output, the `--cache-to` argument accepts a `mode` option for defining which layers to include in the exported cache. This is supported by all cache backends except for the `inline` cache.

Mode can be set to either of two options: `mode=min` or `mode=max`. For example, to build the cache with `mode=max` using the registry backend:
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,mode=max \
--cache-from type=registry,ref=<registry>/<cache-image> .
This option is only set when exporting a cache, using `--cache-to`. When importing a cache (`--cache-from`) the relevant parameters are automatically detected.

In `min` cache mode (the default), only layers that are exported into the resulting image are cached, while in `max` cache mode, all layers are cached, even those of intermediate steps.

While `min` cache is typically smaller (which speeds up import/export times, and reduces storage costs), `max` cache is more likely to get more cache hits. Depending on the complexity and location of your build, you should experiment with both parameters to find the results that work best for you.
The cache compression options are the same as the exporter compression options. This is supported by the `local` and `registry` cache backends.

For example, to compress the `registry` cache with `zstd` compression:
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,compression=zstd \
--cache-from type=registry,ref=<registry>/<cache-image> .
The cache OCI options are the same as the exporter OCI options. These are supported by the `local` and `registry` cache backends.

For example, to export OCI media type cache, use the `oci-mediatypes` property:
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,oci-mediatypes=true \
--cache-from type=registry,ref=<registry>/<cache-image> .
This property is only meaningful with the `--cache-to` flag. When fetching cache, BuildKit will auto-detect the correct media types to use.
The `inline` cache storage backend is the simplest way to get an external cache and is easy to get started using if you're already building and pushing an image. However, it doesn't scale to multi-stage builds as well as the other drivers do. It also doesn't offer separation between your output artifacts and your cache output. This means that if you're using a particularly complex build flow, or not exporting your images directly to a registry, then you may want to consider the registry cache.
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=inline \
--cache-from type=registry,ref=<registry>/<image> .
No additional parameters are supported for the `inline` cache.

To export cache using `inline` storage, pass `type=inline` to the `--cache-to` option:
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=inline .
Alternatively, you can also export inline cache by setting the build argument `BUILDKIT_INLINE_CACHE=1`, instead of using the `--cache-to` flag:
$ docker buildx build --push -t <registry>/<image> \
--build-arg BUILDKIT_INLINE_CACHE=1 .
To import the resulting cache on a future build, pass `type=registry` to `--cache-from` which lets you extract the cache from inside a Docker image in the specified registry:
$ docker buildx build --push -t <registry>/<image> \
--cache-from type=registry,ref=<registry>/<image> .
For an introduction to caching see Optimizing builds with cache.
For more information on the `inline` cache backend, see the BuildKit README.
The `local` cache store is a simple cache option that stores your cache as files in a directory on your filesystem, using an OCI image layout for the underlying directory structure. Local cache is a good choice if you're just testing, or if you want the flexibility to self-manage a shared storage solution.
Note
This cache storage backend requires using a different driver than the default `docker` driver - see more information on selecting a driver here. To create a new driver (which can act as a simple drop-in replacement):

$ docker buildx create --use --driver=docker-container
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=local,dest=path/to/local/dir[,parameters...] \
--cache-from type=local,src=path/to/local/dir .
The following table describes the available CSV parameters that you can pass to `--cache-to` and `--cache-from`.
| Name | Option | Type | Default | Description |
|---|---|---|---|---|
| `src` | `cache-from` | String |  | Path of the local directory where cache gets imported from. |
| `digest` | `cache-from` | String |  | Digest of manifest to import, see cache versioning. |
| `dest` | `cache-to` | String |  | Path of the local directory where cache gets exported to. |
| `mode` | `cache-to` | `min`, `max` | `min` | Cache layers to export, see cache mode. |
| `oci-mediatypes` | `cache-to` | `true`, `false` | `true` | Use OCI media types in exported manifests, see OCI media types. |
| `compression` | `cache-to` | `gzip`, `estargz`, `zstd` | `gzip` | Compression type, see cache compression. |
| `compression-level` | `cache-to` | `0..22` |  | Compression level, see cache compression. |
| `force-compression` | `cache-to` | `true`, `false` | `false` | Forcibly apply compression, see cache compression. |
If the `src` cache doesn't exist, then the cache import step will fail, but the build will continue.
This section describes how versioning works for caches on a local filesystem, and how you can use the `digest` parameter to use older versions of cache.
If you inspect the cache directory manually, you can see the resulting OCI image layout:
$ ls cache
blobs index.json ingest
$ cat cache/index.json | jq
{
"schemaVersion": 2,
"manifests": [
{
"mediaType": "application/vnd.oci.image.index.v1+json",
"digest": "sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707",
"size": 1560,
"annotations": {
"org.opencontainers.image.ref.name": "latest"
}
}
]
}
Like other cache types, local cache gets replaced on export, by replacing the contents of the `index.json` file. However, previous caches will still be available in the `blobs` directory. These old caches are addressable by digest, and kept indefinitely. Therefore, the size of the local cache will continue to grow (see moby/buildkit#1896 for more information).
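Until that issue is resolved, one workaround (a sketch, not an official feature) is to periodically discard the cache directory in your CI job once it grows beyond a size budget:

```shell
# Sketch: reset the local cache directory when it exceeds roughly 5 GB.
# "path/to/local/dir" is a placeholder matching the dest/src value used
# with --cache-to/--cache-from.
CACHE_DIR=path/to/local/dir
MAX_KB=$((5 * 1024 * 1024))  # 5 GB expressed in kilobytes

if [ -d "$CACHE_DIR" ]; then
  used_kb=$(du -sk "$CACHE_DIR" | cut -f1)
  if [ "$used_kb" -gt "$MAX_KB" ]; then
    rm -rf "$CACHE_DIR"  # next build starts with a cold cache
  fi
fi
```

A coarser alternative is simply deleting the directory on a schedule; either way, the next build re-exports a fresh cache.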
When importing cache using `--cache-from`, you can specify the `digest` parameter to force loading an older version of the cache, for example:
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=local,dest=path/to/local/dir \
--cache-from type=local,src=path/to/local/dir,digest=sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707 .
For an introduction to caching see Optimizing builds with cache.
For more information on the `local` cache backend, see the BuildKit README.
The `registry` cache storage can be thought of as an extension to the `inline` cache. Unlike the `inline` cache, the `registry` cache is entirely separate from the image, which allows for more flexible usage - `registry`-backed cache can do everything that the inline cache can do, and more:

- It allows for separating the cache and resulting image artifacts so that you can distribute your final image without the cache inside.
- It can efficiently cache multi-stage builds in `max` mode, instead of only the final stage.
- It works with other exporters for more flexibility, instead of only the `image` exporter.
Note
This cache storage backend requires using a different driver than the default `docker` driver - see more information on selecting a driver here. To create a new driver (which can act as a simple drop-in replacement):

$ docker buildx create --use --driver=docker-container
Unlike the simpler `inline` cache, the `registry` cache supports several configuration parameters:
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
--cache-from type=registry,ref=<registry>/<cache-image> .
The following table describes the available CSV parameters that you can pass to `--cache-to` and `--cache-from`.
| Name | Option | Type | Default | Description |
|---|---|---|---|---|
| `ref` | `cache-to`, `cache-from` | String |  | Full name of the cache image to import. |
| `mode` | `cache-to` | `min`, `max` | `min` | Cache layers to export, see cache mode. |
| `oci-mediatypes` | `cache-to` | `true`, `false` | `true` | Use OCI media types in exported manifests, see OCI media types. |
| `compression` | `cache-to` | `gzip`, `estargz`, `zstd` | `gzip` | Compression type, see cache compression. |
| `compression-level` | `cache-to` | `0..22` |  | Compression level, see cache compression. |
| `force-compression` | `cache-to` | `true`, `false` | `false` | Forcibly apply compression, see cache compression. |
You can choose any valid value for `ref`, as long as it's not the same as the target location that you push your image to. You might choose different tags (e.g. `foo/bar:latest` and `foo/bar:build-cache`), separate image names (e.g. `foo/bar` and `foo/bar-cache`), or even different repositories (e.g. `docker.io/foo/bar` and `ghcr.io/foo/bar`). It's up to you to decide the strategy that you want to use for separating your image from your cache images.
If the `--cache-from` target doesn't exist, then the cache import step will fail, but the build will continue.
For an introduction to caching see Optimizing builds with cache.
For more information on the `registry` cache backend, see the BuildKit README.
Warning
This cache backend is unreleased. You can use it today, by using the `moby/buildkit:master` image in your Buildx driver.
The `s3` cache storage uploads your resulting build cache to the Amazon S3 object storage service, into a specified bucket.
Note
This cache storage backend requires using a different driver than the default `docker` driver - see more information on selecting a driver here. To create a new driver (which can act as a simple drop-in replacement):

$ docker buildx create --use --driver=docker-container
$ docker buildx build --push -t <user>/<image> \
--cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>[,parameters...] \
--cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image> .
The following table describes the available CSV parameters that you can pass to `--cache-to` and `--cache-from`.
| Name | Option | Type | Default | Description |
|---|---|---|---|---|
| `region` | `cache-to`, `cache-from` | String |  | Geographic location. |
| `bucket` | `cache-to`, `cache-from` | String |  | Name of the S3 bucket used for caching. |
| `name` | `cache-to`, `cache-from` | String |  | Name of the cache image. |
| `access_key_id` | `cache-to`, `cache-from` | String |  | See authentication. |
| `secret_access_key` | `cache-to`, `cache-from` | String |  | See authentication. |
| `session_token` | `cache-to`, `cache-from` | String |  | See authentication. |
| `mode` | `cache-to` | `min`, `max` | `min` | Cache layers to export, see cache mode. |
`access_key_id`, `secret_access_key`, and `session_token`, if left unspecified, are read from environment variables on the BuildKit server following the scheme for the AWS Go SDK. The environment variables are read from the server, not the Buildx client.
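For example, instead of embedding credentials in the CSV parameters, you can export the standard AWS SDK environment variables in the environment where the BuildKit daemon runs (the variable names below follow the AWS Go SDK convention; the bracketed values are placeholders):

```shell
# Supply S3 credentials to BuildKit via standard AWS SDK environment
# variables rather than access_key_id/secret_access_key CSV parameters.
export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>
export AWS_SESSION_TOKEN=<session-token>  # only needed for temporary credentials

docker buildx build --push -t <user>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image> \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image> .
```

Note that this only works when the variables are visible to the BuildKit server process, not merely to your local shell running Buildx.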
For an introduction to caching see Optimizing builds with cache.
For more information on the `s3` cache backend, see the BuildKit README.
While the `docker builder prune` and `docker buildx prune` commands clear the build cache immediately when invoked, garbage collection runs periodically and follows an ordered list of prune policies.
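If you need to reclaim space on demand rather than waiting for a garbage collection pass, you can run prune manually. Flag names have changed across Buildx releases, so check `docker buildx prune --help` on your version; the following is a sketch using flags from recent releases:

```shell
# Remove all build cache for the current builder without prompting.
docker buildx prune --all --force

# Or trim the cache down to roughly 10 GB instead of deleting everything.
docker buildx prune --keep-storage 10GB --force
```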
Garbage collection runs in the BuildKit daemon. The daemon clears the build cache when the cache size becomes too big, or when the cache age expires. The following sections describe how you can configure both the size and age parameters by defining garbage collection policies.
Depending on the driver used by your builder instance, the garbage collection will use a different configuration file.
If you're using the `docker` driver, garbage collection can be configured in the Docker Daemon configuration file:
{
"builder": {
"gc": {
"enabled": true,
"defaultKeepStorage": "10GB",
"policy": [
{"keepStorage": "10GB", "filter": ["unused-for=2200h"]},
{"keepStorage": "50GB", "filter": ["unused-for=3300h"]},
{"keepStorage": "100GB", "all": true}
]
}
}
}
For other drivers, garbage collection can be configured using the BuildKit configuration file:
[worker.oci]
gc = true
gckeepstorage = 10000
[[worker.oci.gcpolicy]]
keepBytes = 512000000
keepDuration = 172800
filters = [ "type==source.local", "type==exec.cachemount", "type==source.git.checkout"]
[[worker.oci.gcpolicy]]
all = true
keepBytes = 1024000000
Default garbage collection policies are applied to all builders if not already set:
GC Policy rule#0:
All: false
Filters: type==source.local,type==exec.cachemount,type==source.git.checkout
Keep Duration: 48h0m0s
Keep Bytes: 512MB
GC Policy rule#1:
All: false
Keep Duration: 1440h0m0s
Keep Bytes: 26GB
GC Policy rule#2:
All: false
Keep Bytes: 26GB
GC Policy rule#3:
All: true
Keep Bytes: 26GB
- `rule#0`: if build cache uses more than 512MB, delete the most easily reproducible data after it has not been used for 2 days.
- `rule#1`: remove any data not used for 60 days.
- `rule#2`: keep the unshared build cache under cap.
- `rule#3`: if previous policies were insufficient, start deleting internal data to keep build cache under cap.
Note
"Keep bytes" defaults to 10% of the size of the disk. If the disk size cannot be determined, it defaults to 2GB.