The local cache store is a simple cache option that stores your cache as files in a directory on your filesystem, using an OCI image layout for the underlying directory structure. Local cache is a good choice if you're just testing, or if you want the flexibility to self-manage a shared storage solution.
Note
This cache storage backend requires using a different driver than the default docker driver. See more information on selecting a driver here.
To create a new driver (which can act as a simple drop-in replacement):
$ docker buildx create --use --driver=docker-container
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=local,dest=path/to/local/dir[,parameters...] \
--cache-from type=local,src=path/to/local/dir .
The following table describes the available CSV parameters that you can pass to --cache-to and --cache-from.
Name | Option | Type | Default | Description |
---|---|---|---|---|
src | cache-from | String |  | Path of the local directory where cache gets imported from. |
digest | cache-from | String |  | Digest of manifest to import, see cache versioning. |
dest | cache-to | String |  | Path of the local directory where cache gets exported to. |
mode | cache-to | min, max | min | Cache layers to export, see cache mode. |
oci-mediatypes | cache-to | true, false | true | Use OCI media types in exported manifests, see OCI media types. |
compression | cache-to | gzip, estargz, zstd | gzip | Compression type, see cache compression. |
compression-level | cache-to | 0..22 |  | Compression level, see cache compression. |
force-compression | cache-to | true, false | false | Forcibly apply compression, see cache compression. |
If the src cache doesn't exist, then the cache import step will fail, but the build will continue.
This section describes how versioning works for caches on a local filesystem, and how you can use the digest parameter to use older versions of cache.
If you inspect the cache directory manually, you can see the resulting OCI image layout:
$ ls cache
blobs index.json ingest
$ cat cache/index.json | jq
{
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.index.v1+json",
      "digest": "sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707",
      "size": 1560,
      "annotations": {
        "org.opencontainers.image.ref.name": "latest"
      }
    }
  ]
}
Like other cache types, local cache gets replaced on export, by replacing the contents of the index.json file. However, previous caches will still be available in the blobs directory. These old caches are addressable by digest, and kept indefinitely. Therefore, the size of the local cache will continue to grow (see moby/buildkit#1896 for more information).
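To keep an eye on that growth, you can inspect the cache directory yourself. A minimal sketch, assuming the cache lives in path/to/local/dir as in the earlier examples: the first command reports the total size of the stored blobs, and the second lists the manifest digests recorded in index.json.
$ du -sh path/to/local/dir/blobs
$ jq -r '.manifests[].digest' path/to/local/dir/index.json
Deleting the directory and letting the next build re-export the cache is one simple way to reclaim the space.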
When importing cache using --cache-from, you can specify the digest parameter to force loading an older version of the cache, for example:
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=local,dest=path/to/local/dir \
--cache-from type=local,src=path/to/local/dir,digest=sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707 .
For an introduction to caching see Optimizing builds with cache.
For more information on the local cache backend, see the BuildKit README.
The registry cache storage can be thought of as an extension to the inline cache. Unlike the inline cache, the registry cache is entirely separate from the image, which allows for more flexible usage - registry-backed cache can do everything that the inline cache can do, and more:
- It can cache multi-stage builds in max mode, instead of only the final stage.
- It works with other exporters, instead of only the image exporter.
Note
This cache storage backend requires using a different driver than the default docker driver. See more information on selecting a driver here.
To create a new driver (which can act as a simple drop-in replacement):
$ docker buildx create --use --driver=docker-container
Unlike the simpler inline cache, the registry cache supports several configuration parameters:
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
--cache-from type=registry,ref=<registry>/<cache-image> .
The following table describes the available CSV parameters that you can pass to --cache-to and --cache-from.
Name | Option | Type | Default | Description |
---|---|---|---|---|
ref | cache-to, cache-from | String |  | Full name of the cache image to import. |
dest | cache-to | String |  | Path of the local directory where cache gets exported to. |
mode | cache-to | min, max | min | Cache layers to export, see cache mode. |
oci-mediatypes | cache-to | true, false | true | Use OCI media types in exported manifests, see OCI media types. |
compression | cache-to | gzip, estargz, zstd | gzip | Compression type, see cache compression. |
compression-level | cache-to | 0..22 |  | Compression level, see cache compression. |
force-compression | cache-to | true, false | false | Forcibly apply compression, see cache compression. |
You can choose any valid value for ref, as long as it's not the same as the target location that you push your image to. You might choose different tags (e.g. foo/bar:latest and foo/bar:build-cache), separate image names (e.g. foo/bar and foo/bar-cache), or even different repositories (e.g. docker.io/foo/bar and ghcr.io/foo/bar). It's up to you to decide the strategy that you want to use for separating your image from your cache images.
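For example, a common approach is to keep the cache in a dedicated tag next to the image. The following is only a sketch, assuming a hypothetical docker.io/foo/bar repository and the build-cache tag mentioned above:
$ docker buildx build --push -t docker.io/foo/bar:latest \
  --cache-to type=registry,ref=docker.io/foo/bar:build-cache,mode=max \
  --cache-from type=registry,ref=docker.io/foo/bar:build-cache .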
If the --cache-from target doesn't exist, then the cache import step will fail, but the build will continue.
For an introduction to caching see Optimizing builds with cache.
For more information on the registry cache backend, see the BuildKit README.
Warning
This cache backend is unreleased. You can use it today, by using the moby/buildkit:master image in your Buildx driver.
The s3 cache storage uploads your resulting build cache to the Amazon S3 file storage service, into a specified bucket.
Note
This cache storage backend requires using a different driver than the default docker driver. See more information on selecting a driver here.
To create a new driver (which can act as a simple drop-in replacement):
$ docker buildx create --use --driver=docker-container
$ docker buildx build --push -t <user>/<image> \
--cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>[,parameters...] \
--cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image> .
The following table describes the available CSV parameters that you can pass to --cache-to and --cache-from.
Name | Option | Type | Default | Description |
---|---|---|---|---|
region | cache-to, cache-from | String |  | Geographic location. |
bucket | cache-to, cache-from | String |  | Name of the S3 bucket used for caching. |
name | cache-to, cache-from | String |  | Name of the cache image. |
access_key_id | cache-to, cache-from | String |  | See authentication. |
secret_access_key | cache-to, cache-from | String |  | See authentication. |
session_token | cache-to, cache-from | String |  | See authentication. |
mode | cache-to | min, max | min | Cache layers to export, see cache mode. |
access_key_id, secret_access_key, and session_token, if left unspecified, are read from environment variables on the BuildKit server following the scheme for the AWS Go SDK.
The environment variables are read from the server, not the Buildx client.
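If you prefer to pass the credentials explicitly, you can set them as CSV parameters instead. This is only a sketch: the region, bucket, and cache name below are placeholders, and in practice the values would come from a secrets store rather than being typed on the command line.
$ docker buildx build --push -t <user>/<image> \
  --cache-to type=s3,region=eu-west-1,bucket=my-cache-bucket,name=my-image,access_key_id=$AWS_ACCESS_KEY_ID,secret_access_key=$AWS_SECRET_ACCESS_KEY \
  --cache-from type=s3,region=eu-west-1,bucket=my-cache-bucket,name=my-image .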
For an introduction to caching see Optimizing builds with cache.
For more information on the s3 cache backend, see the BuildKit README.
While the docker builder prune and docker buildx prune commands run once, when invoked, garbage collection runs periodically and follows an ordered list of prune policies.
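For comparison, a one-off prune with roughly the same intent as a garbage collection policy might look like the following; the threshold values here are only illustrative:
$ docker builder prune --filter until=48h --keep-storage 10GB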
Garbage collection runs in the BuildKit daemon. The daemon clears the build cache when the cache size becomes too big, or when the cache age expires. The following sections describe how you can configure both the size and age parameters by defining garbage collection policies.
Depending on the driver used by your builder instance, garbage collection will use a different configuration file.
If you're using the docker driver, garbage collection can be configured in the Docker daemon configuration file:
{
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "10GB",
      "policy": [
        {"keepStorage": "10GB", "filter": ["unused-for=2200h"]},
        {"keepStorage": "50GB", "filter": ["unused-for=3300h"]},
        {"keepStorage": "100GB", "all": true}
      ]
    }
  }
}
For other drivers, garbage collection can be configured using the BuildKit configuration file:
[worker.oci]
gc = true
gckeepstorage = 10000
[[worker.oci.gcpolicy]]
keepBytes = 512000000
keepDuration = 172800
filters = [ "type==source.local", "type==exec.cachemount", "type==source.git.checkout"]
[[worker.oci.gcpolicy]]
all = true
keepBytes = 1024000000
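To have a builder pick up such a file, you can point at it when creating the builder. A minimal sketch, assuming the configuration was saved as /etc/buildkitd.toml and that your Buildx version supports the --config flag on docker buildx create:
$ docker buildx create --use --driver=docker-container --config /etc/buildkitd.toml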
Default garbage collection policies are applied to all builders if not already set:
GC Policy rule#0:
All: false
Filters: type==source.local,type==exec.cachemount,type==source.git.checkout
Keep Duration: 48h0m0s
Keep Bytes: 512MB
GC Policy rule#1:
All: false
Keep Duration: 1440h0m0s
Keep Bytes: 26GB
GC Policy rule#2:
All: false
Keep Bytes: 26GB
GC Policy rule#3:
All: true
Keep Bytes: 26GB
- rule#0: if build cache uses more than 512MB, delete the most easily reproducible data after it has not been used for 2 days.
- rule#1: remove any data not used for 60 days.
- rule#2: keep the unshared build cache under cap.
- rule#3: if previous policies were insufficient, start deleting internal data to keep build cache under cap.
Note
"Keep bytes" defaults to 10% of the size of the disk. If the disk size cannot be determined, it defaults to 2GB.
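To see how much space the build cache currently uses on your builder, and which records the policies would act on, you can query the cache directly:
$ docker buildx du --verbose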
You will likely find yourself rebuilding the same Docker image over and over again, whether it's for the next release of your software or locally during development. Because building images is a common task, Docker provides several tools that speed up builds.
The most important feature for improving build speeds is Docker's build cache.
Understanding Docker's build cache helps you write better Dockerfiles that result in faster builds.
Have a look at the following example, which shows a simple Dockerfile for a program written in C.
# syntax=docker/dockerfile:1
FROM ubuntu:latest
RUN apt-get update && apt-get install -y build-essential
COPY main.c Makefile /src/
WORKDIR /src/
RUN make build
Each instruction in this Dockerfile translates (roughly) to a layer in your final image. You can think of image layers as a stack, with each layer adding more content on top of the layers that came before it:
Whenever a layer changes, that layer will need to be re-built. For example, suppose you make a change to your program in the main.c file. After this change, the COPY command will have to run again in order for those changes to appear in the image. In other words, Docker will invalidate the cache for this layer.
If a layer changes, all other layers that come after it are also affected. When the layer with the COPY command gets invalidated, all layers that follow will need to run again, too:
And that's the Docker build cache in a nutshell. Once a layer changes, then all downstream layers need to be rebuilt as well. Even if they wouldn't build anything differently, they still need to re-run.
Note
Suppose you have a RUN apt-get update && apt-get upgrade -y step in your Dockerfile to upgrade all the software packages in your Debian-based image to the latest version. This doesn't mean that the images you build are always up to date. Rebuilding the image on the same host one week later will still get you the same packages as before. The only way to force a rebuild is by making sure that a layer before it has changed, or by clearing the build cache using docker builder prune.
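If you do need fresh packages, one heavy-handed but reliable option is to bypass the cache for a single build, or to clear the cache first; the myimage tag below is only a placeholder:
$ docker build --no-cache -t myimage .
$ docker builder prune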
Now that you understand how the cache works, you can begin to use the cache to your advantage. While the cache will automatically work on any docker build that you run, you can often refactor your Dockerfile to get even better performance. These optimizations can save precious seconds (or even minutes) off of your builds.
Putting the commands in your Dockerfile into a logical order is a great place to start. Because a change causes a rebuild for steps that follow, try to make expensive steps appear near the beginning of the Dockerfile. Steps that change often should appear near the end of the Dockerfile, to avoid triggering rebuilds of layers that haven't changed.
Consider the following example. A Dockerfile snippet that runs a JavaScript build from the source files in the current directory:
# syntax=docker/dockerfile:1
FROM node
WORKDIR /app
COPY . . # Copy over all files in the current directory
RUN npm install # Install dependencies
RUN npm build # Run build
This Dockerfile is rather inefficient. Updating any file causes a reinstall of all dependencies every time you build the Docker image, even if the dependencies didn't change since last time!
Instead, the COPY command can be split in two. First, copy over the package management files (in this case, package.json and yarn.lock). Then, install the dependencies. Finally, copy over the project source code, which is subject to frequent change.
# syntax=docker/dockerfile:1
FROM node
WORKDIR /app
COPY package.json yarn.lock . # Copy package management files
RUN npm install # Install dependencies
COPY . . # Copy over project files
RUN npm build # Run build
By installing dependencies in earlier layers of the Dockerfile, there is no need to rebuild those layers when a project file has changed.
One of the best things you can do to speed up image building is to just put less stuff into your build. Fewer parts mean the cache stays smaller, but also that there should be fewer things that could be out of date and need rebuilding.
To get started, here are a few tips and tricks:
Be considerate of what files you add to the image.
Running a command like COPY . /src will COPY your entire build context into the image. If you've got logs, package manager artifacts, or even previous build results in your current directory, those will also be copied over. This could make your image larger than it needs to be, especially as those files are usually not useful.
Avoid adding unnecessary files to your builds by explicitly stating the files or directories you intend to copy over. For example, you might only want to add a Makefile and your src directory to the image filesystem. In that case, consider adding this to your Dockerfile:
COPY ./src ./Makefile /src
As opposed to this:
COPY . /src
You can also create a .dockerignore file, and use that to specify which files and directories to exclude from the build context.
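For instance, a small .dockerignore at the root of the build context might exclude common clutter. The entries below are only examples; tailor them to your project:
$ cat > .dockerignore <<'EOF'
node_modules
*.log
.git
dist
EOF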
Most Docker image builds involve using a package manager to help install software into the image. Debian has apt, Alpine has apk, Python has pip, NodeJS has npm, and so on.
When installing packages, be considerate. Make sure to only install the packages that you need. If you're not going to use them, don't install them. Remember that this might be a different list for your local development environment and your production environment. You can use multi-stage builds to split these up efficiently.
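As a sketch of what "only what you need" can look like for a Debian-based image, the shell command you place in a RUN instruction might pin the install to an explicit package list, skip recommended extras, and clean up the package lists afterwards. The packages named here are only examples:
apt-get update && apt-get install -y --no-install-recommends git ca-certificates \
    && rm -rf /var/lib/apt/lists/*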
RUN cache
The RUN command supports a specialized cache, which you can use when you need a more fine-grained cache between runs. For example, when installing packages, you don't always need to fetch all of your packages from the internet each time. You only need the ones that have changed.
To solve this problem, you can use RUN --mount type=cache. For example, for your Debian-based image you might use the following:
RUN \
--mount=type=cache,target=/var/cache/apt \
apt-get update && apt-get install -y git
Using the explicit cache with the --mount flag keeps the contents of the target directory preserved between builds. When this layer needs to be rebuilt, then it'll use the apt cache in /var/cache/apt.
Keeping your layers small is a good first step, and the logical next step is to reduce the number of layers that you have. Fewer layers mean that you have less to rebuild, when something in your Dockerfile changes, so your build will complete faster.
The following sections outline some tips you can use to keep the number of layers to a minimum.
Docker provides over 170 pre-built official images for almost every common development scenario. For example, if you're building a Java web server, use a dedicated image such as openjdk.
Even when there's not an official image for what you might want, Docker provides images from verified publishers and open source partners that can help you on your way. The Docker community often produces third-party images to use as well.
Using official images saves you time and ensures you stay up to date and secure by default.
Multi-stage builds let you split up your Dockerfile into multiple distinct stages. Each stage completes a step in the build process, and you can bridge the different stages to create your final image at the end. The Docker builder will work out dependencies between the stages and run them using the most efficient strategy. This even allows you to run multiple builds concurrently.
Multi-stage builds use two or more FROM commands. The following example illustrates building a simple web server that serves HTML from your docs directory in Git:
# syntax=docker/dockerfile:1
# stage 1
FROM alpine as git
RUN apk add git
# stage 2
FROM git as fetch
WORKDIR /repo
RUN git clone https://github.com/your/repository.git .
# stage 3
FROM nginx as site
COPY --from=fetch /repo/docs/ /usr/share/nginx/html
This build has 3 stages: git, fetch, and site. In this example, git is the base for the fetch stage. The site stage uses the COPY --from flag to copy the data from the docs/ directory of the fetch stage into the Nginx server directory.
Each stage has only a few instructions, and when possible, Docker will run these stages in parallel. Only the instructions in the site stage will end up as layers in the final image. The entire git history doesn't get embedded into the final result, which helps keep the image small and secure.
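You can also build just one of these stages on its own, which is handy for debugging an intermediate step. For example, the following builds only up to the fetch stage; the image tag here is a placeholder:
$ docker build --target fetch -t repo-contents .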
Most Dockerfile commands, and RUN commands in particular, can often be joined together. For example, instead of using RUN like this:
RUN echo "the first command"
RUN echo "the second command"
It's possible to run both of these commands inside a single RUN, which means that they will share the same cache! This is achievable using the && shell operator to run one command after another:
RUN echo "the first command" && echo "the second command"
# or to split to multiple lines
RUN echo "the first command" && \
echo "the second command"
Another shell feature that allows you to simplify and concatenate commands in a neat way is heredocs. They enable you to create multi-line scripts with good readability:
RUN <<EOF
set -e
echo "the first command"
echo "the second command"
EOF
(Note the set -e command to exit immediately after any command fails, instead of continuing.)
For more information on using cache to do efficient builds, see:
This page contains instructions on configuring your BuildKit instances when using our Setup Buildx Action.
To display BuildKit container logs when using the docker-container driver, you must either enable step debug logging, or set the --debug buildkitd flag in the Docker Setup Buildx action:
name: ci
on:
  push:
jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          buildkitd-flags: --debug
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
Logs will be available at the end of a job:
You can provide a BuildKit configuration to your builder if you're using the docker-container driver (default) with the config or config-inline inputs:
You can configure a registry mirror using an inline block directly in your workflow with the config-inline input:
name: ci
on:
  push:
jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          config-inline: |
            [registry."docker.io"]
              mirrors = ["mirror.gcr.io"]
For more information about using a registry mirror, see Registry mirror.
You can limit the parallelism of the BuildKit solver which is particularly useful for low-powered machines.
You can use the config-inline input like in the previous example, or you can use a dedicated BuildKit config file from your repository if you want with the config input:
# .github/buildkitd.toml
[worker.oci]
max-parallelism = 4
name: ci
on:
  push:
jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          config: .github/buildkitd.toml
Buildx supports running builds on multiple machines. This is useful for building multi-platform images on native nodes for more complicated cases that aren't handled by QEMU. Building on native nodes generally has better performance, and allows you to distribute the build across multiple machines.
You can append nodes to the builder you're creating using the append option. It takes input in the form of a YAML string document to remove limitations intrinsically linked to GitHub Actions: you can only use strings in the input fields:
Name | Type | Description |
---|---|---|
name | String | Name of the node. If empty, it's the name of the builder it belongs to, with an index number suffix. Setting it is useful if you want to modify or remove a node in a later step of your workflow. |
endpoint | String | Docker context or endpoint of the node to add to the builder. |
driver-opts | List | List of additional driver-specific options. |
buildkitd-flags | String | Flags for the buildkitd daemon. |
platforms | String | Fixed platforms for the node. If not empty, values take priority over the detected ones. |
Here is an example using remote nodes with the remote driver and TLS authentication:
name: ci
on:
  push:
jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver: remote
          endpoint: tcp://oneprovider:1234
          append: |
            - endpoint: tcp://graviton2:1234
              platforms: linux/arm64
            - endpoint: tcp://linuxone:1234
              platforms: linux/s390x
        env:
          BUILDER_NODE_0_AUTH_TLS_CACERT: ${{ secrets.ONEPROVIDER_CA }}
          BUILDER_NODE_0_AUTH_TLS_CERT: ${{ secrets.ONEPROVIDER_CERT }}
          BUILDER_NODE_0_AUTH_TLS_KEY: ${{ secrets.ONEPROVIDER_KEY }}
          BUILDER_NODE_1_AUTH_TLS_CACERT: ${{ secrets.GRAVITON2_CA }}
          BUILDER_NODE_1_AUTH_TLS_CERT: ${{ secrets.GRAVITON2_CERT }}
          BUILDER_NODE_1_AUTH_TLS_KEY: ${{ secrets.GRAVITON2_KEY }}
          BUILDER_NODE_2_AUTH_TLS_CACERT: ${{ secrets.LINUXONE_CA }}
          BUILDER_NODE_2_AUTH_TLS_CERT: ${{ secrets.LINUXONE_CERT }}
          BUILDER_NODE_2_AUTH_TLS_KEY: ${{ secrets.LINUXONE_KEY }}
The following examples show how to handle authentication for remote builders, using SSH or TLS.
To be able to connect to an SSH endpoint using the docker-container driver, you have to set up the SSH private key and configuration on the GitHub Runner:
name: ci
on:
  push:
jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up SSH
        uses: MrSquaare/ssh-setup-action@523473d91581ccbf89565e12b40faba93f2708bd # v1.1.0
        with:
          host: graviton2
          private-key: ${{ secrets.SSH_PRIVATE_KEY }}
          private-key-name: aws_graviton2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          endpoint: ssh://me@graviton2
You can also set up a remote BuildKit instance using the remote driver. To ease the integration in your workflow, you can use environment variables that set up authentication using the BuildKit client certificates for the tcp:// endpoint:
BUILDER_NODE_<idx>_AUTH_TLS_CACERT
BUILDER_NODE_<idx>_AUTH_TLS_CERT
BUILDER_NODE_<idx>_AUTH_TLS_KEY
The <idx> placeholder is the position of the node in the list of nodes.
name: ci
on:
  push:
jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver: remote
          endpoint: tcp://graviton2:1234
        env:
          BUILDER_NODE_0_AUTH_TLS_CACERT: ${{ secrets.GRAVITON2_CA }}
          BUILDER_NODE_0_AUTH_TLS_CERT: ${{ secrets.GRAVITON2_CERT }}
          BUILDER_NODE_0_AUTH_TLS_KEY: ${{ secrets.GRAVITON2_KEY }}
If you don't have the Docker CLI installed on the GitHub Runner, the Buildx binary gets invoked directly, instead of calling it as a Docker CLI plugin. This can be useful if you want to use the kubernetes driver in your self-hosted runner:
name: ci
on:
  push:
jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver: kubernetes
      -
        name: Build
        run: |
          buildx build .
The following example shows how you can select different builders for different jobs.
An example scenario where this might be useful is when you are using a monorepo, and you want to pinpoint different packages to specific builders. For example, some packages may be particularly resource-intensive to build and require more compute. Or they require a builder equipped with a particular capability or hardware.
For more information about remote builders, see the remote driver and the append builder nodes example.
name: ci
on:
  push:
    branches:
      - "main"
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        uses: docker/setup-buildx-action@v2
        id: builder1
      -
        uses: docker/setup-buildx-action@v2
        id: builder2
      -
        name: Builder 1 name
        run: echo ${{ steps.builder1.outputs.name }}
      -
        name: Builder 2 name
        run: echo ${{ steps.builder2.outputs.name }}
      -
        name: Build against builder1
        uses: docker/build-push-action@v3
        with:
          builder: ${{ steps.builder1.outputs.name }}
          context: .
          target: mytarget1
      -
        name: Build against builder2
        uses: docker/build-push-action@v3
        with:
          builder: ${{ steps.builder2.outputs.name }}
          context: .
          target: mytarget2