The `registry` cache storage can be thought of as an extension to the `inline` cache. Unlike the `inline` cache, the `registry` cache is entirely separate from the image, which allows for more flexible usage - a `registry`-backed cache can do everything that the inline cache can do, and more:

- It can efficiently cache multi-stage builds in `max` mode, instead of only the final stage.
- It works with other exporters for more flexibility, instead of only the `image` exporter.
Note

This cache storage backend requires using a different driver than the default `docker` driver - see more information on selecting a driver here. To create a new driver (which can act as a simple drop-in replacement):

```console
$ docker buildx create --use --driver=docker-container
```
Unlike the simpler `inline` cache, the `registry` cache supports several configuration parameters:
```console
$ docker buildx build --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
  --cache-from type=registry,ref=<registry>/<cache-image> .
```
The following table describes the available CSV parameters that you can pass to `--cache-to` and `--cache-from`.
Name | Option | Type | Default | Description |
---|---|---|---|---|
`ref` | `cache-to`, `cache-from` | String | | Full name of the cache image to import. |
`dest` | `cache-to` | String | | Path of the local directory where cache gets exported to. |
`mode` | `cache-to` | `min`, `max` | `min` | Cache layers to export, see cache mode. |
`oci-mediatypes` | `cache-to` | `true`, `false` | `true` | Use OCI media types in exported manifests, see OCI media types. |
`compression` | `cache-to` | `gzip`, `estargz`, `zstd` | `gzip` | Compression type, see cache compression. |
`compression-level` | `cache-to` | `0..22` | | Compression level, see cache compression. |
`force-compression` | `cache-to` | `true`, `false` | `false` | Forcibly apply compression, see cache compression. |
You can choose any valid value for `ref`, as long as it's not the same as the target location that you push your image to. You might choose different tags (e.g. `foo/bar:latest` and `foo/bar:build-cache`), separate image names (e.g. `foo/bar` and `foo/bar-cache`), or even different repositories (e.g. `docker.io/foo/bar` and `ghcr.io/foo/bar`). It's up to you to decide the strategy that you want to use for separating your image from your cache images.
If the `--cache-from` target doesn't exist, then the cache import step will fail, but the build will continue.
For an introduction to caching see Optimizing builds with cache. For more information on the `registry` cache backend, see the BuildKit README.
Warning

This cache backend is unreleased. You can use it today, by using the `moby/buildkit:master` image in your Buildx driver.
The `s3` cache storage uploads your resulting build cache to the Amazon S3 file storage service, into a specified bucket.
Note

This cache storage backend requires using a different driver than the default `docker` driver - see more information on selecting a driver here. To create a new driver (which can act as a simple drop-in replacement):

```console
$ docker buildx create --use --driver=docker-container
```
```console
$ docker buildx build --push -t <user>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>[,parameters...] \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image> .
```
The following table describes the available CSV parameters that you can pass to `--cache-to` and `--cache-from`.
Name | Option | Type | Default | Description |
---|---|---|---|---|
`region` | `cache-to`, `cache-from` | String | | Geographic location. |
`bucket` | `cache-to`, `cache-from` | String | | Name of the S3 bucket used for caching. |
`name` | `cache-to`, `cache-from` | String | | Name of the cache image. |
`access_key_id` | `cache-to`, `cache-from` | String | | See authentication. |
`secret_access_key` | `cache-to`, `cache-from` | String | | See authentication. |
`session_token` | `cache-to`, `cache-from` | String | | See authentication. |
`mode` | `cache-to` | `min`, `max` | `min` | Cache layers to export, see cache mode. |
`access_key_id`, `secret_access_key`, and `session_token`, if left unspecified, are read from environment variables on the BuildKit server following the scheme for the AWS Go SDK. The environment variables are read from the server, not the Buildx client.
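If you prefer not to rely on the server's environment, you can pass the credentials explicitly as CSV parameters instead (a sketch; the region, bucket, and cache name are illustrative, and the shell variables are assumed to hold valid AWS credentials):

```console
$ docker buildx build --push -t <user>/<image> \
  --cache-to "type=s3,region=us-east-1,bucket=my-cache-bucket,name=myapp,mode=max,access_key_id=$AWS_ACCESS_KEY_ID,secret_access_key=$AWS_SECRET_ACCESS_KEY" \
  --cache-from "type=s3,region=us-east-1,bucket=my-cache-bucket,name=myapp,access_key_id=$AWS_ACCESS_KEY_ID,secret_access_key=$AWS_SECRET_ACCESS_KEY" .
```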
For an introduction to caching see Optimizing builds with cache. For more information on the `s3` cache backend, see the BuildKit README.
While the `docker builder prune` and `docker buildx prune` commands run once, on demand, garbage collection runs periodically and follows an ordered list of prune policies.
Garbage collection runs in the BuildKit daemon. The daemon clears the build cache when the cache size becomes too big, or when the cache age expires. The following sections describe how you can configure both the size and age parameters by defining garbage collection policies.
Depending on the driver used by your builder instance, the garbage collection will use a different configuration file.
If you're using the `docker` driver, garbage collection can be configured in the Docker Daemon configuration file:
```json
{
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "10GB",
      "policy": [
        {"keepStorage": "10GB", "filter": ["unused-for=2200h"]},
        {"keepStorage": "50GB", "filter": ["unused-for=3300h"]},
        {"keepStorage": "100GB", "all": true}
      ]
    }
  }
}
```
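After changing the daemon configuration, restart Docker for the new policy to take effect, then check the cache state (a sketch, assuming a systemd-based Linux host):

```console
$ sudo systemctl restart docker
$ docker buildx du    # show current build cache usage
```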
For other drivers, garbage collection can be configured using the BuildKit configuration file:
```toml
[worker.oci]
  gc = true
  gckeepstorage = 10000

  [[worker.oci.gcpolicy]]
    keepBytes = 512000000
    keepDuration = 172800
    filters = [ "type==source.local", "type==exec.cachemount", "type==source.git.checkout"]

  [[worker.oci.gcpolicy]]
    all = true
    keepBytes = 1024000000
```
Default garbage collection policies are applied to all builders if not already set:
```text
GC Policy rule#0:
        All:            false
        Filters:        type==source.local,type==exec.cachemount,type==source.git.checkout
        Keep Duration:  48h0m0s
        Keep Bytes:     512MB
GC Policy rule#1:
        All:            false
        Keep Duration:  1440h0m0s
        Keep Bytes:     26GB
GC Policy rule#2:
        All:            false
        Keep Bytes:     26GB
GC Policy rule#3:
        All:            true
        Keep Bytes:     26GB
```
- `rule#0`: if build cache uses more than 512MB, delete the most easily reproducible data after it has not been used for 2 days.
- `rule#1`: remove any data not used for 60 days.
- `rule#2`: keep the unshared build cache under cap.
- `rule#3`: if previous policies were insufficient, start deleting internal data to keep build cache under cap.
Note

"Keep bytes" defaults to 10% of the size of the disk. If the disk size cannot be determined, it defaults to 2GB.
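You can also trigger this kind of cleanup on demand. For example, the following (the threshold is illustrative) prunes cache entries until usage drops below 10GB:

```console
$ docker buildx prune --keep-storage 10GB
```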
You will likely find yourself rebuilding the same Docker image over and over again. Whether it's for the next release of your software, or locally during development. Because building images is a common task, Docker provides several tools that speed up builds.
The most important feature for improving build speeds is Dockerâs build cache.
Understanding Dockerâs build cache helps you write better Dockerfiles that result in faster builds.
Have a look at the following example, which shows a simple Dockerfile for a program written in C.
```dockerfile
# syntax=docker/dockerfile:1
FROM ubuntu:latest
RUN apt-get update && apt-get install -y build-essential
COPY main.c Makefile /src/
WORKDIR /src/
RUN make build
```
Each instruction in this Dockerfile translates (roughly) to a layer in your final image. You can think of image layers as a stack, with each layer adding more content on top of the layers that came before it.
Whenever a layer changes, that layer will need to be re-built. For example, suppose you make a change to your program in the `main.c` file. After this change, the `COPY` command will have to run again in order for those changes to appear in the image. In other words, Docker will invalidate the cache for this layer.
If a layer changes, all other layers that come after it are also affected. When the layer with the `COPY` command gets invalidated, all layers that follow will need to run again, too.
And that's the Docker build cache in a nutshell. Once a layer changes, then all downstream layers need to be rebuilt as well. Even if they wouldn't build anything differently, they still need to re-run.
Note

Suppose you have a `RUN apt-get update && apt-get upgrade -y` step in your Dockerfile to upgrade all the software packages in your Debian-based image to the latest version. This doesn't mean that the images you build are always up to date. Rebuilding the image on the same host one week later will still get you the same packages as before. The only way to force a rebuild is by making sure that a layer before it has changed, or by clearing the build cache using `docker builder prune`.
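For instance, to force a completely fresh build you could clear the builder cache and bypass layer caching entirely (a sketch; the image name is illustrative):

```console
$ docker builder prune --force
$ docker build --no-cache -t myimage .
```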
Now that you understand how the cache works, you can begin to use the cache to your advantage. While the cache will automatically work on any `docker build` that you run, you can often refactor your Dockerfile to get even better performance. These optimizations can save precious seconds (or even minutes) off of your builds.
Putting the commands in your Dockerfile into a logical order is a great place to start. Because a change causes a rebuild for steps that follow, try to make expensive steps appear near the beginning of the Dockerfile. Steps that change often should appear near the end of the Dockerfile, to avoid triggering rebuilds of layers that haven't changed.
Consider the following example: a Dockerfile snippet that runs a JavaScript build from the source files in the current directory:
```dockerfile
# syntax=docker/dockerfile:1
FROM node
WORKDIR /app
COPY . .          # Copy over all files in the current directory
RUN npm install   # Install dependencies
RUN npm run build # Run build
```
This Dockerfile is rather inefficient. Updating any file causes a reinstall of all dependencies every time you build the Docker image, even if the dependencies didn't change since last time!
Instead, the `COPY` command can be split in two. First, copy over the package management files (in this case, `package.json` and `yarn.lock`). Then, install the dependencies. Finally, copy over the project source code, which is subject to frequent change.
```dockerfile
# syntax=docker/dockerfile:1
FROM node
WORKDIR /app
COPY package.json yarn.lock . # Copy package management files
RUN npm install               # Install dependencies
COPY . .                      # Copy over project files
RUN npm run build             # Run build
```
By installing dependencies in earlier layers of the Dockerfile, there is no need to rebuild those layers when a project file has changed.
One of the best things you can do to speed up image building is to just put less stuff into your build. Fewer parts means the cache stays smaller, but also that there should be fewer things that could be out-of-date and need rebuilding.
To get started, here are a few tips and tricks:
Be considerate of what files you add to the image.
Running a command like `COPY . /src` will `COPY` your entire build context into the image. If you've got logs, package manager artifacts, or even previous build results in your current directory, those will also be copied over. This could make your image larger than it needs to be, especially as those files are usually not useful.
Avoid adding unnecessary files to your builds by explicitly stating the files or directories you intend to copy over. For example, you might only want to add a `Makefile` and your `src` directory to the image filesystem. In that case, consider adding this to your Dockerfile:
```dockerfile
COPY ./src ./Makefile /src
```
As opposed to this:
```dockerfile
COPY . /src
```
You can also create a `.dockerignore` file, and use that to specify which files and directories to exclude from the build context.
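As a quick illustration (the entries are examples, not from this page), a `.dockerignore` that keeps dependency folders, logs, and Git metadata out of the build context might look like this:

```console
$ cat .dockerignore
node_modules
*.log
.git
```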
Most Docker image builds involve using a package manager to help install software into the image. Debian has `apt`, Alpine has `apk`, Python has `pip`, NodeJS has `npm`, and so on.
When installing packages, be considerate. Make sure to only install the packages that you need. If you're not going to use them, don't install them. Remember that this might be a different list for your local development environment and your production environment. You can use multi-stage builds to split these up efficiently.
`RUN` cache
The `RUN` command supports a specialized cache, which you can use when you need a more fine-grained cache between runs. For example, when installing packages, you don't always need to fetch all of your packages from the internet each time. You only need the ones that have changed.
To solve this problem, you can use `RUN --mount=type=cache`. For example, for your Debian-based image you might use the following:
```dockerfile
RUN \
    --mount=type=cache,target=/var/cache/apt \
    apt-get update && apt-get install -y git
```
Using the explicit cache with the `--mount` flag keeps the contents of the `target` directory preserved between builds. When this layer needs to be rebuilt, then it'll use the `apt` cache in `/var/cache/apt`.
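Note that cache mounts are independent of the layer cache: even a build that bypasses the layer cache can still reuse the mounted `apt` cache (a sketch; the tag is illustrative):

```console
$ docker buildx build -t myapp .             # first build populates /var/cache/apt
$ docker buildx build --no-cache -t myapp .  # layers rebuild, but the cache mount persists
```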
Keeping your layers small is a good first step, and the logical next step is to reduce the number of layers that you have. Fewer layers mean that you have less to rebuild when something in your Dockerfile changes, so your build will complete faster.
The following sections outline some tips you can use to keep the number of layers to a minimum.
Docker provides over 170 pre-built official images for almost every common development scenario. For example, if you're building a Java web server, use a dedicated image such as `openjdk`.
Even when there's not an official image for what you might want, Docker provides images from verified publishers and open source partners that can help you on your way. The Docker community often produces third-party images to use as well.
Using official images saves you time and ensures you stay up to date and secure by default.
Multi-stage builds let you split up your Dockerfile into multiple distinct stages. Each stage completes a step in the build process, and you can bridge the different stages to create your final image at the end. The Docker builder will work out dependencies between the stages and run them using the most efficient strategy. This even allows you to run multiple builds concurrently.
Multi-stage builds use two or more `FROM` commands. The following example illustrates building a simple web server that serves HTML from your `docs` directory in Git:
```dockerfile
# syntax=docker/dockerfile:1

# stage 1
FROM alpine as git
RUN apk add git

# stage 2
FROM git as fetch
WORKDIR /repo
RUN git clone https://github.com/your/repository.git .

# stage 3
FROM nginx as site
COPY --from=fetch /repo/docs/ /usr/share/nginx/html
```
This build has 3 stages: `git`, `fetch` and `site`. In this example, `git` is the base for the `fetch` stage. It uses the `COPY --from` flag to copy the data from the `docs/` directory into the Nginx server directory.
Each stage has only a few instructions, and when possible, Docker will run these stages in parallel. Only the instructions in the `site` stage will end up as layers in the final image. The entire `git` history doesn't get embedded into the final result, which helps keep the image small and secure.
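You can also build just one stage from this Dockerfile with the `--target` flag, which is handy for debugging an intermediate stage (the tag is illustrative):

```console
$ docker build --target fetch -t repo-fetch .
```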
Most Dockerfile commands, and `RUN` commands in particular, can often be joined together. For example, instead of using `RUN` like this:
```dockerfile
RUN echo "the first command"
RUN echo "the second command"
```
It's possible to run both of these commands inside a single `RUN`, which means that they will share the same cache! This is achievable using the `&&` shell operator to run one command after another:
```dockerfile
RUN echo "the first command" && echo "the second command"

# or to split to multiple lines
RUN echo "the first command" && \
    echo "the second command"
```
Another shell feature that allows you to simplify and concatenate commands in a neat way is heredocs. They enable you to create multi-line scripts with good readability:
```dockerfile
RUN <<EOF
set -e
echo "the first command"
echo "the second command"
EOF
```
(Note the `set -e` command to exit immediately after any command fails, instead of continuing.)
For more information on using cache to do efficient builds, see:
This page contains instructions on configuring your BuildKit instances when using our Setup Buildx Action.
To display BuildKit container logs when using the `docker-container` driver, you must either enable step debug logging, or set the `--debug` buildkitd flag in the Docker Setup Buildx action:
```yaml
name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          buildkitd-flags: --debug
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
```
Logs will be available at the end of a job.
You can provide a BuildKit configuration to your builder if you're using the `docker-container` driver (default) with the `config` or `config-inline` inputs.
You can configure a registry mirror using an inline block directly in your workflow with the `config-inline` input:
```yaml
name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          config-inline: |
            [registry."docker.io"]
              mirrors = ["mirror.gcr.io"]
```
For more information about using a registry mirror, see Registry mirror.
You can limit the parallelism of the BuildKit solver, which is particularly useful for low-powered machines.
You can use the `config-inline` input like the previous example, or, if you prefer, a dedicated BuildKit config file from your repository with the `config` input:
```toml
# .github/buildkitd.toml
[worker.oci]
  max-parallelism = 4
```
```yaml
name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          config: .github/buildkitd.toml
```
Buildx supports running builds on multiple machines. This is useful for building multi-platform images on native nodes for more complicated cases that aren't handled by QEMU. Building on native nodes generally has better performance, and allows you to distribute the build across multiple machines.
You can append nodes to the builder you're creating using the `append` option. It takes input in the form of a YAML string document, to work around a limitation intrinsic to GitHub Actions: you can only use strings in the input fields:
Name | Type | Description |
---|---|---|
`name` | String | Name of the node. If empty, it's the name of the builder it belongs to, with an index number suffix. This is useful to set if you want to modify/remove a node in an underlying step of your workflow. |
`endpoint` | String | Docker context or endpoint of the node to add to the builder. |
`driver-opts` | List | List of additional driver-specific options. |
`buildkitd-flags` | String | Flags for the buildkitd daemon. |
`platforms` | String | Fixed platforms for the node. If not empty, values take priority over the detected ones. |
Here is an example using remote nodes with the `remote` driver and TLS authentication:
```yaml
name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver: remote
          endpoint: tcp://oneprovider:1234
          append: |
            - endpoint: tcp://graviton2:1234
              platforms: linux/arm64
            - endpoint: tcp://linuxone:1234
              platforms: linux/s390x
        env:
          BUILDER_NODE_0_AUTH_TLS_CACERT: ${{ secrets.ONEPROVIDER_CA }}
          BUILDER_NODE_0_AUTH_TLS_CERT: ${{ secrets.ONEPROVIDER_CERT }}
          BUILDER_NODE_0_AUTH_TLS_KEY: ${{ secrets.ONEPROVIDER_KEY }}
          BUILDER_NODE_1_AUTH_TLS_CACERT: ${{ secrets.GRAVITON2_CA }}
          BUILDER_NODE_1_AUTH_TLS_CERT: ${{ secrets.GRAVITON2_CERT }}
          BUILDER_NODE_1_AUTH_TLS_KEY: ${{ secrets.GRAVITON2_KEY }}
          BUILDER_NODE_2_AUTH_TLS_CACERT: ${{ secrets.LINUXONE_CA }}
          BUILDER_NODE_2_AUTH_TLS_CERT: ${{ secrets.LINUXONE_CERT }}
          BUILDER_NODE_2_AUTH_TLS_KEY: ${{ secrets.LINUXONE_KEY }}
```
The following examples show how to handle authentication for remote builders, using SSH or TLS.
To be able to connect to an SSH endpoint using the `docker-container` driver, you have to set up the SSH private key and configuration on the GitHub Runner:
```yaml
name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up SSH
        uses: MrSquaare/ssh-setup-action@523473d91581ccbf89565e12b40faba93f2708bd # v1.1.0
        with:
          host: graviton2
          private-key: ${{ secrets.SSH_PRIVATE_KEY }}
          private-key-name: aws_graviton2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          endpoint: ssh://me@graviton2
```
You can also set up a remote BuildKit instance using the remote driver. To ease the integration in your workflow, you can use environment variables that set up authentication using the BuildKit client certificates for the `tcp://` endpoint:

- `BUILDER_NODE_<idx>_AUTH_TLS_CACERT`
- `BUILDER_NODE_<idx>_AUTH_TLS_CERT`
- `BUILDER_NODE_<idx>_AUTH_TLS_KEY`
The `<idx>` placeholder is the position of the node in the list of nodes.
```yaml
name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver: remote
          endpoint: tcp://graviton2:1234
        env:
          BUILDER_NODE_0_AUTH_TLS_CACERT: ${{ secrets.GRAVITON2_CA }}
          BUILDER_NODE_0_AUTH_TLS_CERT: ${{ secrets.GRAVITON2_CERT }}
          BUILDER_NODE_0_AUTH_TLS_KEY: ${{ secrets.GRAVITON2_KEY }}
```
If you don't have the Docker CLI installed on the GitHub Runner, the Buildx binary gets invoked directly, instead of calling it as a Docker CLI plugin. This can be useful if you want to use the `kubernetes` driver in your self-hosted runner:
```yaml
name: ci

on:
  push:

jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver: kubernetes
      -
        name: Build
        run: |
          buildx build .
```
The following example shows how you can select different builders for different jobs.
An example scenario where this might be useful is when you are using a monorepo, and you want to pinpoint different packages to specific builders. For example, some packages may be particularly resource-intensive to build and require more compute. Or they require a builder equipped with a particular capability or hardware.
For more information about remote builders, see the `remote` driver and the append builder nodes example.
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        uses: docker/setup-buildx-action@v2
        id: builder1
      -
        uses: docker/setup-buildx-action@v2
        id: builder2
      -
        name: Builder 1 name
        run: echo ${{ steps.builder1.outputs.name }}
      -
        name: Builder 2 name
        run: echo ${{ steps.builder2.outputs.name }}
      -
        name: Build against builder1
        uses: docker/build-push-action@v3
        with:
          builder: ${{ steps.builder1.outputs.name }}
          context: .
          target: mytarget1
      -
        name: Build against builder2
        uses: docker/build-push-action@v3
        with:
          builder: ${{ steps.builder2.outputs.name }}
          context: .
          target: mytarget2
```
This page showcases different examples of how you can customize and use the Docker GitHub Actions in your CI pipelines.
The following workflow will connect you to Docker Hub and GitHub Container Registry and push the image to both registries:
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            user/app:latest
            user/app:1.0.0
            ghcr.io/user/app:latest
            ghcr.io/user/app:1.0.0
```
If you want "automatic" tag management and OCI Image Format Specification labels, you can do it in a dedicated setup step. The following workflow will use the Docker Metadata Action to handle tags and labels based on GitHub Actions events and Git metadata:
```yaml
name: ci

on:
  schedule:
    - cron: "0 10 * * *"
  push:
    branches:
      - "**"
    tags:
      - "v*.*.*"
  pull_request:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Docker meta
        id: meta
        uses: docker/metadata-action@v4
        with:
          # list of Docker images to use as base name for tags
          images: |
            name/app
            ghcr.io/username/app
          # generate Docker tags based on the following events/attributes
          tags: |
            type=schedule
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=sha
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Login to GHCR
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
```
You can build multi-platform images using the `platforms` option, as described in the following example.
Note
- For a list of available platforms, see the Docker Setup Buildx action.
- If you want support for more platforms, you can use QEMU with the Docker Setup QEMU action.
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: user/app:latest
```
This page contains examples on using the cache storage backends with GitHub actions.
Note
See Cache storage backends for more details about cache storage backends.
In most cases you want to use the inline cache exporter. However, note that the `inline` cache exporter only supports `min` cache mode. To use `max` cache mode, push the image and the cache separately using the registry cache exporter with the `cache-to` option, as shown in the registry cache example.
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=registry,ref=user/app:latest
          cache-to: type=inline
```
You can import/export cache from a cache manifest or (special) image configuration on the registry with the registry cache exporter.
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=registry,ref=user/app:buildcache
          cache-to: type=registry,ref=user/app:buildcache,mode=max
```
Warning

This cache exporter is experimental. Please provide feedback on the BuildKit repository if you experience any issues.
The GitHub Actions cache exporter backend uses the GitHub Cache API to fetch and upload cache blobs. That's why you should only use this cache backend in a GitHub Action workflow, as the `url` (`$ACTIONS_CACHE_URL`) and `token` (`$ACTIONS_RUNTIME_TOKEN`) attributes only get populated in a workflow context.
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
Warning

At the moment, old cache entries aren't deleted, so the cache size keeps growing. The following example uses the `Move cache` step as a workaround (see moby/buildkit#1896 for more info).
You can also leverage GitHub cache using the actions/cache and local cache exporter with this action:
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Cache Docker layers
        uses: actions/cache@v3
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
      -
        # Temp fix
        # https://github.com/docker/build-push-action/issues/252
        # https://github.com/moby/buildkit/issues/1896
        name: Move cache
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
```
The following example uses and exposes the `GITHUB_TOKEN` secret as provided by GitHub in your workflow.
First, create a `Dockerfile` that uses the secret:
```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN --mount=type=secret,id=github_token \
    cat /run/secrets/github_token
```
In this example, the secret name is `github_token`. The following workflow exposes this secret using the `secrets` input:
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          tags: user/app:latest
          secrets: |
            "github_token=${{ secrets.GITHUB_TOKEN }}"
```
Note

You can also expose a secret file to the build with the `secret-files` input:

```yaml
secret-files: |
  "MY_SECRET=./secret.txt"
```
If you're using GitHub secrets and need to handle a multi-line value, you will need to place the key-value pair between quotes:
```yaml
secrets: |
  "MYSECRET=${{ secrets.GPG_KEY }}"
  GIT_AUTH_TOKEN=abcdefghi,jklmno=0123456789
  "MYSECRET=aaaaaaaa
  bbbbbbb
  ccccccccc"
  FOO=bar
  "EMPTYLINE=aaaa

  bbbb
  ccc"
  "JSON_SECRET={""key1"":""value1"",""key2"":""value2""}"
```
Key | Value |
---|---|
`MYSECRET` | `***********************` |
`GIT_AUTH_TOKEN` | `abcdefghi,jklmno=0123456789` |
`MYSECRET` | `aaaaaaaa\nbbbbbbb\nccccccccc` |
`FOO` | `bar` |
`EMPTYLINE` | `aaaa\n\nbbbb\nccc` |
`JSON_SECRET` | `{"key1":"value1","key2":"value2"}` |
Note
Double escapes are needed for quote signs.
You may want your build result to be available in the Docker client through `docker images` to be able to use it in another step of your workflow:
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          load: true
          tags: myimage:latest
      -
        name: Inspect
        run: |
          docker image inspect myimage:latest
```
In some cases, you might want to validate that the image works as expected before pushing it.
The following workflow implements several steps to achieve this:
```yaml
name: ci

on:
  push:
    branches:
      - "main"

env:
  TEST_TAG: user/app:test
  LATEST_TAG: user/app:latest

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and export to Docker
        uses: docker/build-push-action@v3
        with:
          context: .
          load: true
          tags: ${{ env.TEST_TAG }}
      -
        name: Test
        run: |
          docker run --rm ${{ env.TEST_TAG }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ env.LATEST_TAG }}
```
Note

This workflow doesn't actually build the `linux/amd64` image twice. The image is built once, and the following steps use the internal cache from the first `Build and push` step. The second `Build and push` step only builds `linux/arm64`.
For testing purposes you may need to create a local registry to push images into:
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    services:
      registry:
        image: registry:2
        ports:
          - 5000:5000
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          driver-opts: network=host
      -
        name: Build and push to local registry
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: localhost:5000/name/app:latest
      -
        name: Inspect
        run: |
          docker buildx imagetools inspect localhost:5000/name/app:latest
```
As each job is isolated in its own runner, you can't use your built image between jobs, except if you're using self-hosted runners. However, you can pass data between jobs in a workflow using the actions/upload-artifact and actions/download-artifact actions:
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build and export
        uses: docker/build-push-action@v3
        with:
          context: .
          tags: myimage:latest
          outputs: type=docker,dest=/tmp/myimage.tar
      -
        name: Upload artifact
        uses: actions/upload-artifact@v3
        with:
          name: myimage
          path: /tmp/myimage.tar

  use:
    runs-on: ubuntu-latest
    needs: build
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Download artifact
        uses: actions/download-artifact@v3
        with:
          name: myimage
          path: /tmp
      -
        name: Load image
        run: |
          docker load --input /tmp/myimage.tar
          docker image ls -a
```
You can define additional build contexts, and access them in your Dockerfile with `FROM name` or `--from=name`. When the Dockerfile defines a stage with the same name, it's overwritten. This can be useful with GitHub Actions to reuse results from other builds or pin an image to a specific tag in your workflow.
Replace `alpine:latest` with a pinned one:
```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN echo "Hello World"
```
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          build-contexts: |
            alpine=docker-image://alpine:3.16
          tags: myimage:latest
```
By default, the Docker Setup Buildx action uses `docker-container` as a build driver, so built Docker images aren't loaded automatically. With named contexts you can reuse the built image:
```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN echo "Hello World"
```
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build base image
        uses: docker/build-push-action@v3
        with:
          context: base
          load: true
          tags: my-base-image:latest
      -
        name: Build
        uses: docker/build-push-action@v3
        with:
          context: .
          build-contexts: |
            alpine=docker-image://my-base-image:latest
          tags: myimage:latest
```
Multi-platform images built using Buildx can be copied from one registry to another using the `buildx imagetools create` command:
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            user/app:latest
            user/app:1.0.0
      -
        name: Push image to GHCR
        run: |
          docker buildx imagetools create \
            --tag ghcr.io/user/app:latest \
            --tag ghcr.io/user/app:1.0.0 \
            user/app:latest
```
You can update the Docker Hub repository description using a third-party action called Docker Hub Description:
```yaml
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: user/app:latest
      -
        name: Update repo description
        uses: peter-evans/dockerhub-description@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
          repository: user/app
```