GitHub Actions is a popular CI/CD platform for automating your build, test, and deployment pipeline. Docker provides a set of official GitHub Actions for you to use in your workflows. These official actions are reusable, easy-to-use components for building, annotating, and pushing images.
The following GitHub Actions are available:
Using Docker's actions provides an easy-to-use interface, while still allowing flexibility for customizing build parameters.
This tutorial walks you through the process of setting up and using Docker GitHub Actions for building Docker images, and pushing images to Docker Hub. You will complete the following steps: create a new GitHub repository and configure Docker Hub secrets, set up the GitHub Actions workflow, and run the workflow to build and push the image.
To follow this tutorial, you need a Docker ID and a GitHub account.
Create a GitHub repository and configure the Docker Hub secrets.
Create a new GitHub repository using this template repository.
The repository contains a simple Dockerfile, and nothing else. Feel free to use another repository containing a working Dockerfile if you prefer.
Open the repository Settings, and go to Secrets > Actions.
Create a new secret named DOCKER_HUB_USERNAME with your Docker ID as the value.
Create a new Personal Access Token (PAT) for Docker Hub. You can name this token clockboxci.
Add the PAT as a second secret in your GitHub repository, with the name DOCKER_HUB_ACCESS_TOKEN.
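If you prefer the command line, the GitHub CLI can set the same secrets. This is an optional sketch, assuming the gh tool is installed and authenticated against your repository; the values shown are placeholders:

# Optional: set the repository secrets with the GitHub CLI (placeholder values)
$ gh secret set DOCKER_HUB_USERNAME --body "your-docker-id"
$ gh secret set DOCKER_HUB_ACCESS_TOKEN --body "your-personal-access-token"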
With your repository created, and secrets configured, you're now ready for action!
Set up your GitHub Actions workflow for building and pushing the image to Docker Hub.
Select set up a workflow yourself. This takes you to a page for creating a new GitHub Actions workflow file in your repository, under .github/workflows/main.yml by default.
In the editor window, copy and paste the following YAML configuration.
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  build:
    runs-on: ubuntu-latest
- name: the name of this workflow.
- on.push.branches: specifies that this workflow should run on every push event for the branches in the list.
- jobs: creates a job ID (build) and declares the type of machine that the job should run on.
For more information about the YAML syntax used here, see Workflow syntax for GitHub Actions.
Now the essentials: what steps to run, and in what order to run them.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ secrets.DOCKER_HUB_USERNAME }}/clockbox:latest
The previous YAML snippet contains a sequence of steps that:
- Checks out the repository on the build machine.
- Signs in to Docker Hub, using your secrets with the Docker Login action.
- Creates a BuildKit builder instance, using the Docker Setup Buildx action.
- Builds the container image and pushes it to the Docker Hub repository, using Build and push Docker images.
The with key lists a number of input parameters that configure the step:
- context: the build context.
- file: filepath to the Dockerfile.
- push: tells the action to upload the image to a registry after building it.
- tags: tags that specify where to push the image.
Add these steps to your workflow file. The full workflow configuration should look as follows:
name: ci

on:
  push:
    branches:
      - "main"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ secrets.DOCKER_HUB_USERNAME }}/clockbox:latest
Save the workflow file and run the job.
Select Start commit and push the changes to the main branch.
After pushing the commit, the workflow starts automatically.
Go to the Actions tab. It displays the workflow.
Selecting the workflow shows you the breakdown of all the steps.
When the workflow is complete, go to your repositories on Docker Hub.
If you see the new repository in that list, it means the GitHub Actions workflow successfully pushed the image to Docker Hub!
This tutorial has shown you how to create a simple GitHub Actions workflow, using the official Docker actions, to build and push an image to Docker Hub.
There are many more things you can do to customize your workflow to better suit your needs. To learn more about some of the more advanced use cases, take a look at the advanced examples, such as building multi-platform images, using cache storage backends, or configuring your builder.
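As a hedged sketch of where a couple of those options plug in, the build-and-push step accepts additional inputs such as platforms and cache settings; the platform list and the GitHub Actions cache backend shown here are illustrative rather than required:

-
  name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: .
    push: true
    # Illustrative: build for multiple platforms (typically also requires QEMU setup)
    platforms: linux/amd64,linux/arm64
    # Illustrative: read and write build cache using the GitHub Actions cache backend
    cache-from: type=gha
    cache-to: type=gha,mode=max
    tags: ${{ secrets.DOCKER_HUB_USERNAME }}/clockbox:latest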
Continuous Integration (CI) is the part of the development process where you're looking to get your code changes merged with the main branch of the project. At this point, development teams run tests and builds to vet that the code changes don't cause any unwanted or unexpected behaviors.
There are several uses for Docker at this stage of development, even if you don't end up packaging your application as a container image.
Containers are reproducible, isolated environments that yield predictable results. Building and testing your application in a Docker container makes it easier to prevent unexpected behaviors from occurring. Using a Dockerfile, you define the exact requirements for the build environment, including programming runtimes, operating system, binaries, and more.
Using Docker to manage your build environment also eases maintenance. For example, updating to a new version of a programming runtime can be as simple as changing a tag or digest in a Dockerfile. No need to SSH into a pet VM to manually reinstall a newer version and update the related configuration files.
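For instance, a hypothetical Dockerfile for a Node.js build environment might pin the runtime in the base image tag; bumping that tag is all it takes to move to a newer version:

# Hypothetical build environment: the runtime version is pinned in the base image tag
FROM node:18-bullseye
WORKDIR /app
COPY . .
# Install dependencies and run the test suite inside the container
RUN npm ci && npm test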
Additionally, just as you expect third-party open source packages to be secure, the same should go for your build environment. You can scan and index a builder image, just like you would for any other containerized application.
The following links provide instructions for how you can get started using Docker for building your applications in CI:
You can also use a Dockerized build environment to build container images using Docker. That is, your build environment runs inside a container which itself is equipped to run Docker builds. This method is referred to as "Docker in Docker".
Docker provides an official Docker image that you can use for this purpose.
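As a minimal sketch of trying that image locally, you could start a Docker-in-Docker daemon and then run Docker commands against it; the container name here is arbitrary:

# Start a Docker-in-Docker daemon; the dind image requires privileged mode
$ docker run -d --name dind --privileged docker:dind

# Run a Docker CLI command against the daemon inside that container
$ docker exec dind docker version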
Docker maintains a set of official GitHub Actions that you can use to build, annotate, and push container images on the GitHub Actions platform. See Introduction to GitHub Actions to learn more and get started.
The buildx Docker container driver allows creation of a managed and customizable BuildKit environment in a dedicated Docker container.
Using the Docker container driver has a couple of advantages over the default Docker driver. For example:
Run the following command to create a new builder, named container, that uses the Docker container driver:
$ docker buildx create \
--name container \
--driver=docker-container \
--driver-opt=[key=value,...]
container
The following table describes the available driver-specific options that you can pass to --driver-opt:
| Parameter | Type | Default | Description |
|---|---|---|---|
| image | String | | Sets the image to use for running BuildKit. |
| network | String | | Sets the network mode for running the BuildKit container. |
| cgroup-parent | String | /docker/buildx | Sets the cgroup parent of the BuildKit container if Docker is using the cgroupfs driver. |
| env.<key> | String | | Sets the environment variable key to the specified value in the BuildKit container. |
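For example, a builder that needs to reach the network through a proxy could set an environment variable in the BuildKit container; the proxy address below is a placeholder:

$ docker buildx create \
  --name container \
  --driver=docker-container \
  --driver-opt=env.http_proxy=http://proxy.example.com:3128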
When you run a build, Buildx pulls the specified image (by default, moby/buildkit). When the container has started, Buildx submits the build to the containerized build server.
$ docker buildx build -t <image> --builder=container .
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 1.9s done
#1 creating container buildx_buildkit_container0
#1 creating container buildx_buildkit_container0 0.5s done
#1 DONE 2.4s
...
Unlike when using the default docker driver, images built with the docker-container driver must be explicitly loaded into the local image store. Use the --load flag:
$ docker buildx build --load -t <image> --builder=container .
...
=> exporting to oci image format 7.7s
=> => exporting layers 4.9s
=> => exporting manifest sha256:4e4ca161fa338be2c303445411900ebbc5fc086153a0b846ac12996960b479d3 0.0s
=> => exporting config sha256:adf3eec768a14b6e183a1010cb96d91155a82fd722a1091440c88f3747f1f53f 0.0s
=> => sending tarball 2.8s
=> importing to docker
The image becomes available in the image store when the build finishes:
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<image> latest adf3eec768a1 2 minutes ago 197MB
The docker-container driver supports cache persistence, as it stores all the BuildKit state and related cache into a dedicated Docker volume.
To persist the docker-container driver's cache, even after recreating the driver using docker buildx rm and docker buildx create, you can destroy the builder using the --keep-state flag:
For example, to create a builder named container and then remove it while persisting state:
# setup a builder
$ docker buildx create --name=container --driver=docker-container --use --bootstrap
container
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
container * docker-container
container0 desktop-linux running v0.10.5 linux/amd64
$ docker volume ls
DRIVER VOLUME NAME
local buildx_buildkit_container0_state
# remove the builder while persisting state
$ docker buildx rm --keep-state container
$ docker volume ls
DRIVER VOLUME NAME
local buildx_buildkit_container0_state
# the newly created driver with the same name will have all the state of the previous one!
$ docker buildx create --name=container --driver=docker-container --use --bootstrap
container
The docker-container driver supports using QEMU (user mode) to build non-native platforms. Use the --platform flag to specify which architectures you want to build for. For example, to build a Linux image for amd64 and arm64:
$ docker buildx build \
--builder=container \
--platform=linux/amd64,linux/arm64 \
-t <registry>/<image> \
--push .
Warning
QEMU performs full-system emulation of non-native platforms, which is much slower than native builds. Compute-heavy tasks like compilation and compression/decompression will likely take a large performance hit.
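On hosts where emulation isn't already set up, QEMU binfmt handlers are commonly registered with the tonistiigi/binfmt image before running cross-platform builds (a sketch; Docker Desktop already ships with emulation configured):

# Register QEMU binfmt handlers for cross-platform emulation (not needed on Docker Desktop)
$ docker run --privileged --rm tonistiigi/binfmt --install all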
You can customize the network that the builder container uses. This is useful if you need to use a specific network for your builds.
For example, let's create a network named foonet:
$ docker network create foonet
Now create a docker-container builder that will use this network:
$ docker buildx create --use \
--name mybuilder \
--driver docker-container \
--driver-opt "network=foonet"
Boot and inspect mybuilder:
$ docker buildx inspect --bootstrap
Inspect the builder container and see what network is being used:
$ docker inspect buildx_buildkit_mybuilder0 --format={{.NetworkSettings.Networks}}
map[foonet:0xc00018c0c0]
For more information on the Docker container driver, see the buildx reference.
The Buildx Docker driver is the default driver. It uses the BuildKit server components built directly into the Docker engine. The Docker driver requires no configuration.
Unlike the other drivers, builders using the Docker driver can't be manually created. They're only created automatically from the Docker context.
Images built with the Docker driver are automatically loaded to the local image store.
# The Docker driver is used by buildx by default
docker buildx build .
It's not possible to configure which BuildKit version to use, or to pass any additional BuildKit parameters to a builder using the Docker driver. The BuildKit version and parameters are preset by the Docker engine internally.
If you need additional configuration and flexibility, consider using the Docker container driver.
For more information on the Docker driver, see the buildx reference.
Buildx drivers are configurations for how and where the BuildKit backend runs. Driver settings are customizable and allow fine-grained control of the builder. Buildx supports the following drivers:
- docker: uses the BuildKit library bundled into the Docker daemon.
- docker-container: creates a dedicated BuildKit container using Docker.
- kubernetes: creates BuildKit pods in a Kubernetes cluster.
- remote: connects directly to a manually managed BuildKit daemon.
Different drivers support different use cases. The default docker driver prioritizes simplicity and ease of use. It has limited support for advanced features like caching and output formats, and isn't configurable. Other drivers provide more flexibility and are better at handling advanced scenarios.
The following table outlines some differences between drivers.
| Feature | docker | docker-container | kubernetes | remote |
|---|---|---|---|---|
| Automatically load image | ✓ | | | |
| Cache export | Inline only | ✓ | ✓ | ✓ |
| Tarball output | | ✓ | ✓ | ✓ |
| Multi-arch images | | ✓ | ✓ | ✓ |
| BuildKit configuration | | ✓ | ✓ | Managed externally |
Use docker buildx ls to see builder instances available on your system, and the drivers they're using.
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
Depending on your setup, you may find multiple builders in your list that use the Docker driver. For example, on a system that runs both a manually installed version of dockerd and Docker Desktop, you might see the following output from docker buildx ls:
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
desktop-linux * docker
desktop-linux desktop-linux running 20.10.17 linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
This is because the Docker driver builders are automatically pulled from the available Docker contexts. When you add new contexts using docker context create, these will appear in your list of buildx builders.
The asterisk (*) next to the builder name indicates that this is the selected builder, which gets used by default unless you specify a builder using the --builder option.
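For example, to run a one-off build with a specific builder instead of the selected one (the builder name here is hypothetical):

$ docker buildx build --builder=mybuilder -t <image> .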
Use the docker buildx create command to create a builder, and specify the driver using the --driver option.
$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>
This creates a new builder instance with a single build node. After creating a new builder you can also append new nodes to it.
To use a remote node for your builders, you can set the DOCKER_HOST environment variable or provide a remote context name when creating the builder.
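As a sketch, assuming an SSH-reachable machine named my-remote-host, you could create a Docker context for it and then create a builder on top of that context; the context and builder names are hypothetical:

# Create a context pointing at a remote Docker daemon over SSH (host name is a placeholder)
$ docker context create remote-amd64 --docker "host=ssh://user@my-remote-host"

# Create a builder whose node runs on that remote daemon
$ docker buildx create remote-amd64 --name=remote-builder --driver=docker-container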
To switch between different builders, use the docker buildx use <name> command. After running this command, the build commands will automatically use this builder.
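For example, to select a builder you created earlier and then build with it:

$ docker buildx use <builder-name>
$ docker buildx build .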
Read about each of the Buildx drivers to learn about how they work and how to use them:
The Buildx Kubernetes driver allows connecting your local development or CI environments to your Kubernetes cluster to allow access to more powerful and varied compute resources.
Run the following command to create a new builder, named kube, that uses the Kubernetes driver:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=[key=value,...]
The following table describes the available driver-specific options that you can pass to --driver-opt:
| Parameter | Type | Default | Description |
|---|---|---|---|
| image | String | | Sets the image to use for running BuildKit. |
| namespace | String | Namespace in current Kubernetes context | Sets the Kubernetes namespace. |
| replicas | Integer | 1 | Sets the number of Pod replicas to create. See scaling BuildKit. |
| requests.cpu | CPU units | | Sets the request CPU value specified in units of Kubernetes CPU. For example requests.cpu=100m or requests.cpu=2. |
| requests.memory | Memory size | | Sets the request memory value specified in bytes or with a valid suffix. For example requests.memory=500Mi or requests.memory=4G. |
| limits.cpu | CPU units | | Sets the limit CPU value specified in units of Kubernetes CPU. For example requests.cpu=100m or requests.cpu=2. |
| limits.memory | Memory size | | Sets the limit memory value specified in bytes or with a valid suffix. For example requests.memory=500Mi or requests.memory=4G. |
| nodeselector | CSV string | | Sets the pod's nodeSelector label(s). See node assignment. |
| tolerations | CSV string | | Configures the pod's taint toleration. See node assignment. |
| rootless | true, false | false | Run the container as a non-root user. See rootless mode. |
| loadbalance | sticky, random | sticky | Load-balancing strategy. If set to sticky, the pod is chosen using the hash of the context path. |
| qemu.install | true, false | | Install QEMU emulation for multi platforms support. See QEMU. |
| qemu.image | String | tonistiigi/binfmt:latest | Sets the QEMU emulation image. See QEMU. |
One of the main advantages of the Kubernetes driver is that you can scale the number of builder replicas up and down to handle increased build load. Scaling is configurable using the following driver options:
- replicas=N: scales the number of BuildKit pods to the desired size. By default, it only creates a single pod. Increasing the number of replicas lets you take advantage of multiple nodes in your cluster.
- requests.cpu, requests.memory, limits.cpu, limits.memory: allow requesting and limiting the resources available to each BuildKit pod, as described in the official Kubernetes documentation.
For example, to create 4 replica BuildKit pods:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,replicas=4
Listing the pods, you get this:
$ kubectl -n buildkit get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kube0 4/4 4 4 8s
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-48ld2 1/1 Running 0 8s
kube0-6977cdcb75-rkc6b 1/1 Running 0 8s
kube0-6977cdcb75-vb4ks 1/1 Running 0 8s
kube0-6977cdcb75-z4fzs 1/1 Running 0 8s
Additionally, you can use the loadbalance=(sticky|random) option to control the load-balancing behavior when there are multiple replicas. random selects random nodes from the node pool, providing an even workload distribution across replicas. sticky (the default) attempts to connect the same build performed multiple times to the same node each time, ensuring better use of local cache.
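For example, a sketch of a scaled builder that distributes builds randomly across four replicas (the namespace and builder name match the earlier examples):

$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4,loadbalance=random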
For more information on scalability, see the options for buildx create.
The Kubernetes driver allows you to control the scheduling of BuildKit pods using the nodeSelector and tolerations driver options.
The value of the nodeSelector parameter is a comma-separated string of key-value pairs, where the key is the node label and the value is the label text. For example:
"nodeselector=kubernetes.io/arch=arm64"
The tolerations parameter is a semicolon-separated list of taints. It accepts the same values as the Kubernetes manifest. Each tolerations entry specifies a taint key and the value, operator, or effect. For example:
"tolerations=key=foo,value=bar;key=foo2,operator=exists;key=foo3,effect=NoSchedule"
Due to quoting rules for shell commands, you must wrap the nodeselector and tolerations parameters in single quotes. You can even wrap all of --driver-opt in single quotes, for example:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
'--driver-opt="nodeselector=label1=value1,label2=value2","tolerations=key=key1,value=value1"'
The Buildx Kubernetes driver has support for creating multi-platform images, either using QEMU or by leveraging the native architecture of nodes.
Like the docker-container driver, the Kubernetes driver also supports using QEMU (user mode) to build images for non-native platforms. Include the --platform flag and specify which platforms you want to output to.
For example, to build a Linux image for amd64 and arm64:
$ docker buildx build \
--builder=kube \
--platform=linux/amd64,linux/arm64 \
-t <user>/<image> \
--push .
Warning
QEMU performs full-system emulation of non-native platforms, which is much slower than native builds. Compute-heavy tasks like compilation and compression/decompression will likely take a large performance hit.
Using a custom BuildKit image or invoking non-native binaries in builds may require that you explicitly turn on QEMU using the qemu.install option when creating the builder:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,qemu.install=true
If you have access to cluster nodes of different architectures, the Kubernetes driver can take advantage of these for native builds. To do this, use the --append flag of docker buildx create.
First, create your builder with explicit support for a single architecture, for example amd64:
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--platform=linux/amd64 \
--node=builder-amd64 \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=amd64"
This creates a Buildx builder named kube, containing a single builder node builder-amd64. Note that the Buildx concept of a node isn't the same as the Kubernetes concept of a node. A Buildx node in this case could connect multiple Kubernetes nodes of the same architecture together.
With the kube builder created, you can now introduce another architecture into the mix using --append. For example, to add arm64:
$ docker buildx create \
--append \
--bootstrap \
--name=kube \
--driver=kubernetes \
--platform=linux/arm64 \
--node=builder-arm64 \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=arm64"
If you list builders now, you should be able to see both nodes present:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
builder-amd64 kubernetes:///kube?deployment=builder-amd64&kubeconfig= running linux/amd64*, linux/amd64/v2, linux/amd64/v3, linux/386
builder-arm64 kubernetes:///kube?deployment=builder-arm64&kubeconfig= running linux/arm64*
You should now be able to build multi-arch images with amd64 and arm64 combined, by specifying those platforms together in your buildx command:
$ docker buildx build --builder=kube --platform=linux/amd64,linux/arm64 -t <user>/<image> --push .
You can repeat the buildx create --append command for as many architectures as you want to support.
The Kubernetes driver supports rootless mode. For more information on how rootless mode works, and its requirements, see here.
To turn it on in your cluster, you can use the rootless=true driver option:
$ docker buildx create \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,rootless=true
This will create your pods without securityContext.privileged.
Requires Kubernetes version 1.19 or later. Using Ubuntu as the host kernel is recommended.
This guide shows you how to:
Prerequisites: a running Kubernetes cluster that you can access with the kubectl command, with the KUBECONFIG environment variable set appropriately if necessary.
Create a buildkit namespace.
Creating a separate namespace helps keep your Buildx resources separate from other resources in the cluster.
$ kubectl create namespace buildkit
namespace/buildkit created
Create a new Buildx builder with the Kubernetes driver:
# Remember to specify the namespace in driver options
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
  --driver-opt=namespace=buildkit
List available Buildx builders using docker buildx ls:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
kube0-6977cdcb75-k9h9m running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default * docker
default default running linux/amd64, linux/386
Inspect the running pods created by the Buildx driver with kubectl.
$ kubectl -n buildkit get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kube0 1/1 1 1 32s
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-k9h9m 1/1 Running 0 32s
The buildx driver creates the necessary resources on your cluster in the specified namespace (in this case, buildkit), while keeping your driver configuration locally.
Use your new builder by including the --builder flag when running buildx commands. For example:
# Replace <registry> with your Docker username
# and <image> with the name of the image you want to build
docker buildx build \
--builder=kube \
-t <registry>/<image> \
--push .
That's it! You've now built an image from a Kubernetes pod, using Buildx!
For more information on the Kubernetes driver, see the buildx reference.