Build context

The docker build or docker buildx build commands build Docker images from a Dockerfile and a “context”.

A build’s context is the set of files located at the PATH or URL specified as the positional argument to the build command:

$ docker build [OPTIONS] PATH | URL | -
                         ^^^^^^^^^^^^^^

The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context or a RUN --mount=type=bind instruction for better performance with BuildKit. The build context is processed recursively. So, a PATH includes any subdirectories and the URL includes the repository and its submodules.
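
For instance, here is a minimal Dockerfile sketch that reads a file from the context through a bind mount instead of copying it into a layer (the target path /src is arbitrary):

# syntax=docker/dockerfile:1
FROM alpine
# Bind-mount the build context at /src for the duration of this RUN;
# nothing from the context is copied into the image layer.
RUN --mount=type=bind,target=/src ls /src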

PATH context

This example shows a build command that uses the current directory (.) as the build context:

$ docker build .
...
#16 [internal] load build context
#16 sha256:23ca2f94460dcbaf5b3c3edbaaa933281a4e0ea3d92fe295193e4df44dc68f85
#16 transferring context: 13.16MB 2.2s done
...

With the following Dockerfile:

# syntax=docker/dockerfile:1
FROM busybox
WORKDIR /src
COPY foo .

And this directory structure:

.
├── Dockerfile
├── bar
├── foo
└── node_modules

The legacy builder sends the entire directory to the daemon, including the bar and node_modules directories, even though the Dockerfile does not use them. When using BuildKit, the client only sends the files required by the COPY instructions, in this case foo.

In some cases you may want to send the entire context:

# syntax=docker/dockerfile:1
FROM busybox
WORKDIR /src
COPY . .

You can use a .dockerignore file to exclude some files or directories from being sent:

# .dockerignore
node_modules
bar
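
A .dockerignore file also supports glob-style patterns and negations with !. A small illustrative sketch (the file names are hypothetical):

# .dockerignore
# Ignore log files in any directory
**/*.log
# Ignore the build output directory, but re-include one file
build/
!build/keep.txt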

Warning

Avoid using your root directory, /, as the PATH for your build context, as it causes the build to transfer the entire contents of your hard drive to the daemon.

URL context

The URL parameter can refer to three kinds of resources:

  • Git repositories
  • Pre-packaged tarball contexts
  • Plain text files

Git repositories

When the URL parameter points to the location of a Git repository, the repository acts as the build context. The builder recursively pulls the repository and its submodules. A shallow clone is performed, so only the latest commits are pulled down, not the entire history. The repository is first pulled into a temporary directory on your host. After that succeeds, the directory is sent to the daemon as the context. The local copy gives you the ability to access private repositories using local user credentials, VPNs, and so on.

Note

If the URL parameter contains a fragment, the system recursively clones the repository and its submodules using a git clone --recursive command.

Git URLs accept a context configuration parameter in the form of a URL fragment, separated by a colon (:). The first part represents the reference that Git checks out, which can be a branch, a tag, or a remote reference. The second part represents a subdirectory inside the repository that is used as the build context.

For example, run this command to use a directory called docker in the branch container:

$ docker build https://github.com/user/myrepo.git#container:docker

The following table represents all the valid suffixes with their build contexts:

Build Syntax Suffix            Commit Used           Build Context Used
myrepo.git                     refs/heads/master     /
myrepo.git#mytag               refs/tags/mytag       /
myrepo.git#mybranch            refs/heads/mybranch   /
myrepo.git#pull/42/head        refs/pull/42/head     /
myrepo.git#:myfolder           refs/heads/master     /myfolder
myrepo.git#master:myfolder     refs/heads/master     /myfolder
myrepo.git#mytag:myfolder      refs/tags/mytag       /myfolder
myrepo.git#mybranch:myfolder   refs/heads/mybranch   /myfolder
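
For instance, these commands build from a tag, and from a subdirectory of a branch (the repository URL is illustrative):

$ docker build https://github.com/user/myrepo.git#mytag
$ docker build https://github.com/user/myrepo.git#mybranch:myfolder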

By default, the .git directory is not kept on Git checkouts. You can set the BuildKit built-in build argument BUILDKIT_CONTEXT_KEEP_GIT_DIR=1 to keep it. This can be useful if you want to retrieve Git information during your build:

# syntax=docker/dockerfile:1
FROM alpine
WORKDIR /src
RUN --mount=target=. \
  make REVISION=$(git rev-parse HEAD) build

$ docker build --build-arg BUILDKIT_CONTEXT_KEEP_GIT_DIR=1 https://github.com/user/myrepo.git#main

Tarball contexts

If you pass a URL to a remote tarball, the URL itself is sent to the daemon:

$ docker build http://server/context.tar.gz
#1 [internal] load remote build context
#1 DONE 0.2s

#2 copy /context /
#2 DONE 0.1s
...

The download operation is performed on the host the daemon is running on, which is not necessarily the same host from which the build command is issued. The daemon fetches context.tar.gz and uses it as the build context. Tarball contexts must be tar archives conforming to the standard tar UNIX format and can be compressed with any one of the xz, bzip2, gzip, or identity (no compression) formats.
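
As a sketch, you could package a context yourself and point the build at wherever the archive is hosted (the server URL and file layout here are hypothetical):

$ tar czf context.tar.gz Dockerfile src/
$ docker build http://server/context.tar.gz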

Text files

Instead of specifying a context, you can pass a single Dockerfile in the URL or pipe the file in via STDIN. To pipe a Dockerfile from STDIN:

$ docker build - < Dockerfile

With PowerShell on Windows, you can run:

Get-Content Dockerfile | docker build -

If you use STDIN or specify a URL pointing to a plain text file, the system places the contents into a file called Dockerfile, and any -f, --file option is ignored. In this scenario, there is no context.

The following example builds an image using a Dockerfile that is passed through stdin. No files are sent as build context to the daemon.

docker build -t myimage:latest - <<EOF
FROM busybox
RUN echo "hello world"
EOF

Omitting the build context can be useful in situations where your Dockerfile does not require files to be copied into the image, and improves the build-speed, as no files are sent to the daemon.

Multi-platform images

Docker images can support multiple platforms, which means that a single image may contain variants for different architectures, and sometimes for different operating systems, such as Windows.

When running an image with multi-platform support, docker automatically selects the image that matches your OS and architecture.

Most of the Docker Official Images on Docker Hub provide a variety of architectures. For example, the busybox image supports amd64, arm32v5, arm32v6, arm32v7, arm64v8, i386, ppc64le, and s390x. When running this image on an x86_64/amd64 machine, the amd64 variant is pulled and run.
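
You can also pull a specific variant explicitly with the --platform flag, for example:

$ docker pull --platform linux/arm64 busybox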

Building multi-platform images

Docker makes it easy to develop containers on, and for, Arm servers and devices. Using the standard Docker tooling and processes, you can build, push, pull, and run images seamlessly on different compute architectures. In most cases, you don’t have to make any changes to Dockerfiles or source code to start building for Arm.

BuildKit with Buildx is designed to build for multiple platforms, not just the architecture and operating system of the machine that invokes the build.

When you invoke a build, you can set the --platform flag to specify the target platform for the build output (for example, linux/amd64, linux/arm64, or darwin/amd64).

When the current builder instance is backed by the docker-container driver, you can specify multiple platforms together. In this case, it builds a manifest list which contains images for all specified architectures. When you use this image in docker run or docker service , Docker picks the correct image based on the node’s platform.

You can build multi-platform images using three different strategies that are supported by Buildx and Dockerfiles:

  1. Using the QEMU emulation support in the kernel
  2. Building on multiple native nodes using the same builder instance
  3. Using a stage in Dockerfile to cross-compile to different architectures

QEMU is the easiest way to get started if your node already supports it (for example, if you are using Docker Desktop). It requires no changes to your Dockerfile, and BuildKit automatically detects the secondary architectures that are available. When BuildKit needs to run a binary for a different architecture, it automatically loads it through a binary registered in the binfmt_misc handler.

For QEMU binaries registered with binfmt_misc on the host OS to work transparently inside containers, they must be statically compiled and registered with the fix_binary flag. This requires a kernel >= 4.8 and binfmt-support >= 2.1.7. You can check for proper registration by verifying that F is among the flags in /proc/sys/fs/binfmt_misc/qemu-*. While Docker Desktop comes preconfigured with binfmt_misc support for additional platforms, other installations likely need to install it using the tonistiigi/binfmt image.

$ docker run --privileged --rm tonistiigi/binfmt --install all
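
To verify that the handlers are registered with the fix_binary (F) flag, you can inspect the binfmt_misc entries; a quick check, assuming the usual qemu-* handler names:

$ grep flags: /proc/sys/fs/binfmt_misc/qemu-*
/proc/sys/fs/binfmt_misc/qemu-aarch64:flags: F
/proc/sys/fs/binfmt_misc/qemu-arm:flags: F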

Using multiple native nodes provides better support for more complicated cases that QEMU can’t handle, and generally offers better performance. You can add additional nodes to the builder instance using the --append flag.

Assuming contexts node-amd64 and node-arm64 exist in docker context ls:

$ docker buildx create --use --name mybuild node-amd64
mybuild
$ docker buildx create --append --name mybuild node-arm64
$ docker buildx build --platform linux/amd64,linux/arm64 .

Finally, depending on your project, the language that you use may have good support for cross-compilation. In that case, multi-stage builds in Dockerfiles can be effectively used to build binaries for the platform specified with --platform using the native architecture of the build node. A list of build arguments like BUILDPLATFORM and TARGETPLATFORM is available automatically inside your Dockerfile and can be leveraged by the processes running as part of your build.

# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
FROM alpine
COPY --from=build /log /log

Getting started

Run the docker buildx ls command to list the existing builders:

$ docker buildx ls
NAME/NODE  DRIVER/ENDPOINT  STATUS   BUILDKIT PLATFORMS
default *  docker
  default  default          running  20.10.17 linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6

This displays the default built-in driver, which uses the BuildKit server components built directly into the Docker Engine, also known as the docker driver.

Create a new builder using the docker-container driver which gives you access to more complex features like multi-platform builds and the more advanced cache exporters, which are currently unsupported in the default docker driver:

$ docker buildx create --name mybuilder --driver docker-container --bootstrap
mybuilder

Switch to the new builder:

$ docker buildx use mybuilder

Note

Alternatively, run docker buildx create --name mybuilder --driver docker-container --bootstrap --use to create a new builder and switch to it using a single command.

And inspect it:

$ docker buildx inspect
Name:   mybuilder
Driver: docker-container

Nodes:
Name:      mybuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Buildkit:  v0.10.4
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6

Now listing the existing builders again, we can see our new builder is registered:

$ docker buildx ls
NAME/NODE     DRIVER/ENDPOINT              STATUS   BUILDKIT PLATFORMS
mybuilder     docker-container
  mybuilder0  unix:///var/run/docker.sock  running  v0.10.4  linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
default *     docker
  default     default                      running  20.10.17 linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6

Example

Test the workflow to ensure you can build, push, and run multi-platform images. Create a simple example Dockerfile, build a couple of image variants, and push them to Docker Hub.

The following example uses a single Dockerfile to build an Alpine image with cURL installed for multiple architectures:

# syntax=docker/dockerfile:1
FROM alpine:3.16
RUN apk add curl

Build the Dockerfile with buildx, passing the list of architectures to build for:

$ docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t <username>/<image>:latest --push .
...
#16 exporting to image
#16 exporting layers
#16 exporting layers 0.5s done
#16 exporting manifest sha256:71d7ecf3cd12d9a99e73ef448bf63ae12751fe3a436a007cb0969f0dc4184c8c 0.0s done
#16 exporting config sha256:a26f329a501da9e07dd9cffd9623e49229c3bb67939775f936a0eb3059a3d045 0.0s done
#16 exporting manifest sha256:5ba4ceea65579fdd1181dfa103cc437d8e19d87239683cf5040e633211387ccf 0.0s done
#16 exporting config sha256:9fcc6de03066ac1482b830d5dd7395da781bb69fe8f9873e7f9b456d29a9517c 0.0s done
#16 exporting manifest sha256:29666fb23261b1f77ca284b69f9212d69fe5b517392dbdd4870391b7defcc116 0.0s done
#16 exporting config sha256:92cbd688027227473d76e705c32f2abc18569c5cfabd00addd2071e91473b2e4 0.0s done
#16 exporting manifest list sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48 0.0s done
#16 ...

#17 [auth] <username>/<image>:pull,push token for registry-1.docker.io
#17 DONE 0.0s

#16 exporting to image
#16 pushing layers
#16 pushing layers 3.6s done
#16 pushing manifest for docker.io/<username>/<image>:latest@sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48
#16 pushing manifest for docker.io/<username>/<image>:latest@sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48 1.4s done
#16 DONE 5.6s

Note

  • <username> must be a valid Docker ID and <image> a valid repository on Docker Hub.
  • The --platform flag informs buildx to create Linux images for AMD 64-bit, Arm 64-bit, and Armv7 architectures.
  • The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub.

Inspect the image using the docker buildx imagetools command:

$ docker buildx imagetools inspect <username>/<image>:latest
Name:      docker.io/<username>/<image>:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48

Manifests:
  Name:      docker.io/<username>/<image>:latest@sha256:71d7ecf3cd12d9a99e73ef448bf63ae12751fe3a436a007cb0969f0dc4184c8c
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/amd64

  Name:      docker.io/<username>/<image>:latest@sha256:5ba4ceea65579fdd1181dfa103cc437d8e19d87239683cf5040e633211387ccf
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm64

  Name:      docker.io/<username>/<image>:latest@sha256:29666fb23261b1f77ca284b69f9212d69fe5b517392dbdd4870391b7defcc116
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm/v7

The image is now available on Docker Hub with the tag <username>/<image>:latest. You can use this image to run a container on Intel laptops, Amazon EC2 Graviton instances, Raspberry Pis, and other architectures. Docker pulls the correct image for the current architecture, so Raspberry Pis run the 32-bit Arm version and EC2 Graviton instances run 64-bit Arm.

The digest identifies a fully qualified image variant. You can also run images targeted for a different architecture on Docker Desktop. For example, when you run the following on macOS:

$ docker run --rm docker.io/<username>/<image>:latest@sha256:2b77acdfea5dc5baa489ffab2a0b4a387666d1d526490e31845eb64e3e73ed20 uname -m
aarch64
$ docker run --rm docker.io/<username>/<image>:latest@sha256:723c22f366ae44e419d12706453a544ae92711ae52f510e226f6467d8228d191 uname -m
armv7l

In the above example, uname -m returns aarch64 and armv7l as expected, even when running the commands on a native macOS or Windows developer machine.

Support on Docker Desktop

Docker Desktop provides binfmt_misc multi-architecture support, which means you can run containers for different Linux architectures such as arm, mips, ppc64le, and even s390x.

This does not require any special configuration in the container itself, as it uses qemu-static from the Docker for Mac VM. Because of this, you can run containers for other architectures, such as the arm32v7 or ppc64le variants of the busybox image.
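
For example, you can run the 32-bit Arm variant of busybox directly and check the reported architecture (a minimal illustration):

$ docker run --rm arm32v7/busybox uname -m
armv7l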

Multi-stage builds

Multi-stage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain.

Acknowledgment

Special thanks to Alex Ellis for granting permission to use his blog post Builder pattern vs. Multi-stage builds in Docker as the basis of the examples below.

Before multi-stage builds

One of the most challenging things about building images is keeping the image size down. Each RUN , COPY , and ADD instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you don’t need before moving on to the next layer. To write a really efficient Dockerfile, you have traditionally needed to employ shell tricks and other logic to keep the layers as small as possible and to ensure that each layer has the artifacts it needs from the previous layer and nothing else.

It was actually very common to have one Dockerfile to use for development (which contained everything needed to build your application), and a slimmed-down one to use for production, which only contained your application and exactly what was needed to run it. This has been referred to as the “builder pattern”. Maintaining two Dockerfiles is not ideal.

Here’s an example of a build.Dockerfile and Dockerfile which adhere to the builder pattern above:

build.Dockerfile :

# syntax=docker/dockerfile:1
FROM golang:1.16
WORKDIR /go/src/github.com/alexellis/href-counter/
COPY app.go ./
RUN go get -d -v golang.org/x/net/html \
  && CGO_ENABLED=0 go build -a -installsuffix cgo -o app .

Notice that this example also artificially compresses two RUN commands together using the Bash && operator, to avoid creating an additional layer in the image. This is failure-prone and hard to maintain. It’s easy to insert another command and forget to continue the line using the \ character, for example.

Dockerfile :

# syntax=docker/dockerfile:1
FROM alpine:latest  
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY app ./
CMD ["./app"]

build.sh :

#!/bin/sh
echo Building alexellis2/href-counter:build
docker build -t alexellis2/href-counter:build . -f build.Dockerfile

docker container create --name extract alexellis2/href-counter:build  
docker container cp extract:/go/src/github.com/alexellis/href-counter/app ./app  
docker container rm -f extract

echo Building alexellis2/href-counter:latest
docker build --no-cache -t alexellis2/href-counter:latest .
rm ./app

When you run the build.sh script, it needs to build the first image, create a container from it to copy the artifact out, then build the second image. Both images take up room on your system and you still have the app artifact on your local disk as well.

Multi-stage builds vastly simplify this situation!

Use multi-stage builds

With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image. To show how this works, let’s adapt the Dockerfile from the previous section to use multi-stage builds.

# syntax=docker/dockerfile:1

FROM golang:1.16
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html  
COPY app.go ./
RUN CGO_ENABLED=0 go build -a -installsuffix cgo -o app .

FROM alpine:latest  
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]

You only need the single Dockerfile. You don’t need a separate build script, either. Just run docker build:

$ docker build -t alexellis2/href-counter:latest .

The end result is the same tiny production image as before, with a significant reduction in complexity. You don’t need to create any intermediate images, and you don’t need to extract any artifacts to your local system at all.

How does it work? The second FROM instruction starts a new build stage with the alpine:latest image as its base. The COPY --from=0 line copies just the built artifact from the previous stage into this new stage. The Go SDK and any intermediate artifacts are left behind, and not saved in the final image.

Name your build stages

By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first FROM instruction. However, you can name your stages by adding AS <NAME> to the FROM instruction. This example improves the previous one by naming the stages and using the name in the COPY instruction. This means that even if the instructions in your Dockerfile are re-ordered later, the COPY doesn’t break.

# syntax=docker/dockerfile:1

FROM golang:1.16 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html  
COPY app.go ./
RUN CGO_ENABLED=0 go build -a -installsuffix cgo -o app .

FROM alpine:latest  
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]  

Stop at a specific build stage

When you build your image, you don’t necessarily need to build the entire Dockerfile, including every stage. You can specify a target build stage. The following command assumes you are using the previous Dockerfile but stops at the stage named builder:

$ docker build --target builder -t alexellis2/href-counter:latest .

A few scenarios where this might be very powerful are:

  • Debugging a specific build stage
  • Using a debug stage with all debugging symbols or tools enabled, and a lean production stage
  • Using a testing stage in which your app gets populated with test data, but building for production using a different stage which uses real data

Use an external image as a “stage”

When using multi-stage builds, you are not limited to copying from stages you created earlier in your Dockerfile. You can use the COPY --from instruction to copy from a separate image, either using the local image name, a tag available locally or on a Docker registry, or a tag ID. The Docker client pulls the image if necessary and copies the artifact from there. The syntax is:

COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf
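
As a complete illustrative sketch, the copied file can then be used like any other file in the final image:

# syntax=docker/dockerfile:1
FROM alpine
# Copy the default nginx configuration from the official nginx image.
COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf
CMD ["cat", "/nginx.conf"]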

Use a previous stage as a new stage

You can pick up where a previous stage left off by referring to it when using the FROM directive. For example:

# syntax=docker/dockerfile:1

FROM alpine:latest AS builder
RUN apk --no-cache add build-base

FROM builder AS build1
COPY source1.cpp source.cpp
RUN g++ -o /binary source.cpp

FROM builder AS build2
COPY source2.cpp source.cpp
RUN g++ -o /binary source.cpp

Version compatibility

Multi-stage build syntax was introduced in Docker Engine 17.05.

Differences between legacy builder and BuildKit

The legacy Docker Engine builder processes all stages of a Dockerfile leading up to the selected --target . It will build a stage even if the selected target doesn’t depend on that stage.

BuildKit only builds the stages that the target stage depends on.

For example, given the following Dockerfile:

# syntax=docker/dockerfile:1
FROM ubuntu AS base
RUN echo "base"

FROM base AS stage1
RUN echo "stage1"

FROM base AS stage2
RUN echo "stage2"

With BuildKit enabled, building the stage2 target in this Dockerfile means only base and stage2 are processed. There is no dependency on stage1, so it’s skipped.

$ DOCKER_BUILDKIT=1 docker build --no-cache -f Dockerfile --target stage2 .
[+] Building 0.4s (7/7) FINISHED                                                                    
 => [internal] load build definition from Dockerfile                                            0.0s
 => => transferring dockerfile: 36B                                                             0.0s
 => [internal] load .dockerignore                                                               0.0s
 => => transferring context: 2B                                                                 0.0s
 => [internal] load metadata for docker.io/library/ubuntu:latest                                0.0s
 => CACHED [base 1/2] FROM docker.io/library/ubuntu                                             0.0s
 => [base 2/2] RUN echo "base"                                                                  0.1s
 => [stage2 1/1] RUN echo "stage2"                                                              0.2s
 => exporting to image                                                                          0.0s
 => => exporting layers                                                                         0.0s
 => => writing image sha256:f55003b607cef37614f607f0728e6fd4d113a4bf7ef12210da338c716f2cfd15    0.0s

On the other hand, building the same target without BuildKit results in all stages being processed:

$ DOCKER_BUILDKIT=0 docker build --no-cache -f Dockerfile --target stage2 .
Sending build context to Docker daemon  219.1kB
Step 1/6 : FROM ubuntu AS base
 ---> a7870fd478f4
Step 2/6 : RUN echo "base"
 ---> Running in e850d0e42eca
base
Removing intermediate container e850d0e42eca
 ---> d9f69f23cac8
Step 3/6 : FROM base AS stage1
 ---> d9f69f23cac8
Step 4/6 : RUN echo "stage1"
 ---> Running in 758ba6c1a9a3
stage1
Removing intermediate container 758ba6c1a9a3
 ---> 396baa55b8c3
Step 5/6 : FROM base AS stage2
 ---> d9f69f23cac8
Step 6/6 : RUN echo "stage2"
 ---> Running in bbc025b93175
stage2
Removing intermediate container bbc025b93175
 ---> 09fc3770a9c4
Successfully built 09fc3770a9c4

stage1 gets executed when BuildKit is disabled, even though stage2 does not depend on it.

Packaging your software

Dockerfile

It all starts with a Dockerfile.

Docker builds images by reading the instructions from a Dockerfile. A Dockerfile is a text file containing instructions, in a specific format, for assembling your application into a container image. You can find the full specification in the Dockerfile reference.

Here are the most common types of instructions:

Instruction           Description
FROM <image>          Defines a base for your image.
RUN <command>         Executes any commands in a new layer on top of the current image and commits the result. RUN also has a shell form for running commands.
WORKDIR <directory>   Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile.
COPY <src> <dest>     Copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
CMD <command>         Lets you define the default program that is run once you start the container based on this image. Each Dockerfile only has one CMD, and only the last CMD instance is respected when multiple exist.

Dockerfiles are crucial inputs for image builds and can facilitate automated, multi-layer image builds based on your unique configurations. Dockerfiles can start simple and grow with your needs and support images that require complex instructions. For all the possible instructions, see the Dockerfile reference.

The default filename to use for a Dockerfile is Dockerfile , without a file extension. Using the default name allows you to run the docker build command without having to specify additional command flags.

Some projects may need distinct Dockerfiles for specific purposes. A common convention is to name these <something>.Dockerfile . Such Dockerfiles can then be used through the --file (or -f shorthand) option on the docker build command. Refer to the “Specify a Dockerfile” section in the docker build reference to learn about the --file option.
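
For example, with a Dockerfile named prod.Dockerfile (the name is just illustrative):

$ docker build --file prod.Dockerfile .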

Note

We recommend using the default (Dockerfile) for your project’s primary Dockerfile.

Docker images consist of read-only layers , each resulting from an instruction in the Dockerfile. Layers are stacked sequentially and each one is a delta representing the changes applied to the previous layer.

Example

Here’s a simple Dockerfile example to get you started with building images. We’ll take a simple “Hello World” Python Flask application and bundle it into a Docker image that you can test locally or deploy anywhere!

Let’s say we have a hello.py file with the following content:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

Don’t worry about understanding the full example if you’re not familiar with Python; it’s just a simple web server that serves a single page saying “Hello World”.

Note

If you test the example, make sure to copy over the indentation as well! For more information about this sample Flask application, check the Flask Quickstart page.

Here’s the Dockerfile that will be used to create an image for our application:

# syntax=docker/dockerfile:1
FROM ubuntu:22.04

# install app dependencies
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip install flask==2.1.*

# install app
COPY hello.py /

# final configuration
ENV FLASK_APP=hello
EXPOSE 8000
CMD flask run --host 0.0.0.0 --port 8000

The first line to add to a Dockerfile is a # syntax parser directive. While optional, this directive instructs the Docker builder what syntax to use when parsing the Dockerfile, and allows older Docker versions with BuildKit enabled to use a specific Dockerfile frontend before starting the build. Parser directives must appear before any other comment, whitespace, or Dockerfile instruction in your Dockerfile, and should be the first line in Dockerfiles.

# syntax=docker/dockerfile:1

Note

We recommend using docker/dockerfile:1 , which always points to the latest release of the version 1 syntax. BuildKit automatically checks for updates of the syntax before building, making sure you are using the most current version.

Next we define the first instruction:

FROM ubuntu:22.04

Here the FROM instruction sets our base image to the 22.04 release of Ubuntu. All following instructions are executed on this base image, in this case, an Ubuntu environment. The notation ubuntu:22.04 follows the name:tag standard for naming Docker images. When you build your image, you use this notation to name your images, and you can use it to refer to any existing Docker image. There are many public images you can leverage in your projects. Explore Docker Hub to find them.

# install app dependencies
RUN apt-get update && apt-get install -y python3 python3-pip

This RUN instruction executes a shell command during the build.

In this example, our base image is a full Ubuntu operating system, so we have access to its package manager, apt. The provided commands update our package lists and then, after that succeeds, install python3 and pip, the package manager for Python.

Also note the # install app dependencies line. This is a comment. Comments in Dockerfiles begin with the # symbol. As your Dockerfile evolves, comments can be instrumental in documenting how your Dockerfile works for future readers and editors of the file.

Note

A line starting with # is normally treated as a comment and ignored. However, when you use BuildKit (the default), a # line at the start of the Dockerfile that matches a directive pattern, such as # syntax=, is treated as a directive instead.

RUN pip install flask==2.1.*

This second RUN instruction requires that we’ve installed pip in the layer before. After the previous instruction has been applied, we can use the pip command to install the Flask web framework. This is the framework we used to write our basic “Hello World” application above, so to run it in Docker, we need to make sure it’s installed.

COPY hello.py /

Now we use the COPY instruction to copy our hello.py file from the local build context into the root directory of our image. After being executed, we’ll end up with a file called /hello.py inside the image.

ENV FLASK_APP=hello

This ENV instruction sets a Linux environment variable we’ll need later. This is a Flask-specific variable that configures the command later used to run our hello.py application. Without it, Flask wouldn’t know where to find our application in order to run it.

EXPOSE 8000

This EXPOSE instruction marks that our final image has a service listening on port 8000 . This isn’t required, but it is a good practice, as users and tools can use this to understand what your image does.

CMD flask run --host 0.0.0.0 --port 8000

Finally, the CMD instruction sets the command that runs when a user starts a container based on this image. In this case, we start the Flask development server listening on all addresses on port 8000.

Testing

To test our Dockerfile, we’ll first build it using the docker build command:

$ docker build -t test:latest .

Here the -t test:latest option specifies the name (required) and tag (optional) of the image we’re building. The . specifies the build context as the current directory. In this example, this is where build expects to find the Dockerfile and the local files the Dockerfile needs to access, in this case your Python application.

So, in accordance with how the build command and the build context work, your Dockerfile and Python app need to be in the same directory.

Now run your newly built image:

$ docker run -p 8000:8000 test:latest

From your computer, open a browser and navigate to http://localhost:8000
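
Alternatively, check it from the command line; you should see the application’s response:

$ curl http://localhost:8000
Hello World!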

Note

You can also build and run using Play with Docker that provides you with a temporary Docker instance in the cloud.

Other resources

If you are interested in examples in other languages, such as Go, check out our language-specific guides in the Guides section.

Color output controls

BuildKit and Buildx have support for modifying the colors that are used to output information to the terminal. You can set the environment variable BUILDKIT_COLORS to something like run=123,20,245:error=yellow:cancel=blue:warning=white to set the colors that you would like to use:

[Screenshot: progress output with custom colors]
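
For example, to colorize a single build invocation (an illustrative value; colors can be names or comma-separated RGB values, as shown above):

$ BUILDKIT_COLORS="run=green:warning=yellow:error=red:cancel=cyan" docker buildx build .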

Setting NO_COLOR to anything will disable any colorized output as recommended by no-color.org:

[Screenshot: progress output with colors disabled]
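
For instance:

$ NO_COLOR=1 docker buildx build .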

Note

Parsing errors will be reported but ignored. This will result in default color values being used where needed.

See also the list of pre-defined colors.

Configure BuildKit

If you create a docker-container or kubernetes builder with Buildx, you can apply a custom BuildKit configuration by passing the --config flag to the docker buildx create command.

Registry mirror

You can define a registry mirror to use for your builds. Doing so redirects BuildKit to pull images from a different hostname. The following steps show how to define mirror.gcr.io as a mirror for docker.io (Docker Hub).

  1. Create a TOML file at /etc/buildkitd.toml with the following content:

    debug = true
    [registry."docker.io"]
      mirrors = ["mirror.gcr.io"]
    

    Note

    debug = true turns on debug requests in the BuildKit daemon, which logs a message that shows when a mirror is being used.

  2. Create a docker-container builder that uses this BuildKit configuration:

    $ docker buildx create --use --bootstrap \
      --name mybuilder \
      --driver docker-container \
      --config /etc/buildkitd.toml
    
  3. Build an image:

    docker buildx build --load . -f - <<EOF
    FROM alpine
    RUN echo "hello world"
    EOF
    

The BuildKit logs for this builder now show that it uses the GCR mirror. You can tell because the response messages include the x-goog-* HTTP headers.

$ docker logs buildx_buildkit_mybuilder0
...
time="2022-02-06T17:47:48Z" level=debug msg="do request" request.header.accept="application/vnd.docker.container.image.v1+json, */*" request.header.user-agent=containerd/1.5.8+unknown request.method=GET spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=1356 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=1469 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:25:17 GMT" response.header.etag="\"774380abda8f4eae9a149e5d5d3efc83\"" response.header.expires="Sun, 06 Feb 2022 18:25:17 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:57 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788077652182 response.header.x-goog-hash="crc32c=V3DSrg==" response.header.x-goog-hash.1="md5=d0OAq9qPTq6aFJ5dXT78gw==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=1469 response.header.x-guploader-uploadid=ADPycduqQipVAXc3tzXmTzKQ2gTT6CV736B2J628smtD1iDytEyiYCgvvdD8zz9BT1J1sASUq9pW_ctUyC4B-v2jvhIxnZTlKg response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=760 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=1471 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:35:13 GMT" response.header.etag="\"35d688bd15327daafcdb4d4395e616a8\"" response.header.expires="Sun, 06 Feb 2022 18:35:13 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:12 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788032100793 response.header.x-goog-hash="crc32c=aWgRjA==" response.header.x-goog-hash.1="md5=NdaIvRUyfar8201DleYWqA==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=1471 response.header.x-guploader-uploadid=ADPycdtR-gJYwC7yHquIkJWFFG8FovDySvtmRnZBqlO3yVDanBXh_VqKYt400yhuf0XbQ3ZMB9IZV2vlcyHezn_Pu3a1SMMtiw response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="do request" request.header.accept="application/vnd.docker.image.rootfs.diff.tar.gzip, */*" request.header.user-agent=containerd/1.5.8+unknown request.method=GET spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=1356 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=2818413 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:25:17 GMT" response.header.etag="\"1d55e7be5a77c4a908ad11bc33ebea1c\"" response.header.expires="Sun, 06 Feb 2022 18:25:17 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:06 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788026431708 response.header.x-goog-hash="crc32c=ZojF+g==" response.header.x-goog-hash.1="md5=HVXnvlp3xKkIrRG8M+vqHA==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=2818413 response.header.x-guploader-uploadid=ADPycdsebqxiTBJqZ0bv9zBigjFxgQydD2ESZSkKchpE0ILlN9Ibko3C5r4fJTJ4UR9ddp-UBd-2v_4eRpZ8Yo2llW_j4k8WhQ response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
...

Setting registry certificates

If you specify registry certificates in the BuildKit configuration, the daemon copies the files into the container under /etc/buildkit/certs . The following steps show adding a self-signed registry certificate to the BuildKit configuration.

  1. Add the following configuration to /etc/buildkitd.toml :

    # /etc/buildkitd.toml
    debug = true
    [registry."myregistry.com"]
      ca=["/etc/certs/myregistry.pem"]
      [[registry."myregistry.com".keypair]]
        key="/etc/certs/myregistry_key.pem"
        cert="/etc/certs/myregistry_cert.pem"
    

    This tells the builder to push images to the myregistry.com registry using the certificates in the specified location (/etc/certs).

  2. Create a docker-container builder that uses this configuration:

    $ docker buildx create --use --bootstrap \
      --name mybuilder \
      --driver docker-container \
      --config /etc/buildkitd.toml
    
  3. Inspect the builder’s configuration file (/etc/buildkit/buildkitd.toml); it shows that the certificate configuration is now configured in the builder.

    $ docker exec -it buildx_buildkit_mybuilder0 cat /etc/buildkit/buildkitd.toml
    
    debug = true
    
    [registry]
    
      [registry."myregistry.com"]
        ca = ["/etc/buildkit/certs/myregistry.com/myregistry.pem"]
    
        [[registry."myregistry.com".keypair]]
          cert = "/etc/buildkit/certs/myregistry.com/myregistry_cert.pem"
          key = "/etc/buildkit/certs/myregistry.com/myregistry_key.pem"
    
  4. Verify that the certificates are inside the container:

    $ docker exec -it buildx_buildkit_mybuilder0 ls /etc/buildkit/certs/myregistry.com/
    myregistry.pem    myregistry_cert.pem   myregistry_key.pem
    

Now you can push to the registry using this builder, and it will authenticate using the certificates:

$ docker buildx build --push --tag myregistry.com/myimage:latest .

CNI networking

CNI networking for builders can be useful for dealing with network port contention during concurrent builds. CNI is not yet available in the default BuildKit image, but you can create your own image that includes CNI support.

The following Dockerfile example shows a custom BuildKit image with CNI support. It uses the CNI config for integration tests in BuildKit as an example. Feel free to include your own CNI configuration.

# syntax=docker/dockerfile:1

ARG BUILDKIT_VERSION=v{{ site.buildkit_version }}
ARG CNI_VERSION=v1.0.1

FROM --platform=$BUILDPLATFORM alpine AS cni-plugins
RUN apk add --no-cache curl
ARG CNI_VERSION
ARG TARGETOS
ARG TARGETARCH
WORKDIR /opt/cni/bin
RUN curl -Ls https://github.com/containernetworking/plugins/releases/download/$CNI_VERSION/cni-plugins-$TARGETOS-$TARGETARCH-$CNI_VERSION.tgz | tar xzv

FROM moby/buildkit:${BUILDKIT_VERSION}
ARG BUILDKIT_VERSION
RUN apk add --no-cache iptables
COPY --from=cni-plugins /opt/cni/bin /opt/cni/bin
ADD https://raw.githubusercontent.com/moby/buildkit/${BUILDKIT_VERSION}/hack/fixtures/cni.json /etc/buildkit/cni.json

Now you can build this image, and create a builder instance from it using the --driver-opt image option:

$ docker buildx build --tag buildkit-cni:local --load .
$ docker buildx create --use --bootstrap \
  --name mybuilder \
  --driver docker-container \
  --driver-opt "image=buildkit-cni:local" \
  --buildkitd-flags "--oci-worker-net=cni"

Resource limiting

Max parallelism

You can limit the parallelism of the BuildKit solver, which is particularly useful for low-powered machines, by using a BuildKit configuration while creating a builder with the --config flag.

# /etc/buildkitd.toml
[worker.oci]
  max-parallelism = 4

Now you can create a docker-container builder that uses this BuildKit configuration to limit parallelism:

$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --config /etc/buildkitd.toml

TCP connection limit

TCP connections are limited to 4 simultaneous connections per registry for pulling and pushing images, plus one additional connection dedicated to metadata requests. This connection limit prevents your build from getting stuck while pulling images. The dedicated metadata connection helps reduce the overall build time.

More information: moby/buildkit#2259
