Many organizations use Docker to unify their build and test environments across machines, and to provide an efficient mechanism for deploying applications. Starting with Pipeline versions 2.5 and higher, Pipeline has built-in support for interacting with Docker from within a Jenkinsfile.
While this section will cover the basics of utilizing Docker from within a Jenkinsfile, it will not cover the fundamentals of Docker, which can be read about in the Docker Getting Started Guide.
Pipeline is designed to easily use Docker images as the execution environment for a single Stage or the entire Pipeline. This means that a user can define the tools required for their Pipeline, without having to manually configure agents. Practically any tool which can be packaged in a Docker container can be used with ease by making only minor edits to a Jenkinsfile.
pipeline {
    agent {
        docker { image 'node:16.13.1-alpine' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
node {
    /* Requires the Docker Pipeline plugin to be installed */
    docker.image('node:16.13.1-alpine').inside {
        stage('Test') {
            sh 'node --version'
        }
    }
}
When the Pipeline executes, Jenkins will automatically start the specified container and execute the defined steps within it:
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
[guided-tour] Running shell script
+ node --version
v16.13.1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
In short: if it is important to keep the workspace synchronized with other stages, use
reuseNode true
.
Otherwise, a dockerized stage can be run on any other agent, or on the same agent but in a temporary workspace.
By default, for a containerized stage, Jenkins will:
pick any agent,
create a new empty workspace,
clone the pipeline code into it,
mount this new workspace into the container.
If you have multiple Jenkins agents, your containerized stage can be started on any of them.
When reuseNode is set to true: no new workspace will be created; the current workspace from the current agent will be mounted into the container, and the container will be started on the same node, so all data stays synchronized.
pipeline {
    agent any
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'gradle:6.7-jdk11'
                    // Run the container on the node specified at the
                    // top-level of the Pipeline, in the same workspace,
                    // rather than on a new node entirely:
                    reuseNode true
                }
            }
            steps {
                sh 'gradle --version'
            }
        }
    }
}
// Option "reuseNode true" currently unsupported in scripted pipeline
Many build tools will download external dependencies and cache them locally for future re-use. Since containers are initially created with "clean" file systems, this can result in slower Pipelines, as they may not take advantage of on-disk caches between subsequent Pipeline runs.
Pipeline supports adding custom arguments which are passed to Docker, allowing users to specify custom Docker Volumes to mount, which can be used for caching data on the agent between Pipeline runs. The following example will cache ~/.m2 between Pipeline runs utilizing the maven container, thereby avoiding the need to re-download dependencies for subsequent runs of the Pipeline.
pipeline {
    agent {
        docker {
            image 'maven:3.8.6-eclipse-temurin-11'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B'
            }
        }
    }
}
node {
    /* Requires the Docker Pipeline plugin to be installed */
    docker.image('maven:3.8.6-eclipse-temurin-11').inside('-v $HOME/.m2:/root/.m2') {
        stage('Build') {
            sh 'mvn -B'
        }
    }
}
It has become increasingly common for code bases to rely on multiple, different, technologies. For example, a repository might have both a Java-based back-end API implementation and a JavaScript-based front-end implementation. Combining Docker and Pipeline allows a Jenkinsfile to use multiple types of technologies, by combining the agent {} directive with different stages.
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3.8.6-eclipse-temurin-11' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:16.13.1-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
node {
    /* Requires the Docker Pipeline plugin to be installed */
    stage('Back-end') {
        docker.image('maven:3.8.6-eclipse-temurin-11').inside {
            sh 'mvn --version'
        }
    }
    stage('Front-end') {
        docker.image('node:16.13.1-alpine').inside {
            sh 'node --version'
        }
    }
}
For projects which require a more customized execution environment, Pipeline also supports building and running a container from a Dockerfile in the source repository. In contrast to the previous approach of using an "off-the-shelf" container, using the agent { dockerfile true } syntax will build a new image from a Dockerfile rather than pulling one from Docker Hub.
Re-using an example from above, with a more custom Dockerfile:
FROM node:16.13.1-alpine
RUN apk add -U subversion
By committing this to the root of the source repository, the Jenkinsfile can be changed to build a container based on this Dockerfile and then run the defined steps using that container:
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
                sh 'svn --version'
            }
        }
    }
}
The agent { dockerfile true } syntax supports a number of other options, which are described in more detail in the Pipeline Syntax section.
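For example, a minimal sketch using a few of those options (the file and directory names here are hypothetical):
pipeline {
    agent {
        dockerfile {
            // Hypothetical names: build a non-default Dockerfile
            // located in the build/ directory of the repository.
            filename 'Dockerfile.build'
            dir 'build'
            // Extra flags passed straight through to `docker build`.
            additionalBuildArgs '--build-arg version=1.0.2'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}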
By default, Pipeline assumes that any configured agent is capable of running Docker-based Pipelines. For Jenkins environments which have macOS, Windows, or other agents, which are unable to run the Docker daemon, this default setting may be problematic. Pipeline provides a global option in the Manage Jenkins page, and on the Folder level, for specifying which agents (by Label) to use for running Docker-based Pipelines.
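Alternatively, a Docker-based Pipeline can be pinned to suitably labeled agents from within the Jenkinsfile itself, via the docker agent's label option. A minimal sketch, assuming a hypothetical docker label has been assigned to the Docker-capable agents:
pipeline {
    agent {
        docker {
            image 'node:16.13.1-alpine'
            // Hypothetical label: only run on agents labeled as Docker-capable.
            label 'docker'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}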
The /usr/local/bin directory is not included in the macOS PATH for Docker images by default. If executables from /usr/local/bin need to be called from within Jenkins, then the PATH needs to be extended to include /usr/local/bin.
Add a path node in the file "/usr/local/Cellar/jenkins-lts/XXX/homebrew.mxcl.jenkins-lts.plist" like this:
<key>EnvironmentVariables</key>
<dict>
<key>PATH</key>
<string><!-- insert revised path here --></string>
</dict>
The revised PATH string should be a colon-separated list of directories in the same format as the PATH environment variable, and should include:
/usr/local/bin
/usr/bin
/bin
/usr/sbin
/sbin
/Applications/Docker.app/Contents/Resources/bin/
/Users/XXX/Library/Group\ Containers/group.com.docker/Applications/Docker.app/Contents/Resources/bin
(where XXX is replaced by your user name)
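Assembled from the list above (again with XXX standing in for your user name; the space in "Group Containers" is written literally inside a plist string), the resulting entry might look like:
<key>EnvironmentVariables</key>
<dict>
	<key>PATH</key>
	<string>/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/Docker.app/Contents/Resources/bin/:/Users/XXX/Library/Group Containers/group.com.docker/Applications/Docker.app/Contents/Resources/bin</string>
</dict>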
Now restart Jenkins using "brew services restart jenkins-lts".
Using Docker in Pipeline can be an effective way to run a service on which the build, or a set of tests, may rely. Similar to the sidecar pattern, Docker Pipeline can run one container "in the background", while performing work in another. Utilizing this sidecar approach, a Pipeline can have a "clean" container provisioned for each Pipeline run.
Consider a hypothetical integration test suite which relies on a local MySQL database to be running. Using the withRun method, implemented in the Docker Pipeline plugin’s support for Scripted Pipeline, a Jenkinsfile can run MySQL as a sidecar:
node {
    checkout scm
    /*
     * In order to communicate with the MySQL server, this Pipeline explicitly
     * maps the port (`3306`) to a known port on the host machine.
     */
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"' +
                                    ' -p 3306:3306') { c ->
        /* Wait until mysql service is up */
        sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
        /* Run some tests which require MySQL */
        sh 'make check'
    }
}
This example can be taken further, utilizing two containers simultaneously: one "sidecar" running MySQL, and another providing the execution environment, by using Docker container links.
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
The above example uses the object exposed by withRun, which has the running container’s ID available via the id property. Using the container’s ID, the Pipeline can create a link by passing custom Docker arguments to the inside() method.
The id property can also be useful for inspecting logs from a running Docker container before the Pipeline exits:
sh "docker logs ${c.id}"
In order to create a Docker image, the Docker Pipeline plugin also provides a build() method for creating a new image, from a Dockerfile in the repository, during a Pipeline run.
One major benefit of using the syntax docker.build("my-image-name") is that a Scripted Pipeline can use the return value for subsequent Docker Pipeline calls, for example:
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.inside {
        sh 'make test'
    }
}
The return value can also be used to publish the Docker image to Docker Hub, or a custom Registry, via the push() method, for example:
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.push()
}
One common usage of image "tags" is to specify a latest tag for the most recently validated version of a Docker image. The push() method accepts an optional tag parameter, allowing the Pipeline to push the customImage with different tags, for example:
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.push()
    customImage.push('latest')
}
The build() method builds the Dockerfile in the current directory by default. This can be overridden by providing a directory path containing a Dockerfile as the second argument of the build() method, for example:
node {
    checkout scm
    def testImage = docker.build("test-image", "./dockerfiles/test") (1)
    testImage.inside {
        sh 'make test'
    }
}
1 | Builds test-image from the Dockerfile found at ./dockerfiles/test/Dockerfile. |
It is possible to pass other arguments to docker build by adding them to the second argument of the build() method. When passing arguments this way, the last value in that string must be the path to the folder to use as the build context.
This example overrides the default Dockerfile by passing the -f flag:
node {
    checkout scm
    def dockerfile = 'Dockerfile.test'
    def customImage = docker.build("my-image:${env.BUILD_ID}",
                                   "-f ${dockerfile} ./dockerfiles") (1)
}
1 | Builds my-image:${env.BUILD_ID} from the Dockerfile found at ./dockerfiles/Dockerfile.test. |
By default, the Docker Pipeline plugin will communicate with a local Docker daemon, typically accessed through /var/run/docker.sock.
To select a non-default Docker server, such as with Docker Swarm, the withServer() method should be used, passing in a URI and, optionally, the Credentials ID of a Docker Server Certificate Authentication entry pre-configured in Jenkins:
node {
    checkout scm
    docker.withServer('tcp://swarm.example.com:2376', 'swarm-certs') {
        docker.image('mysql:5').withRun('-p 3306:3306') {
            /* do things */
        }
    }
}
For inside() and build() to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted. Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; a typical symptom would be errors from nested sh commands. When Jenkins detects that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent. Additionally, some versions of Docker Swarm do not support custom Registries.
By default, the Docker Pipeline plugin assumes the default Docker Registry of Docker Hub. In order to use a custom Docker Registry, users of Scripted Pipeline can wrap steps with the withRegistry() method, passing in the custom Registry URL, for example:
node {
    checkout scm
    docker.withRegistry('https://registry.example.com') {
        docker.image('my-custom-image').inside {
            sh 'make test'
        }
    }
}
For a Docker Registry which requires authentication, add a "Username/Password" Credentials item from the Jenkins home page and use the Credentials ID as a second argument to withRegistry():
node {
    checkout scm
    docker.withRegistry('https://registry.example.com', 'credentials-id') {
        def customImage = docker.build("my-image:${env.BUILD_ID}")
        /* Push the container to the custom Registry */
        customImage.push()
    }
}
As mentioned previously, Jenkins Pipeline is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins. Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines "as code" via the Pipeline DSL. [1]
This section describes how to get started with creating your Pipeline project in Jenkins and introduces you to the various ways that a Jenkinsfile can be created and stored.
To use Jenkins Pipeline, you will need:
Jenkins 2.x or later (older versions back to 1.642.3 may work but are not recommended)
Pipeline plugin, [2] which is installed as part of the "suggested plugins" (specified when running through the Post-installation setup wizard after installing Jenkins).
Read more about how to install and manage plugins in Managing Plugins.
Both Declarative and Scripted Pipeline are DSLs [1] to describe portions of your software delivery pipeline. Scripted Pipeline is written in a limited form of Groovy syntax.
Relevant components of Groovy syntax will be introduced as required throughout this documentation, so while an understanding of Groovy is helpful, it is not required to work with Pipeline.
A Pipeline can be created in one of the following ways:
Through Blue Ocean - after setting up a Pipeline project in Blue Ocean, the Blue Ocean UI helps you write your Pipeline’s Jenkinsfile and commit it to source control.
Through the classic UI - you can enter a basic Pipeline directly in Jenkins through the classic UI.
In SCM - you can write a Jenkinsfile manually, which you can commit to your project’s source control repository. [3]
The syntax for defining a Pipeline with either approach is the same, but while Jenkins supports entering Pipeline directly into the classic UI, it is generally considered best practice to define the Pipeline in a Jenkinsfile, which Jenkins will then load directly from source control.
If you are new to Jenkins Pipeline, the Blue Ocean UI helps you set up your Pipeline project, and automatically creates and writes your Pipeline (i.e. the Jenkinsfile) for you through the graphical Pipeline editor.
As part of setting up your Pipeline project in Blue Ocean, Jenkins configures a secure and appropriately authenticated connection to your project’s source control repository. Therefore, any changes you make to the Jenkinsfile via Blue Ocean’s Pipeline editor are automatically saved and committed to source control.
Read more about Blue Ocean in the Blue Ocean chapter and Getting started with Blue Ocean page.
Blue Ocean status: Blue Ocean will not receive further functionality updates. Blue Ocean will continue to provide easy-to-use Pipeline visualization, but it will not be enhanced further. It will only receive selective updates for significant security issues or functional defects.
The Pipeline syntax snippet generator assists users as they define Pipeline steps with their arguments. It is the preferred tool for Jenkins Pipeline creation, as it provides online help for the Pipeline steps available in your Jenkins controller. It uses the plugins installed on your Jenkins controller to generate the Pipeline syntax. Refer to the Pipeline steps reference page for information on all available Pipeline steps.
A Jenkinsfile created using the classic UI is stored by Jenkins itself (within the Jenkins home directory).
To create a basic Pipeline through the Jenkins classic UI:
If required, ensure you are logged in to Jenkins.
From the Jenkins home page (i.e. the Dashboard of the Jenkins classic UI), click New Item at the top left.
In the Enter an item name field, specify the name for your new Pipeline project.
Caution: Jenkins uses this item name to create directories on disk. It is recommended to avoid using spaces in item names, since doing so may uncover bugs in scripts that do not properly handle spaces in directory paths.
Scroll down and click Pipeline, then click OK at the end of the page to open the Pipeline configuration page (whose General tab is selected).
Click the Pipeline tab at the top of the page to scroll down to the Pipeline section.
Note: If instead you are defining your Jenkinsfile in source control, follow the instructions in In SCM below.
In the Pipeline section, ensure that the Definition field indicates the Pipeline script option.
Enter your Pipeline code into the Script text area.
For instance, copy the following Declarative example Pipeline code (below the Jenkinsfile ( … ) heading) or its Scripted version equivalent and paste this into the Script text area. (The Declarative example below is used throughout the remainder of this procedure.)
pipeline {
    agent any (1)
    stages {
        stage('Stage 1') {
            steps {
                echo 'Hello world!' (2)
            }
        }
    }
}
node { (3)
    stage('Stage 1') {
        echo 'Hello World' (2)
    }
}
1 | agent instructs Jenkins to allocate an executor (on any available agent/node in the Jenkins environment) and workspace for the entire Pipeline. |
2 | echo writes a simple string to the console output. |
3 | node effectively does the same as agent (above). |
Note: You can also select from canned Scripted Pipeline examples from the try sample Pipeline option at the top right of the Script text area. Be aware that there are no canned Declarative Pipeline examples available from this field.
Click Save to open the Pipeline project/item view page.
On this page, click Build Now on the left to run the Pipeline.
Under Build History on the left, click #1 to access the details for this particular Pipeline run.
Click Console Output to see the full output from the Pipeline run. The following output shows a successful run of your Pipeline.
Notes:
You can also access the console output directly from the Dashboard by clicking the colored globe to the left of the build number (e.g. #1 ).
Defining a Pipeline through the classic UI is convenient for testing Pipeline code snippets, or for handling simple Pipelines or Pipelines that do not require source code to be checked out/cloned from a repository. As mentioned above, unlike Jenkinsfiles you define through Blue Ocean (above) or in source control (below), Jenkinsfiles entered into the Script text area of Pipeline projects are stored by Jenkins itself, within the Jenkins home directory. Therefore, for greater control and flexibility over your Pipeline, particularly for projects in source control that are likely to gain complexity, it is recommended that you use Blue Ocean or source control to define your Jenkinsfile.
Complex Pipelines are difficult to write and maintain within the classic UI’s Script text area of the Pipeline configuration page.
To make this easier, your Pipeline’s Jenkinsfile can be written in a text editor or integrated development environment (IDE) and committed to source control [3] (optionally with the application code that Jenkins will build). Jenkins can then check out your Jenkinsfile from source control as part of your Pipeline project’s build process and then proceed to execute your Pipeline.
To configure your Pipeline project to use a Jenkinsfile from source control:
Follow the procedure above for defining your Pipeline through the classic UI until you reach step 5 (accessing the Pipeline section on the Pipeline configuration page).
From the Definition field, choose the Pipeline script from SCM option.
From the SCM field, choose the type of source control system of the repository containing your Jenkinsfile.
Complete the fields specific to your repository’s source control system.
Tip: If you are uncertain of what value to specify for a given field, click its ? icon to the right for more information.
In the Script Path field, specify the location (and name) of your Jenkinsfile. This location is relative to the root of the repository that Jenkins checks out/clones, and should match the repository’s file structure. The default value of this field assumes that your Jenkinsfile is named "Jenkinsfile" and is located at the root of the repository.
When you update the designated repository, a new build is triggered, as long as the Pipeline is configured with an SCM polling trigger.
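A minimal sketch of such a trigger, declared in the Jenkinsfile itself (the polling schedule here is illustrative, not a recommendation):
pipeline {
    agent any
    triggers {
        // Poll the configured SCM for changes roughly every five minutes;
        // a new build is triggered only when polling detects a change.
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Triggered by an SCM change or a manual run.'
            }
        }
    }
}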
Since Pipeline code (i.e. Scripted Pipeline in particular) is written in a Groovy-like syntax, if your IDE is not correctly syntax-highlighting your Jenkinsfile, try inserting the line #!/usr/bin/env groovy at the top of the Jenkinsfile, which may solve the issue.
Pipeline ships with built-in documentation features to make it easier to create Pipelines of varying complexities. This built-in documentation is automatically generated and updated based on the plugins installed in the Jenkins instance.
The built-in documentation can be found globally at ${YOUR_JENKINS_URL}/pipeline-syntax. The same documentation is also linked as Pipeline Syntax in the side-bar for any configured Pipeline project.
The built-in "Snippet Generator" utility is helpful for creating bits of code for individual steps, discovering new steps provided by plugins, or experimenting with different parameters for a particular step.
The Snippet Generator is dynamically populated with a list of the steps available to the Jenkins instance. The number of steps available is dependent on the plugins installed which explicitly expose steps for use in Pipeline.
To generate a step snippet with the Snippet Generator:
Navigate to the Pipeline Syntax link (referenced above) from a configured Pipeline, or at ${YOUR_JENKINS_URL}/pipeline-syntax.
Select the desired step in the Sample Step dropdown menu.
Use the dynamically populated area below the Sample Step dropdown to configure the selected step.
Click Generate Pipeline Script to create a snippet of Pipeline which can be copied and pasted into a Pipeline.
To access additional information and/or documentation about the step selected, click on the help icon.
In addition to the Snippet Generator, which only surfaces steps, Pipeline also provides a built-in " Global Variable Reference ." Like the Snippet Generator, it is also dynamically populated by plugins. Unlike the Snippet Generator however, the Global Variable Reference only contains documentation for variables provided by Pipeline or plugins, which are available for Pipelines.
The variables provided by default in Pipeline are:
env - Exposes environment variables, for example: env.PATH or env.BUILD_ID. Consult the built-in global variable reference at ${YOUR_JENKINS_URL}/pipeline-syntax/globals#env for a complete, and up to date, list of environment variables available in Pipeline.
params - Exposes all parameters defined for the Pipeline as a read-only Map, for example: params.MY_PARAM_NAME.
currentBuild - May be used to discover information about the currently executing Pipeline, with properties such as currentBuild.result, currentBuild.displayName, etc. Consult the built-in global variable reference at ${YOUR_JENKINS_URL}/pipeline-syntax/globals for a complete, and up to date, list of properties available on currentBuild.
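A minimal sketch tying the three variables together (the GREETING parameter is hypothetical, defined here only so that params.GREETING resolves):
pipeline {
    agent any
    parameters {
        // Hypothetical parameter, used below to demonstrate `params`.
        string(name: 'GREETING', defaultValue: 'Hello', description: 'Greeting text')
    }
    stages {
        stage('Demo') {
            steps {
                echo "Build ID: ${env.BUILD_ID}"                     // env
                echo "Greeting: ${params.GREETING}"                  // params
                echo "Result so far: ${currentBuild.currentResult}"  // currentBuild
            }
        }
    }
}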
While the Snippet Generator helps with generating steps for a Scripted Pipeline or for the steps block in a stage in a Declarative Pipeline, it does not cover the sections and directives used to define a Declarative Pipeline. The "Declarative Directive Generator" utility helps with that. Similar to the Snippet Generator, the Directive Generator allows you to choose a Declarative directive, configure it in a form, and generate the configuration for that directive, which you can then use in your Declarative Pipeline.
To generate a Declarative directive using the Declarative Directive Generator:
Navigate to the Pipeline Syntax link (referenced above) from a configured Pipeline, and then click on the Declarative Directive Generator link in the sidepanel, or go directly to ${YOUR_JENKINS_URL}/directive-generator.
Select the desired directive in the dropdown menu.
Use the dynamically populated area below the dropdown to configure the selected directive.
Click Generate Directive to create the directive’s configuration to copy into your Pipeline.
The Directive Generator can generate configuration for nested directives, such as conditions inside a when directive, but it cannot generate Pipeline steps. For the contents of directives which contain steps, such as steps inside a stage, or conditions like always or failure inside post, the Directive Generator adds a placeholder comment instead. You will still need to add steps to your Pipeline by hand.
stage('Stage 1') {
    steps {
        // One or more steps need to be included within the steps block.
    }
}
This section merely scratches the surface of what can be done with Jenkins Pipeline, but should provide enough of a foundation for you to start experimenting with a test Jenkins instance.
In the next section, The Jenkinsfile, more Pipeline steps will be discussed along with patterns for implementing successful, real-world, Jenkins Pipelines.
Pipeline Steps Reference, encompassing all steps provided by plugins distributed in the Jenkins Update Center.
Pipeline Examples, a community-curated collection of copyable Pipeline examples.
This chapter covers all recommended aspects of Jenkins Pipeline functionality, including how to:
get started with Pipeline - covers how to define a Jenkins Pipeline (i.e. your Pipeline) through Blue Ocean, through the classic UI or in SCM,
create and use a Jenkinsfile - covers use-case scenarios on how to craft and construct your Jenkinsfile,
work with branches and pull requests,
use Docker with Pipeline - covers how Jenkins can invoke Docker containers on agents/nodes (from a Jenkinsfile) to build your Pipeline projects,
extend Pipeline with shared libraries,
use different development tools to facilitate the creation of your Pipeline, and
work with Pipeline syntax - this page is a comprehensive reference of all Declarative Pipeline syntax.
For an overview of content in the Jenkins User Handbook, see User Handbook Overview.
Jenkins Pipeline (or simply "Pipeline" with a capital "P") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.
A continuous delivery (CD) pipeline is an automated expression of your process for getting software from version control right through to your users and customers. Every change to your software (committed in source control) goes through a complex process on its way to being released. This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a "build") through multiple stages of testing and deployment.
Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines "as code" via the Pipeline domain-specific language (DSL) syntax. [1]
The definition of a Jenkins Pipeline is written into a text file (called a Jenkinsfile) which in turn can be committed to a project’s source control repository. [2] This is the foundation of "Pipeline-as-code": treating the CD pipeline as part of the application to be versioned and reviewed like any other code.
Creating a Jenkinsfile and committing it to source control provides a number of immediate benefits:
Automatically creates a Pipeline build process for all branches and pull requests.
Code review/iteration on the Pipeline (along with the remaining source code).
Audit trail for the Pipeline.
Single source of truth [3] for the Pipeline, which can be viewed and edited by multiple members of the project.
While the syntax for defining a Pipeline, either in the web UI or with a Jenkinsfile, is the same, it is generally considered best practice to define the Pipeline in a Jenkinsfile and check that in to source control.
A Jenkinsfile can be written using two types of syntax - Declarative and Scripted.
Declarative and Scripted Pipelines are constructed fundamentally differently. Declarative Pipeline is a more recent feature of Jenkins Pipeline which:
provides richer syntactical features over Scripted Pipeline syntax, and
is designed to make writing and reading Pipeline code easier.
Many of the individual syntactical components (or "steps") written into a Jenkinsfile, however, are common to both Declarative and Scripted Pipeline. Read more about how these two types of syntax differ in Pipeline concepts and Pipeline syntax overview below.
Jenkins is, fundamentally, an automation engine which supports a number of automation patterns. Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that span from simple continuous integration to comprehensive CD pipelines. By modeling a series of related tasks, users can take advantage of the many features of Pipeline:
Code : Pipelines are implemented in code and typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline.
Durable : Pipelines can survive both planned and unplanned restarts of the Jenkins controller.
Pausable : Pipelines can optionally stop and wait for human input or approval before continuing the Pipeline run.
Versatile : Pipelines support complex real-world CD requirements, including the ability to fork/join, loop, and perform work in parallel.
Extensible : The Pipeline plugin supports custom extensions to its DSL [1] and multiple options for integration with other plugins.
While Jenkins has always allowed rudimentary forms of chaining Freestyle Jobs together to perform sequential tasks, [4] Pipeline makes this concept a first-class citizen in Jenkins.
Building on the core Jenkins value of extensibility, Pipeline is also extensible both by users with Pipeline Shared Libraries and by plugin developers. [5]
The flowchart below is an example of one CD scenario easily modeled in Jenkins Pipeline:
The following concepts are key aspects of Jenkins Pipeline, which tie in closely to Pipeline syntax (see the overview below).
A Pipeline is a user-defined model of a CD pipeline. A Pipeline’s code defines your entire build process, which typically includes stages for building an application, testing it and then delivering it. Also, a pipeline block is a key part of Declarative Pipeline syntax.
A node is a machine which is part of the Jenkins environment and is capable of executing a Pipeline. Also, a node block is a key part of Scripted Pipeline syntax.
A stage block defines a conceptually distinct subset of tasks performed through the entire Pipeline (e.g. "Build", "Test" and "Deploy" stages), which is used by many plugins to visualize or present Jenkins Pipeline status/progress. [6]
A step is a single task. Fundamentally, a step tells Jenkins what to do at a particular point in time (or "step" in the process). For example, to execute the shell command make, use the sh step: sh 'make'. When a plugin extends the Pipeline DSL, [1] that typically means the plugin has implemented a new step.
The following Pipeline code skeletons illustrate the fundamental differences between Declarative Pipeline syntax and Scripted Pipeline syntax.
Be aware that both stages and steps (above) are common elements of both Declarative and Scripted Pipeline syntax.
In Declarative Pipeline syntax, the pipeline block defines all the work done throughout your entire Pipeline.
pipeline {
    agent any (1)
    stages {
        stage('Build') { (2)
            steps {
                // (3)
            }
        }
        stage('Test') { (4)
            steps {
                // (5)
            }
        }
        stage('Deploy') { (6)
            steps {
                // (7)
            }
        }
    }
}
1 | Execute this Pipeline or any of its stages, on any available agent. |
2 | Defines the "Build" stage. |
3 | Perform some steps related to the "Build" stage. |
4 | Defines the "Test" stage. |
5 | Perform some steps related to the "Test" stage. |
6 | Defines the "Deploy" stage. |
7 | Perform some steps related to the "Deploy" stage. |
In Scripted Pipeline syntax, one or more node blocks do the core work throughout the entire Pipeline. Although this is not a mandatory requirement of Scripted Pipeline syntax, confining your Pipeline’s work inside of a node block does two things:
Schedules the steps contained within the block to run by adding an item to the Jenkins queue. As soon as an executor is free on a node, the steps will run.
Creates a workspace (a directory specific to that particular Pipeline) where work can be done on files checked out from source control.
Caution: Depending on your Jenkins configuration, some workspaces may not get automatically cleaned up after a period of inactivity. See tickets and discussion linked from JENKINS-2111 for more information.
node { (1)
    stage('Build') { (2)
        // (3)
    }
    stage('Test') { (4)
        // (5)
    }
    stage('Deploy') { (6)
        // (7)
    }
}
1 | Execute this Pipeline or any of its stages, on any available agent. |
2 | Defines the "Build" stage. stage blocks are optional in Scripted Pipeline syntax. However, implementing stage blocks in a Scripted Pipeline provides clearer visualization of each stage’s subset of tasks/steps in the Jenkins UI. |
3 | Perform some steps related to the "Build" stage. |
4 | Defines the "Test" stage. |
5 | Perform some steps related to the "Test" stage. |
6 | Defines the "Deploy" stage. |
7 | Perform some steps related to the "Deploy" stage. |
Here is an example of a Jenkinsfile using Declarative Pipeline syntax - its Scripted syntax equivalent can be accessed by clicking the Toggle Scripted Pipeline link below:
pipeline { (1)
    agent any (2)
    options {
        skipStagesAfterUnstable()
    }
    stages {
        stage('Build') { (3)
            steps { (4)
                sh 'make' (5)
            }
        }
        stage('Test') {
            steps {
                sh 'make check'
                junit 'reports/**/*.xml' (6)
            }
        }
        stage('Deploy') {
            steps {
                sh 'make publish'
            }
        }
    }
}
node { (7)
    stage('Build') { (3)
        sh 'make' (5)
    }
    stage('Test') {
        sh 'make check'
        junit 'reports/**/*.xml' (6)
    }
    if (currentBuild.currentResult == 'SUCCESS') {
        stage('Deploy') {
            sh 'make publish'
        }
    }
}
1 | pipeline is Declarative Pipeline-specific syntax that defines a "block" containing all content and instructions for executing the entire Pipeline. |
2 | agent is Declarative Pipeline-specific syntax that instructs Jenkins to allocate an executor (on a node) and workspace for the entire Pipeline. |
3 | stage is a syntax block that describes a stage of this Pipeline. Read more about stage blocks in Declarative Pipeline syntax on the Pipeline syntax page. As mentioned above, stage blocks are optional in Scripted Pipeline syntax. |
4 | steps is Declarative Pipeline-specific syntax that describes the steps to be run in this stage. |
5 | sh is a Pipeline step (provided by the Pipeline: Nodes and Processes plugin) that executes the given shell command. |
6 | junit is another Pipeline step (provided by the JUnit plugin) for aggregating test reports. |
7 | node is Scripted Pipeline-specific syntax; it effectively does the same as Declarative Pipeline’s agent (above), instructing Jenkins to execute this Pipeline (and any stages contained within it) on any available agent/node. |
Read more about Pipeline syntax on the Pipeline Syntax page.