It is possible to customize Jenkins' appearance with custom themes. This feature is not a part of the Jenkins core, but it is supported through plugins.
There are several plugins that provide built-in themes; the most popular are:
Dark Theme Plugin - provides a dark theme for Jenkins. Supports configuration as code to select the theme configuration.
Material Theme Plugin - port of Afonso F’s Jenkins material theme to use Theme Manager.
Solarized Theme Plugin - provides Solarized (light and dark) themes.
Installing any of these will also install their common dependency: the Theme Manager Plugin. This plugin allows administrators to set the default theme for a Jenkins installation via Manage Jenkins > Configure System > Built-in Themes and users can set their preferred theme in their personal settings. You can also configure this plugin using Configuration-as-Code Plugin. See the plugin documentation for more details.
To be able to fully customize the Jenkins appearance, you can install the Simple Theme Plugin. It allows customizing the Jenkins UI by providing custom CSS and JavaScript files. It also supports replacing the favicon.
To configure a theme, you can go to Manage Jenkins > Configure System > Theme and enter the URL of your stylesheet and/or JavaScript file. You can also configure this plugin using the Configuration-as-Code Plugin. See the plugin documentation for detailed usage guidelines and links to sample themes.
Since Jenkins 2.128, themes configured using the Simple Theme Plugin cannot customize the login screen (see the announcement). To customize the login screen, you can install the Login Theme Plugin.
Jenkins themes are provided "as is", without warranty of any kind, implicit or explicit. Jenkins core, plugin, and other component updates may break theme compatibility without notice.
At the moment, the Jenkins project does not provide a specification for layouts/CSS, and we cannot guarantee backward or forward compatibility. We try to reflect major changes in changelogs (e.g. see the "developer" changes in the Jenkins changelog), but minor changes may not be included there.
There is an ongoing effort focused on improving Jenkins look-and-feel, accessibility, and user experience. This area is mission-critical to the project. There are multiple initiatives in the Jenkins Roadmap being coordinated by the Jenkins User Experience SIG.
Major UI changes imply incompatible changes in layouts and the CSS structure which is critical for theme plugins. Historically Jenkins had no explicit support policy for themes, and we do not want to provide compatibility requirements which would create obstacles for reworking the main Jenkins interface. Later, once the Jenkins UI rework reaches its destination and the UI becomes more stable, we could consider creating specifications for theme extensibility so that we could make themes more stable and maintain compatibility.
For built-in themes, users are welcome to report discovered compatibility issues to theme maintainers, and to submit patches there.
We will generally reject bug reports to the Jenkins core/plugins involving broken UI elements with a custom theme. We will consider pull requests which restore compatibility and do not block further Web UI evolvement.
If a theme outside the jenkinsci GitHub organization is no longer maintained, it is fine to fork it and to create a new version. For themes hosted within the jenkinsci organization, we have an adoption process which also applies to themes.
We encourage Jenkins users to create themes and to share them. Such themes could be a great way to experiment with UI enhancements, and we would be happy to consider enhancements from them for a default Jenkins theme.
To improve the user experience, please consider the following recommendations:
Explicitly document compatibility for themes.
Compatibility documentation should include: required theme plugins and versions, the target Jenkins core version, plugin requirements and versions where applicable (when their UI/CSS is overridden), and browser compatibility.
Examples of such documentation: Jenkins Atlassian Theme, Neo2
Version themes with Git tags and maintain changelogs with explicit references to changes in the supported versions (e.g. see our release drafter documentation for one way to automate changelogs).
Explicitly define an OSI-approved open source license so that users can freely modify and redistribute them.
This is also a prerequisite for hosting themes in Jenkins GitHub organizations and, in the future, theme marketplaces or other similar promotion engines.
If you would like to share a story about Jenkins themes, please let the Advocacy&Outreach SIG know!
Jenkins has a mechanism known as "User Content", where administrators can place files inside $JENKINS_HOME/userContent, and these files are served from http://yourhost/jenkins/userContent. This can be thought of as a mini HTTP server to serve images, stylesheets, and other static resources that you can use from various description fields inside Jenkins.
Note that these files are not subject to any access controls beyond requiring Overall/Read access.
See the Git userContent plugin for how to manage these files through a Git repository.
This section is a work in progress. Want to help? Check out the jenkinsci-docs mailing list. For other ways to contribute to the Jenkins project, see this page about participating and contributing.
Jenkins Pipeline uses a library called Groovy CPS to run Pipeline scripts.
While Pipeline uses the Groovy parser and compiler, unlike a regular Groovy environment it runs most of the program inside a special interpreter.
This uses a continuation-passing style (CPS) transform to turn your code into a version that can save its current state to disk (a file called program.dat inside your build directory) and continue running even after Jenkins has restarted.
(You can get some more technical background on the Pipeline: Groovy plugin page and the library page.)
While the CPS transform is usually transparent to users, there are limitations to what Groovy language constructs can be supported, and in some circumstances it can lead to counterintuitive behavior. JENKINS-31314 makes the runtime try to detect the most common mistake: calling CPS-transformed code from non-CPS-transformed code. The following kinds of things are CPS-transformed:
Almost all of the Pipeline script you write (including in libraries)
Most Pipeline steps, including all those which take a block
The following kinds of things are not CPS-transformed:
Compiled Java bytecode, including:
the Java Platform
Jenkins core and plugins
the runtime for the Groovy language
Constructor bodies in your Pipeline script
Any method in your Pipeline script marked with the @NonCPS annotation
A few Pipeline steps which take no block and act instantaneously, such as echo or properties
CPS-transformed code may call non-CPS-transformed code or other CPS-transformed code, and non-CPS-transformed code may call other non-CPS-transformed code, but non-CPS-transformed code may not call CPS-transformed code. If you try to call CPS-transformed code from non-CPS-transformed code, the CPS interpreter is unable to operate correctly, resulting in incorrect and often confusing results.
@NonCPS
Sometimes users will apply the @NonCPS annotation to a method definition in order to bypass the CPS transform inside that method.
This can be done to work around limitations in Groovy language coverage (since the body of the method will execute using the native Groovy semantics), or to get better performance (the interpreter imposes a substantial overhead).
However, such methods must not call CPS-transformed code such as Pipeline steps.
For example, the following will not work:
@NonCPS
def compileOnPlatforms() {
['linux', 'windows'].each { arch ->
node(arch) {
sh 'make'
}
}
}
compileOnPlatforms()
Using the node or sh steps from this method is illegal, and the behavior will be anomalous.
The warning in the logs from running this script looks like this:
expected to call WorkflowScript.compileOnPlatforms but wound up catching node
To fix this case, simply remove the annotation; it was not needed. (Longtime Pipeline users might have thought it was, prior to the fix of JENKINS-26481.)
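With the annotation removed, the same logic works as intended, since CPS-transformed code may freely call closures and Pipeline steps:
def compileOnPlatforms() {
    ['linux', 'windows'].each { arch ->
        node(arch) {
            sh 'make'
        }
    }
}
compileOnPlatforms()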
Some Groovy and Java methods take complex types as parameters to support dynamic behavior. A common case is sorting methods that allow callers to specify a method to use for comparing objects (JENKINS-44924). Many similar methods in the Groovy standard library work correctly after the fix for JENKINS-26481, but some methods remain unfixed. For example, the following will not work:
def sortByLength(List<String> list) {
list.toSorted { a, b -> Integer.valueOf(a.length()).compareTo(b.length()) }
}
def sorted = sortByLength(['333', '1', '4444', '22'])
echo(sorted.toString())
The closure passed to Iterable.toSorted is CPS-transformed, but Iterable.toSorted itself is not CPS-transformed internally, so this will not work as intended. The current behavior is that the return value of the call to toSorted will be the return value of the first call to the closure.
In the example, this results in sorted being set to -1, and the warning in the logs looks like this:
expected to call java.util.ArrayList.toSorted but wound up catching org.jenkinsci.plugins.workflow.cps.CpsClosure2.call
To fix this case, any argument passed to these methods must not be CPS-transformed. This can be accomplished by encapsulating the problematic method (Iterable.toSorted in the example) inside another method and annotating the outer method with @NonCPS, or by creating an explicit class definition for the closure and annotating all methods on that class with @NonCPS.
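For example, a sketch of the first approach applied to the snippet above, moving the toSorted call into a method annotated with @NonCPS:
@NonCPS
def sortByLength(List<String> list) {
    // Inside a @NonCPS method the closure is not CPS-transformed,
    // so toSorted can invoke it normally
    list.toSorted { a, b -> Integer.valueOf(a.length()).compareTo(b.length()) }
}
def sorted = sortByLength(['333', '1', '4444', '22'])
echo(sorted.toString())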
Occasionally, users may attempt to use CPS-transformed code such as Pipeline steps inside of a constructor in a Pipeline script.
Unfortunately, the construction of objects via the new operator in Groovy is not something that can be CPS-transformed (JENKINS-26313), and so this will not work.
Here is an example that calls a CPS-transformed method in a constructor:
class Test {
def x
public Test() {
setX()
}
private void setX() {
this.x = 1;
}
}
def x = new Test().x
echo "${x}"
The construction of Test will fail when the constructor calls Test.setX, because setX is a CPS-transformed method.
The warning in the logs from running this script looks like this:
expected to call Test.<init> but wound up catching Test.setX
To fix this case, ensure that any methods defined in a Pipeline script that are called from inside of a constructor are annotated with @NonCPS, and that constructors do not call any Pipeline steps.
If you must call CPS-transformed code such as Pipeline steps from a constructor, move the logic related to the CPS-transformed methods out of the constructor, for example into a static factory method that calls the CPS-transformed code and then passes the results to the constructor.
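A minimal sketch of the first fix applied to the example above, annotating setX with @NonCPS so that the constructor only calls non-CPS-transformed code:
class Test {
    def x
    public Test() {
        setX()
    }
    @NonCPS
    private void setX() {
        this.x = 1
    }
}
def x = new Test().x
echo "${x}"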
Users may create a class in a Pipeline Script that extends a preexisting class defined outside of the Pipeline script, for example from the Java or Groovy standard libraries.
When doing so, the subclass must ensure that any overriding methods are annotated with @NonCPS and do not use any CPS-transformed code internally.
Otherwise, the overriding methods will fail if called from a non-CPS context.
For example, the following will not work:
class Test {
@Override
public String toString() {
return "Test"
}
}
def builder = new StringBuilder()
builder.append(new Test())
echo(builder.toString())
Calling the CPS-transformed override of toString from non-CPS-transformed code such as StringBuilder.append is not permitted, and will not work as expected in most cases.
The warning in the logs from running this script looks like this:
expected to call java.lang.StringBuilder.append but wound up catching Test.toString
To fix this case, add the @NonCPS annotation to the overriding method, and remove any uses of CPS-transformed code such as Pipeline steps from the method.
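Applied to the example above, the corrected class looks like this:
class Test {
    @Override
    @NonCPS
    public String toString() {
        return "Test"
    }
}
def builder = new StringBuilder()
builder.append(new Test())
echo(builder.toString())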
GString
In Groovy, it is possible to use a closure in a GString so that the closure is evaluated every time the GString is used as a String.
However, in Pipeline scripts, this will not work as expected, because the closure inside of the GString will be CPS-transformed.
Here is an example:
def x = 1
def s = "x = ${-> x}"
x = 2
echo(s)
Using a closure inside of a GString as in this example will not work.
The warning from the logs when running this script looks like this:
expected to call WorkflowScript.echo but wound up catching org.jenkinsci.plugins.workflow.cps.CpsClosure2.call
To fix this case, replace the original GString with a closure that returns a GString that uses a normal expression rather than a closure, and then call that closure where you would have used the original GString, as follows:
def x = 1
def s = { -> "x = ${x}" }
x = 2
echo(s())
Unfortunately, some expressions may incorrectly trigger this warning even though they execute correctly.
If you run into such a case, please file a new issue (after first checking for duplicates) against workflow-cps-plugin.
Jenkins Pipeline includes built-in documentation and the Snippet Generator which are key resources when developing Pipelines. They provide detailed help and information that is customized to the currently installed version of Jenkins and related plugins. In this section, we’ll discuss other tools and resources that may help with development of Jenkins Pipelines.
Jenkins can validate, or "lint", a Declarative Pipeline from the command line before actually running it. This can be done using a Jenkins CLI command or by making an HTTP POST request with appropriate parameters. We recommend using the SSH interface to run the linter. See the Jenkins CLI documentation for details on how to properly configure Jenkins for secure command-line access.
# ssh (Jenkins CLI)
# JENKINS_PORT=[sshd port on controller]
# JENKINS_HOST=[Jenkins controller hostname]
ssh -p $JENKINS_PORT $JENKINS_HOST declarative-linter < Jenkinsfile
# curl (REST API)
# Assuming "anonymous read access" has been enabled on your Jenkins instance.
# JENKINS_URL=[root URL of Jenkins controller]
# JENKINS_CRUMB is needed if your Jenkins controller has CSRF protection enabled, as it should
JENKINS_CRUMB=`curl "$JENKINS_URL/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)"`
curl -X POST -H $JENKINS_CRUMB -F "jenkinsfile=<Jenkinsfile" $JENKINS_URL/pipeline-model-converter/validate
Below are two examples of the Pipeline Linter in action.
This first example shows the output of the linter when it is passed an invalid Jenkinsfile, one that is missing part of the agent declaration.
pipeline {
agent
stages {
stage ('Initialize') {
steps {
echo 'Placeholder.'
}
}
}
}
# pass a Jenkinsfile that does not contain an "agent" section
ssh -p 8675 localhost declarative-linter < ./Jenkinsfile
Errors encountered validating Jenkinsfile:
WorkflowScript: 2: Not a valid section definition: "agent". Some extra configuration is required. @ line 2, column 3.
agent
^
WorkflowScript: 1: Missing required section "agent" @ line 1, column 1.
pipeline {
^
In this second example, the Jenkinsfile has been updated to include the missing any on agent. The linter now reports that the Pipeline is valid.
pipeline {
agent any
stages {
stage ('Initialize') {
steps {
echo 'Placeholder.'
}
}
}
}
ssh -p 8675 localhost declarative-linter < ./Jenkinsfile
Jenkinsfile successfully validated.
The Blue Ocean Pipeline Editor provides a WYSIWYG way to create Declarative Pipelines. The editor offers a structural view of all the stages, parallel branches, and steps in a Pipeline. The editor validates Pipeline changes as they are made, eliminating many errors before they are even committed. Behind the scenes it still generates Declarative Pipeline code.
Blue Ocean status
Blue Ocean will not receive further functionality updates. It will continue to provide easy-to-use Pipeline visualization, but it will not be enhanced further, and it will only receive selective updates for significant security issues or functional defects.
The Pipeline syntax snippet generator assists users as they define Pipeline steps with their arguments. It is the preferred tool for Jenkins Pipeline creation, as it provides online help for the Pipeline steps available in your Jenkins controller. It uses the plugins installed on your Jenkins controller to generate the Pipeline syntax. Refer to the Pipeline steps reference page for information on all available Pipeline steps.
Typically a Pipeline will be defined inside of the classic Jenkins web UI, or by committing to a Jenkinsfile in source control. Unfortunately, neither approach is ideal for rapid iteration, or prototyping, of a Pipeline.
The "Replay" feature allows for quick modifications and execution of an existing
Pipeline without changing the Pipeline configuration or creating a new commit.
To use the "Replay" feature:
Select a previously completed run in the build history.
Click "Replay" in the left menu
Make modifications and click "Run". In this example, we changed "ruby-2.3" to "ruby-2.4".
Check the results of changes
Once you are satisfied with the changes, you can use Replay to view them again, copy them back to your Pipeline job or Jenkinsfile, and then commit them using your usual engineering processes.
Can be called multiple times on the same run - allows for easy parallel testing of different changes.
Can also be called on Pipeline runs that are still in-progress - As long as a Pipeline contained syntactically correct Groovy and was able to start, it can be Replayed.
Referenced Shared Library code is also modifiable - If a Pipeline run references a Shared Library, the code from the shared library will also be shown and modifiable as part of the Replay page.
Access control via a dedicated "Run / Replay" permission - implied by "Job / Configure". If a Pipeline is not configurable (e.g. a Branch Pipeline of a Multibranch project) or "Job / Configure" is not granted, users can still experiment with the Pipeline definition via Replay.
Can be used for Re-run - users lacking "Run / Replay" but who are granted "Job / Build" can still use Replay to run a build again with the same definition.
Pipeline runs with syntax errors cannot be replayed - meaning their code cannot be viewed and any changes made in them cannot be retrieved. When using Replay for more significant modifications, save your changes to a file or editor outside of Jenkins before running them. See JENKINS-37589
Replayed Pipeline behavior may differ from runs started by other methods - For Pipelines that are not part of a Multi-branch Pipeline, the commit information may differ for the original run and the Replayed run. See JENKINS-36453
The Jenkins Editor Eclipse plugin can be found on the Eclipse Marketplace. This special text editor provides some features for defining pipelines, e.g.:
Validate pipeline scripts via Jenkins Linter Validation; failures are shown as Eclipse markers
An outline with dedicated icons (for Declarative Jenkins Pipelines)
Syntax / keyword highlighting
Groovy validation
The Jenkins Editor Plugin is a third-party tool that is not supported by the Jenkins Project.
The Jenkins Pipeline Linter Connector extension for Visual Studio Code takes the file that you currently have open, pushes it to your Jenkins server, and displays the validation result in VS Code.
You can find the extension from within the VS Code extension browser or at the following URL: marketplace.visualstudio.com/items?itemName=janjoerke.jenkins-pipeline-linter-connector
The extension adds four settings entries to VS Code which select the Jenkins server you want to use for validation.
jenkins.pipeline.linter.connector.url is the endpoint at which your Jenkins server expects the POST request containing the Jenkinsfile you want to validate. Typically this points to <your_jenkins_server:port>/pipeline-model-converter/validate.
jenkins.pipeline.linter.connector.user allows you to specify your Jenkins username.
jenkins.pipeline.linter.connector.pass allows you to specify your Jenkins password.
jenkins.pipeline.linter.connector.crumbUrl has to be specified if your Jenkins server has CSRF protection enabled. Typically this points to <your_jenkins_server:port>/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,%22:%22,//crumb).
The nvim-jenkinsfile-linter Neovim plugin allows you to validate a Jenkinsfile by using the Pipeline Linter API of your Jenkins instance and report any existing diagnostics in your editor.
The linter-jenkins Atom package allows you to validate a Jenkinsfile by using the Pipeline Linter API of a running Jenkins. You can install it directly from the Atom package manager. It also requires installing Jenkinsfile language support in Atom.
The Jenkinsfile Sublime Text package allows you to validate a Jenkinsfile by using the Pipeline Linter API of a running Jenkins instance over a secure channel (SSH). You can install it directly from the Sublime Text package manager.
You can find the package from within the Sublime Text interface via the Package Control package, at GitHub, or on packagecontrol.io:
https://github.com/june07/sublime-Jenkinsfile
https://packagecontrol.io/packages/Jenkinsfile
The Pipeline Unit Testing Framework allows you to unit test Pipelines and Shared Libraries before running them in full. It provides a mock execution environment where real Pipeline steps are replaced with mock objects that you can use to check for expected behavior. New and rough around the edges, but promising. The README for that project contains examples and usage instructions.
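As a rough sketch of what such a test can look like (using the framework's BasePipelineTest API; the Jenkinsfile path and the sh mock registration are illustrative assumptions, and method names may vary between framework versions):
import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class ExamplePipelineTest extends BasePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // Replace the real `sh` step with a mock that just records the call
        helper.registerAllowedMethod('sh', [String]) { cmd -> println "mock sh: ${cmd}" }
    }

    @Test
    void pipelineRunsWithoutErrors() {
        runScript('Jenkinsfile')   // illustrative path to the Pipeline under test
        assertJobStatusSuccess()
        printCallStack()
    }
}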
Many organizations use Docker to unify their build and test environments across machines, and to provide an efficient mechanism for deploying applications. Starting with Pipeline versions 2.5 and higher, Pipeline has built-in support for interacting with Docker from within a Jenkinsfile.
While this section will cover the basics of utilizing Docker from within a Jenkinsfile, it will not cover the fundamentals of Docker, which can be read about in the Docker Getting Started Guide.
Pipeline is designed to easily use Docker images as the execution environment for a single Stage or the entire Pipeline. This means that a user can define the tools required for their Pipeline, without having to manually configure agents. Practically any tool which can be packaged in a Docker container can be used with ease by making only minor edits to a Jenkinsfile.
pipeline {
agent {
docker { image 'node:16.13.1-alpine' }
}
stages {
stage('Test') {
steps {
sh 'node --version'
}
}
}
}
node {
/* Requires the Docker Pipeline plugin to be installed */
docker.image('node:16.13.1-alpine').inside {
stage('Test') {
sh 'node --version'
}
}
}
When the Pipeline executes, Jenkins will automatically start the specified container and execute the defined steps within it:
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
[guided-tour] Running shell script
+ node --version
v16.13.1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
In short: if it is important to keep the workspace synchronized with other stages, use reuseNode true. Otherwise, a dockerized stage can be run on any other agent, or on the same agent but in a temporary workspace.
By default, for a containerized stage, Jenkins will:
pick any agent,
create a new empty workspace,
clone the Pipeline code into it,
mount this new workspace into the container.
If you have multiple Jenkins agents, your containerized stage can be started on any of them.
When reuseNode is set to true: no new workspace will be created; the current workspace from the current agent will be mounted into the container, and the container will be started on the same node, so all of the data will be synchronized.
pipeline {
agent any
stages {
stage('Build') {
agent {
docker {
image 'gradle:6.7-jdk11'
// Run the container on the node specified at the
// top-level of the Pipeline, in the same workspace,
// rather than on a new node entirely:
reuseNode true
}
}
steps {
sh 'gradle --version'
}
}
}
}
// Option "reuseNode true" currently unsupported in scripted pipeline
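Scripted Pipeline has no reuseNode option, but a similar effect follows from its structure: calling inside() within an existing node block runs the container on that same agent and mounts the current workspace. A rough sketch:
node {
    checkout scm
    // The container runs on this agent and mounts this workspace,
    // so files created here remain visible to later steps on this node
    docker.image('gradle:6.7-jdk11').inside {
        sh 'gradle --version'
    }
}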
Many build tools will download external dependencies and cache them locally for future re-use. Since containers are initially created with "clean" file systems, this can result in slower Pipelines, as they may not take advantage of on-disk caches between subsequent Pipeline runs.
Pipeline supports adding custom arguments which are passed to Docker, allowing users to specify custom Docker Volumes to mount, which can be used for caching data on the agent between Pipeline runs. The following example will cache ~/.m2 between Pipeline runs utilizing the maven container, thereby avoiding the need to re-download dependencies for subsequent runs of the Pipeline.
pipeline {
agent {
docker {
image 'maven:3.8.6-eclipse-temurin-11'
args '-v $HOME/.m2:/root/.m2'
}
}
stages {
stage('Build') {
steps {
sh 'mvn -B'
}
}
}
}
node {
/* Requires the Docker Pipeline plugin to be installed */
docker.image('maven:3.8.6-eclipse-temurin-11').inside('-v $HOME/.m2:/root/.m2') {
stage('Build') {
sh 'mvn -B'
}
}
}
It has become increasingly common for code bases to rely on multiple, different, technologies. For example, a repository might have both a Java-based back-end API implementation and a JavaScript-based front-end implementation. Combining Docker and Pipeline allows a Jenkinsfile to use multiple types of technologies, by combining the agent {} directive with different stages.
pipeline {
agent none
stages {
stage('Back-end') {
agent {
docker { image 'maven:3.8.6-eclipse-temurin-11' }
}
steps {
sh 'mvn --version'
}
}
stage('Front-end') {
agent {
docker { image 'node:16.13.1-alpine' }
}
steps {
sh 'node --version'
}
}
}
}
node {
/* Requires the Docker Pipeline plugin to be installed */
stage('Back-end') {
docker.image('maven:3.8.6-eclipse-temurin-11').inside {
sh 'mvn --version'
}
}
stage('Front-end') {
docker.image('node:16.13.1-alpine').inside {
sh 'node --version'
}
}
}
For projects which require a more customized execution environment, Pipeline also supports building and running a container from a Dockerfile in the source repository. In contrast to the previous approach of using an "off-the-shelf" container, the agent { dockerfile true } syntax will build a new image from a Dockerfile rather than pulling one from Docker Hub.
Re-using an example from above, with a more custom Dockerfile:
FROM node:16.13.1-alpine
RUN apk add -U subversion
By committing this to the root of the source repository, the Jenkinsfile can be changed to build a container based on this Dockerfile and then run the defined steps using that container:
pipeline {
agent { dockerfile true }
stages {
stage('Test') {
steps {
sh 'node --version'
sh 'svn --version'
}
}
}
}
The agent { dockerfile true } syntax supports a number of other options, which are described in more detail in the Pipeline Syntax section.
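For instance, a sketch using a few of those options to point at a non-default Dockerfile and build context (the file and directory names here are illustrative):
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.build'               // illustrative non-default Dockerfile name
            dir 'dockerfiles'                         // directory used as the build context
            additionalBuildArgs '--build-arg FOO=bar' // extra arguments passed to docker build
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}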
By default, Pipeline assumes that any configured agent is capable of running Docker-based Pipelines. For Jenkins environments which have macOS, Windows, or other agents, which are unable to run the Docker daemon, this default setting may be problematic. Pipeline provides a global option in the Manage Jenkins page, and on the Folder level, for specifying which agents (by Label) to use for running Docker-based Pipelines.
The /usr/local/bin directory is not included in the macOS PATH for Docker images by default. If executables from /usr/local/bin need to be called from within Jenkins, then the PATH needs to be extended to include /usr/local/bin.
Add a path node in the file "/usr/local/Cellar/jenkins-lts/XXX/homebrew.mxcl.jenkins-lts.plist" like this:
<key>EnvironmentVariables</key>
<dict>
<key>PATH</key>
<string><!-- insert revised path here --></string>
</dict>
The revised PATH string should be a colon-separated list of directories in the same format as the PATH environment variable, and should include:
/usr/local/bin
/usr/bin
/bin
/usr/sbin
/sbin
/Applications/Docker.app/Contents/Resources/bin/
/Users/XXX/Library/Group\ Containers/group.com.docker/Applications/Docker.app/Contents/Resources/bin
(where XXX is replaced by your user name)
Now restart Jenkins using "brew services restart jenkins-lts".
Using Docker in Pipeline can be an effective way to run a service on which the build, or a set of tests, may rely. Similar to the sidecar pattern, Docker Pipeline can run one container "in the background", while performing work in another. Utilizing this sidecar approach, a Pipeline can have a "clean" container provisioned for each Pipeline run.
Consider a hypothetical integration test suite which relies on a local MySQL database to be running. Using the withRun method, implemented in the Docker Pipeline plugin's support for Scripted Pipeline, a Jenkinsfile can run MySQL as a sidecar:
node {
checkout scm
/*
* In order to communicate with the MySQL server, this Pipeline explicitly
* maps the port (`3306`) to a known port on the host machine.
*/
docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"' +
' -p 3306:3306') { c ->
/* Wait until mysql service is up */
sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
/* Run some tests which require MySQL */
sh 'make check'
}
}
This example can be taken further, utilizing two containers simultaneously: one "sidecar" running MySQL, and another providing the execution environment, by using Docker container links.
node {
checkout scm
docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
docker.image('mysql:5').inside("--link ${c.id}:db") {
/* Wait until mysql service is up */
sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
}
docker.image('centos:7').inside("--link ${c.id}:db") {
/*
* Run some tests which require MySQL, and assume that it is
* available on the host name `db`
*/
sh 'make check'
}
}
}
The above example uses the object exposed by withRun, which has the running container's ID available via the id property. Using the container's ID, the Pipeline can create a link by passing custom Docker arguments to the inside() method.
The id property can also be useful for inspecting logs from a running Docker container before the Pipeline exits:
sh "docker logs ${c.id}"
In order to create a Docker image, the Docker Pipeline plugin also provides a build() method for creating a new image, from a Dockerfile in the repository, during a Pipeline run. One major benefit of using the syntax docker.build("my-image-name") is that a Scripted Pipeline can use the return value for subsequent Docker Pipeline calls, for example:
node {
checkout scm
def customImage = docker.build("my-image:${env.BUILD_ID}")
customImage.inside {
sh 'make test'
}
}
The return value can also be used to publish the Docker image to Docker Hub, or a custom Registry, via the push() method, for example:
node {
checkout scm
def customImage = docker.build("my-image:${env.BUILD_ID}")
customImage.push()
}
One common usage of image "tags" is to specify a latest tag for the most recently validated version of a Docker image. The push() method accepts an optional tag parameter, allowing the Pipeline to push the customImage with different tags, for example:
node {
checkout scm
def customImage = docker.build("my-image:${env.BUILD_ID}")
customImage.push()
customImage.push('latest')
}
The build() method builds the Dockerfile in the current directory by default. This can be overridden by providing a directory path containing a Dockerfile as the second argument of the build() method, for example:
node {
checkout scm
def testImage = docker.build("test-image", "./dockerfiles/test") (1)
testImage.inside {
sh 'make test'
}
}
1 Builds test-image from the Dockerfile found at ./dockerfiles/test/Dockerfile.
It is possible to pass other arguments to docker build by adding them to the second argument of the build() method. When passing arguments this way, the last value in that string must be the path to the Dockerfile, and should end with the folder to use as the build context.
This example overrides the default Dockerfile by passing the -f flag:
node {
checkout scm
def dockerfile = 'Dockerfile.test'
def customImage = docker.build("my-image:${env.BUILD_ID}",
"-f ${dockerfile} ./dockerfiles") (1)
}
1 Builds my-image:${env.BUILD_ID} from the Dockerfile found at ./dockerfiles/Dockerfile.test.
By default, the Docker Pipeline plugin will communicate with a local Docker daemon, typically accessed through /var/run/docker.sock. To select a non-default Docker server, such as with Docker Swarm, use the withServer() method, passing a URI and, optionally, the Credentials ID of a Docker Server Certificate Authentication pre-configured in Jenkins:
node {
checkout scm
docker.withServer('tcp://swarm.example.com:2376', 'swarm-certs') {
docker.image('mysql:5').withRun('-p 3306:3306') {
/* do things */
}
}
}
For inside() and build() to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted. Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; a typical symptom would be errors from nested sh commands.
When Jenkins detects that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.
Additionally, some versions of Docker Swarm do not support custom Registries.
By default, the Docker Pipeline plugin assumes the default Docker Registry of Docker Hub.
In order to use a custom Docker Registry, users of Scripted Pipeline can wrap steps with the withRegistry() method, passing in the custom Registry URL, for example:
node {
checkout scm
docker.withRegistry('https://registry.example.com') {
docker.image('my-custom-image').inside {
sh 'make test'
}
}
}
For a Docker Registry which requires authentication, add a "Username/Password" Credentials item from the Jenkins home page and use the Credentials ID as a second argument to withRegistry():
node {
checkout scm
docker.withRegistry('https://registry.example.com', 'credentials-id') {
def customImage = docker.build("my-image:${env.BUILD_ID}")
/* Push the container to the custom Registry */
customImage.push()
}
}