Manage Users

This section is a work in progress. Want to help? Check out the jenkinsci-docs mailing list. For other ways to contribute to the Jenkins project, see this page about participating and contributing.



Pipeline CPS Method Mismatches

Table of Contents
  • Common problems and solutions
    • Use of Pipeline steps from @NonCPS
    • Calling non-CPS-transformed methods with CPS-transformed arguments
    • Constructors
    • Overrides of non-CPS-transformed methods
  • Closures inside GString
  • False Positives

Jenkins Pipeline uses a library called Groovy CPS to run Pipeline scripts. While Pipeline uses the Groovy parser and compiler, unlike a regular Groovy environment it runs most of the program inside a special interpreter. This uses a continuation-passing style (CPS) transform to turn your code into a version that can save its current state to disk (a file called program.dat inside your build directory) and continue running even after Jenkins has restarted. (You can get some more technical background on the Pipeline: Groovy plugin page and the library page.)

While the CPS transform is usually transparent to users, there are limitations to what Groovy language constructs can be supported, and in some circumstances it can lead to counterintuitive behavior. JENKINS-31314 makes the runtime try to detect the most common mistake: calling CPS-transformed code from non-CPS-transformed code. The following kinds of things are CPS-transformed:

  • Almost all of the Pipeline script you write (including in libraries)

  • Most Pipeline steps, including all those which take a block

The following kinds of things are not  CPS-transformed:

  • Compiled Java bytecode, including

    • the Java Platform

    • Jenkins core and plugins

    • the runtime for the Groovy language

  • Constructor bodies in your Pipeline script

  • Any method in your Pipeline script marked with the @NonCPS  annotation

  • A few Pipeline steps which take no block and act instantaneously, such as echo  or properties

CPS-transformed code may call non-CPS-transformed code or other CPS-transformed code, and non-CPS-transformed code may call other non-CPS-transformed code, but non-CPS-transformed code may not call CPS-transformed code. If you try to call CPS-transformed code from non-CPS-transformed code, the CPS interpreter is unable to operate correctly, resulting in incorrect and often confusing results.

Common problems and solutions

Use of Pipeline steps from @NonCPS

Sometimes users will apply the @NonCPS annotation to a method definition in order to bypass the CPS transform inside that method. This can be done to work around limitations in Groovy language coverage (since the body of the method will execute using the native Groovy semantics), or to get better performance (the interpreter imposes a substantial overhead). However, such methods must not call CPS-transformed code such as Pipeline steps. For example, the following will not work:

@NonCPS
def compileOnPlatforms() {
  ['linux', 'windows'].each { arch ->
    node(arch) {
      sh 'make'
    }
  }
}
compileOnPlatforms()

Using the node or sh steps from this method is illegal, and the behavior will be anomalous. The warning in the logs from running this script looks like this:

expected to call WorkflowScript.compileOnPlatforms but wound up catching node

To fix this case, simply remove the annotation — it was not needed. (Longtime Pipeline users might have thought it was, prior to the fix of JENKINS-26481.)
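With the annotation removed, the same method runs correctly, since the node and sh steps are then called from CPS-transformed code:

def compileOnPlatforms() {
  ['linux', 'windows'].each { arch ->
    node(arch) {
      sh 'make'
    }
  }
}
compileOnPlatforms()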

Calling non-CPS-transformed methods with CPS-transformed arguments

Some Groovy and Java methods take complex types as parameters to support dynamic behavior. A common case is sorting methods that allow callers to specify a method to use for comparing objects (JENKINS-44924). Many similar methods in the Groovy standard library work correctly after the fix for JENKINS-26481, but some methods remain unfixed. For example, the following will not work:

def sortByLength(List<String> list) {
  list.toSorted { a, b -> Integer.valueOf(a.length()).compareTo(b.length()) }
}
def sorted = sortByLength(['333', '1', '4444', '22'])
echo(sorted.toString())

The closure passed to Iterable.toSorted is CPS-transformed, but Iterable.toSorted itself is not CPS-transformed internally, so this will not work as intended. The current behavior is that the return value of the call to toSorted will be the return value of the first call to the closure. In the example, this results in sorted being set to -1, and the warning in the logs looks like this:

expected to call java.util.ArrayList.toSorted but wound up catching org.jenkinsci.plugins.workflow.cps.CpsClosure2.call

To fix this case, any argument passed to these methods must not be CPS-transformed. This can be accomplished by encapsulating the problematic method (Iterable.toSorted in the example) inside another method and annotating the outer method with @NonCPS, or by creating an explicit class definition for the closure and annotating all methods on that class with @NonCPS.
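For example, a minimal sketch of the first approach, wrapping the problematic call in a method annotated with @NonCPS:

@NonCPS
def sortByLength(List<String> list) {
  // The closure is now defined inside a non-CPS-transformed method,
  // so toSorted receives an ordinary Groovy closure and behaves normally.
  list.toSorted { a, b -> Integer.valueOf(a.length()).compareTo(b.length()) }
}
def sorted = sortByLength(['333', '1', '4444', '22'])
echo(sorted.toString())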

Constructors

Occasionally, users may attempt to use CPS-transformed code such as Pipeline steps inside of a constructor in a Pipeline script. Unfortunately, the construction of objects via the new  operator in Groovy is not something that can be CPS-transformed (JENKINS-26313), and so this will not work. Here is an example that calls a CPS-transformed method in a constructor:

class Test {
  def x
  public Test() {
    setX()
  }
  private void setX() {
    this.x = 1;
  }
}
def x = new Test().x
echo "${x}"

The construction of Test  will fail when the constructor calls Test.setX because setX  is a CPS-transformed method. The warning in the logs from running this script looks like this:

expected to call Test.<init> but wound up catching Test.setX

To fix this case, ensure that any methods defined in a Pipeline script that are called from inside of a constructor are annotated with @NonCPS and that constructors do not call any Pipeline steps. If you must call CPS-transformed code such as Pipeline steps from the constructor, you need to move the logic related to the CPS-transformed methods out of the constructor, for example into a static factory method that calls the CPS-transformed code and then passes the results to the constructor.
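Applied to the example above, annotating setX with @NonCPS allows the construction to succeed:

class Test {
  def x
  public Test() {
    setX()
  }
  @NonCPS
  private void setX() {
    this.x = 1
  }
}
def x = new Test().x
echo "${x}"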

Overrides of non-CPS-transformed methods

Users may create a class in a Pipeline Script that extends a preexisting class defined outside of the Pipeline script, for example from the Java or Groovy standard libraries. When doing so, the subclass must ensure that any overriding methods are annotated with @NonCPS  and do not use any CPS-transformed code internally. Otherwise, the overriding methods will fail if called from a non-CPS context. For example, the following will not work:

class Test {
  @Override
  public String toString() {
    return "Test"
  }
}
def builder = new StringBuilder()
builder.append(new Test())
echo(builder.toString())

Calling the CPS-transformed override of toString  from non-CPS-transformed code such as StringBuilder.append is not permitted and will not work as expected in most cases. The warning in the logs from running this script looks like this:

expected to call java.lang.StringBuilder.append but wound up catching Test.toString

To fix this case, add the @NonCPS annotation to the overriding method, and remove any uses of CPS-transformed code such as Pipeline steps from the method.
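Applied to the example above:

class Test {
  @Override
  @NonCPS
  public String toString() {
    return "Test"
  }
}
def builder = new StringBuilder()
builder.append(new Test())
echo(builder.toString())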

Closures inside GString

In Groovy, it is possible to use a closure in a GString so that the closure is evaluated every time the GString is used as a String . However, in Pipeline scripts, this will not work as expected, because the closure inside of the GString will be CPS-transformed. Here is an example:

def x = 1
def s = "x = ${-> x}"
x = 2
echo(s)

Using a closure inside of a GString as in this example will not work. The warning from the logs when running this script looks like this:

expected to call WorkflowScript.echo but wound up catching org.jenkinsci.plugins.workflow.cps.CpsClosure2.call

To fix this case, replace the original GString with a closure that returns a GString built from a normal expression rather than a nested closure, and then call that closure wherever you would have used the original GString, as follows:

def x = 1
def s = { -> "x = ${x}" }
x = 2
echo(s())

False Positives

Unfortunately, some expressions may incorrectly trigger this warning even though they execute correctly. If you run into such a case, please file a new issue (after first checking for duplicates) for workflow-cps-plugin.



Pipeline Development Tools

Table of Contents
  • Command-line Pipeline Linter
    • Examples
  • Blue Ocean Editor
  • "Replay" Pipeline Runs with Modifications
    • Usage
    • Features
    • Limitations
  • IDE Integrations
    • Eclipse Jenkins Editor
    • VisualStudio Code Jenkins Pipeline Linter Connector
    • Neovim nvim-jenkinsfile-linter plugin
    • Atom linter-jenkins package
    • Sublime Text Jenkinsfile package
  • Pipeline Unit Testing Framework

Jenkins Pipeline includes built-in documentation and the Snippet Generator which are key resources when developing Pipelines. They provide detailed help and information that is customized to the currently installed version of Jenkins and related plugins. In this section, we’ll discuss other tools and resources that may help with development of Jenkins Pipelines.

Command-line Pipeline Linter

Jenkins can validate, or "lint", a Declarative Pipeline from the command line before actually running it. This can be done using a Jenkins CLI command or by making an HTTP POST request with appropriate parameters. We recommend using the SSH interface to run the linter. See the Jenkins CLI documentation for details on how to properly configure Jenkins for secure command-line access.

Linting via the CLI with SSH
# ssh (Jenkins CLI)
# JENKINS_PORT=[sshd port on controller]
# JENKINS_HOST=[Jenkins controller hostname]
ssh -p $JENKINS_PORT $JENKINS_HOST declarative-linter < Jenkinsfile
Linting via HTTP POST using curl
# curl (REST API)
# Assuming "anonymous read access" has been enabled on your Jenkins instance.
# JENKINS_URL=[root URL of Jenkins controller]
# JENKINS_CRUMB is needed if your Jenkins controller has CSRF protection enabled, as it should
JENKINS_CRUMB=`curl "$JENKINS_URL/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)"`
curl -X POST -H $JENKINS_CRUMB -F "jenkinsfile=<Jenkinsfile" $JENKINS_URL/pipeline-model-converter/validate
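Linting via HTTP POST with authentication
# If anonymous read access is not enabled, the same request can be authenticated
# with a username and API token instead (USER and API_TOKEN are hypothetical
# placeholders). On recent Jenkins versions, token-based authentication does not
# require a CSRF crumb.
curl --user "USER:API_TOKEN" -X POST -F "jenkinsfile=<Jenkinsfile" $JENKINS_URL/pipeline-model-converter/validate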

Examples

Below are two examples of the Pipeline Linter in action. This first example shows the output of the linter when it is passed an invalid Jenkinsfile , one that is missing part of the agent declaration.

Jenkinsfile
pipeline {
  agent
  stages {
    stage ('Initialize') {
      steps {
        echo 'Placeholder.'
      }
    }
  }
}
Linter output for invalid Jenkinsfile
# pass a Jenkinsfile that does not contain an "agent" section
ssh -p 8675 localhost declarative-linter < ./Jenkinsfile
Errors encountered validating Jenkinsfile:
WorkflowScript: 2: Not a valid section definition: "agent". Some extra configuration is required. @ line 2, column 3.
     agent
     ^

WorkflowScript: 1: Missing required section "agent" @ line 1, column 1.
   pipeline {
   ^

In this second example, the Jenkinsfile has been updated to include the missing any on the agent declaration. The linter now reports that the Pipeline is valid.

Jenkinsfile
pipeline {
  agent any
  stages {
    stage ('Initialize') {
      steps {
        echo 'Placeholder.'
      }
    }
  }
}
Linter output for valid Jenkinsfile
ssh -p 8675 localhost declarative-linter < ./Jenkinsfile
Jenkinsfile successfully validated.

Blue Ocean Editor

The Blue Ocean Pipeline Editor provides a WYSIWYG way to create Declarative Pipelines. The editor offers a structural view of all the stages, parallel branches, and steps in a Pipeline. The editor validates Pipeline changes as they are made, eliminating many errors before they are even committed. Behind the scenes it still generates Declarative Pipeline code.

Blue Ocean status

Blue Ocean will not receive further functionality updates. Blue Ocean will continue to provide easy-to-use Pipeline visualization, but it will not be enhanced further. It will only receive selective updates for significant security issues or functional defects.

The Pipeline syntax snippet generator assists users as they define Pipeline steps with their arguments. It is the preferred tool for Jenkins Pipeline creation, as it provides online help for the Pipeline steps available in your Jenkins controller. It uses the plugins installed on your Jenkins controller to generate the Pipeline syntax. Refer to the Pipeline steps reference page for information on all available Pipeline steps.

"Replay" Pipeline Runs with Modifications

Typically a Pipeline will be defined inside of the classic Jenkins web UI, or by committing to a Jenkinsfile in source control. Unfortunately, neither approach is ideal for rapid iteration, or prototyping, of a Pipeline. The "Replay" feature allows for quick modifications and execution of an existing Pipeline without changing the Pipeline configuration or creating a new commit.

Usage

To use the "Replay" feature:

  1. Select a previously completed run in the build history.

    Previous Pipeline Run
  2. Click "Replay" in the left menu

    Replay Left-menu Button
  3. Make modifications and click "Run". In this example, we changed "ruby-2.3" to "ruby-2.4".

    Replay Left-menu Button
  4. Check the results of changes

Once you are satisfied with the changes, you can use Replay to view them again, copy them back to your Pipeline job or Jenkinsfile , and then commit them using your usual engineering processes.

Features

  • Can be called multiple times on the same run - allows for easy parallel testing of different changes.

  • Can also be called on Pipeline runs that are still in-progress - As long as a Pipeline contained syntactically correct Groovy and was able to start, it can be Replayed.

  • Referenced Shared Library code is also modifiable - If a Pipeline run references a Shared Library, the code from the shared library will also be shown and modifiable as part of the Replay page.

  • Access Control via dedicated "Run / Replay" permission - implied by "Job / Configure". If the Pipeline is not configurable (e.g. a Branch Pipeline of a Multibranch project) or "Job / Configure" is not granted, users can still experiment with the Pipeline definition via Replay.

  • Can be used for Re-run - users lacking "Run / Replay" but who are granted "Job / Build" can still use Replay to run a build again with the same definition.

Limitations

  • Pipeline runs with syntax errors cannot be replayed - meaning their code cannot be viewed and any changes made in them cannot be retrieved. When using Replay for more significant modifications, save your changes to a file or editor outside of Jenkins before running them. See JENKINS-37589

  • Replayed Pipeline behavior may differ from runs started by other methods - For Pipelines that are not part of a Multi-branch Pipeline, the commit information may differ for the original run and the Replayed run. See JENKINS-36453

IDE Integrations

Eclipse Jenkins Editor

The Jenkins Editor Eclipse plugin can be found on Eclipse Marketplace. This special text editor provides some features for defining pipelines, e.g.:

  • Validate pipeline scripts by Jenkins Linter Validation. Failures are shown as Eclipse markers

  • An Outline with dedicated icons (for declarative Jenkins pipelines)

  • Syntax / keyword highlighting

  • Groovy validation

The Jenkins Editor Plugin is a third-party tool that is not supported by the Jenkins Project.

VisualStudio Code Jenkins Pipeline Linter Connector

The Jenkins Pipeline Linter Connector extension for VisualStudio Code takes the file that you have currently opened, pushes it to your Jenkins Server and displays the validation result in VS Code.

You can find the extension from within the VS Code extension browser or at the following URL: marketplace.visualstudio.com/items?itemName=janjoerke.jenkins-pipeline-linter-connector

The extension adds four settings entries to VS Code which select the Jenkins server you want to use for validation.

  • jenkins.pipeline.linter.connector.url is the endpoint at which your Jenkins Server expects the POST request, containing your Jenkinsfile which you want to validate. Typically this points to <your_jenkins_server:port>/pipeline-model-converter/validate .

  • jenkins.pipeline.linter.connector.user allows you to specify your Jenkins username.

  • jenkins.pipeline.linter.connector.pass allows you to specify your Jenkins password.

  • jenkins.pipeline.linter.connector.crumbUrl has to be specified if your Jenkins Server has CSRF protection enabled. Typically this points to <your_jenkins_server:port>/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,%22:%22,//crumb) .

Neovim nvim-jenkinsfile-linter plugin

The nvim-jenkinsfile-linter Neovim plugin allows you to validate a Jenkinsfile by using the Pipeline Linter API of your Jenkins instance and report any existing diagnostics in your editor.

Atom linter-jenkins package

The linter-jenkins Atom package allows you to validate a Jenkinsfile by using the Pipeline Linter API of a running Jenkins. You can install it directly from the Atom package manager. It also requires the Jenkinsfile language support package for Atom.

Sublime Text Jenkinsfile package

The Jenkinsfile Sublime Text package allows you to validate a Jenkinsfile by using the Pipeline Linter API of a running Jenkins instance over a secure channel (SSH). You can install it directly from the Sublime Text package manager.

You can find the package from within the Sublime Text interface via the Package Control package, at GitHub, or at packagecontrol.io:

  • https://github.com/june07/sublime-Jenkinsfile

  • https://packagecontrol.io/packages/Jenkinsfile

Pipeline Unit Testing Framework

The Pipeline Unit Testing Framework allows you to unit test Pipelines and Shared Libraries before running them in full. It provides a mock execution environment where real Pipeline steps are replaced with mock objects that you can use to check for expected behavior. New and rough around the edges, but promising. The README for that project contains examples and usage instructions.



Using Docker with Pipeline

Table of Contents
  • Customizing the execution environment
    • Workspace synchronization
    • Caching data for containers
    • Using multiple containers
    • Using a Dockerfile
    • Specifying a Docker Label
    • Path setup for mac OS users
  • Advanced Usage with Scripted Pipeline
    • Running "sidecar" containers
    • Building containers
    • Using a remote Docker server
    • Using a custom registry

Many organizations use Docker to unify their build and test environments across machines, and to provide an efficient mechanism for deploying applications. Pipeline versions 2.5 and higher have built-in support for interacting with Docker from within a Jenkinsfile .

While this section will cover the basics of utilizing Docker from within a Jenkinsfile , it will not cover the fundamentals of Docker, which can be read about in the Docker Getting Started Guide.

Customizing the execution environment

Pipeline is designed to easily use Docker images as the execution environment for a single Stage or the entire Pipeline. This means that a user can define the tools required for their Pipeline, without having to manually configure agents. Practically any tool which can be packaged in a Docker container can be used with ease by making only minor edits to a Jenkinsfile .

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        docker { image 'node:16.13.1-alpine' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
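For reference, here is a Scripted Pipeline sketch of the same idea, using the docker global variable covered under "Advanced Usage with Scripted Pipeline" below:

Jenkinsfile (Scripted Pipeline)
node {
    /* Run the contained steps inside a container based on the given image */
    docker.image('node:16.13.1-alpine').inside {
        sh 'node --version'
    }
}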

When the Pipeline executes, Jenkins will automatically start the specified container and execute the defined steps within it:

[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
[guided-tour] Running shell script
+ node --version
v16.13.1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }

Workspace synchronization

In short: if it is important to keep the workspace synchronized with other stages, use reuseNode true . Otherwise, a dockerized stage can be run on any other agent, or on the same agent but in a temporary workspace.

By default, for a containerized stage, Jenkins will:

  • pick any agent,

  • create a new empty workspace,

  • clone the pipeline code into it,

  • mount this new workspace into the container.

If you have multiple Jenkins agents, your containerized stage can be started on any of them.

When reuseNode is set to true , no new workspace will be created; the current workspace from the current agent will be mounted into the container, and the container will be started on the same node, so all of the data stays synchronized.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'gradle:6.7-jdk11'
                    // Run the container on the node specified at the
                    // top-level of the Pipeline, in the same workspace,
                    // rather than on a new node entirely:
                    reuseNode true
                }
            }
            steps {
                sh 'gradle --version'
            }
        }
    }
}
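In Scripted Pipeline there is no reuseNode option; a sketch of the equivalent behavior, since inside() runs on the node you have already allocated and mounts its current workspace into the container:

Jenkinsfile (Scripted Pipeline)
node {
    docker.image('gradle:6.7-jdk11').inside {
        sh 'gradle --version'
    }
}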

Caching data for containers

Many build tools will download external dependencies and cache them locally for future re-use. Since containers are initially created with "clean" file systems, this can result in slower Pipelines, as they may not take advantage of on-disk caches between subsequent Pipeline runs.

Pipeline supports adding custom arguments which are passed to Docker, allowing users to specify custom Docker Volumes to mount, which can be used for caching data on the agent between Pipeline runs. The following example will cache ~/.m2 between Pipeline runs utilizing the maven container, thereby avoiding the need to re-download dependencies for subsequent runs of the Pipeline.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        docker {
            image 'maven:3.8.6-eclipse-temurin-11'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B verify'
            }
        }
    }
}
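A Scripted sketch of the same caching setup, passing the volume mount to inside():

Jenkinsfile (Scripted Pipeline)
node {
    // Share the host's ~/.m2 cache with the container between runs
    docker.image('maven:3.8.6-eclipse-temurin-11').inside('-v $HOME/.m2:/root/.m2') {
        sh 'mvn -B verify'
    }
}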

Using multiple containers

It has become increasingly common for code bases to rely on multiple, different, technologies. For example, a repository might have both a Java-based back-end API implementation and a JavaScript-based front-end implementation. Combining Docker and Pipeline allows a Jenkinsfile to use multiple types of technologies by combining the agent {} directive, with different stages.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3.8.6-eclipse-temurin-11' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:16.13.1-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
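A Scripted sketch of the same multi-container approach:

Jenkinsfile (Scripted Pipeline)
node {
    stage('Back-end') {
        docker.image('maven:3.8.6-eclipse-temurin-11').inside {
            sh 'mvn --version'
        }
    }
    stage('Front-end') {
        docker.image('node:16.13.1-alpine').inside {
            sh 'node --version'
        }
    }
}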

Using a Dockerfile

For projects which require a more customized execution environment, Pipeline also supports building and running a container from a Dockerfile in the source repository. In contrast to the previous approach of using an "off-the-shelf" container, using the agent { dockerfile true } syntax will build a new image from a Dockerfile rather than pulling one from Docker Hub.

Re-using an example from above, with a more custom Dockerfile :

Dockerfile
FROM node:16.13.1-alpine

RUN apk add -U subversion

By committing this to the root of the source repository, the Jenkinsfile can be changed to build a container based on this Dockerfile and then run the defined steps using that container:

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
                sh 'svn --version'
            }
        }
    }
}

The agent { dockerfile true } syntax supports a number of other options which are described in more detail in the Pipeline Syntax section.
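For example, a sketch using the filename and dir options to build from a Dockerfile kept outside the repository root (the file and directory names here are hypothetical):

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.build' // hypothetical Dockerfile name
            dir 'build'                 // hypothetical directory containing it
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}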

Using a Dockerfile with Jenkins Pipeline

Specifying a Docker Label

By default, Pipeline assumes that any configured agent is capable of running Docker-based Pipelines. For Jenkins environments which have macOS, Windows, or other agents, which are unable to run the Docker daemon, this default setting may be problematic. Pipeline provides a global option in the Manage Jenkins page, and on the Folder level, for specifying which agents (by Label) to use for running Docker-based Pipelines.

Configuring the Pipeline Docker Label

Path setup for macOS users

The /usr/local/bin directory is not included in the macOS PATH for Docker images by default. If executables from /usr/local/bin need to be called from within Jenkins, then the PATH needs to be extended to include /usr/local/bin . Add a path node in the file "/usr/local/Cellar/jenkins-lts/XXX/homebrew.mxcl.jenkins-lts.plist" like this:

Contents of homebrew.mxcl.jenkins-lts.plist
<key>EnvironmentVariables</key>
<dict>
<key>PATH</key>
<string><!-- insert revised path here --></string>
</dict>

The revised PATH string should be a colon separated list of directories in the same format as the PATH environment variable and should include:

  • /usr/local/bin

  • /usr/bin

  • /bin

  • /usr/sbin

  • /sbin

  • /Applications/Docker.app/Contents/Resources/bin/

  • /Users/XXX/Library/Group\ Containers/group.com.docker/Applications/Docker.app/Contents/Resources/bin (where XXX is replaced by your user name)

Now restart Jenkins using "brew services restart jenkins-lts".

Advanced Usage with Scripted Pipeline

Running "sidecar" containers

Using Docker in Pipeline can be an effective way to run a service on which the build, or a set of tests, may rely. Similar to the sidecar pattern , Docker Pipeline can run one container "in the background", while performing work in another. Utilizing this sidecar approach, a Pipeline can have a "clean" container provisioned for each Pipeline run.

Consider a hypothetical integration test suite which relies on a local MySQL database to be running. Using the withRun method, implemented in the Docker Pipeline plugin’s support for Scripted Pipeline, a Jenkinsfile can run MySQL as a sidecar:

node {
    checkout scm
    /*
     * In order to communicate with the MySQL server, this Pipeline explicitly
     * maps the port (`3306`) to a known port on the host machine.
     */
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"' +
                                    ' -p 3306:3306') { c ->
        /* Wait until mysql service is up */
        sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
        /* Run some tests which require MySQL */
        sh 'make check'
    }
}

This example can be taken further, utilizing two containers simultaneously. One "sidecar" running MySQL, and another providing the execution environment, by using the Docker container links.

node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}

The above example uses the object exposed by withRun , which has the running container’s ID available via the id property. Using the container’s ID, the Pipeline can create a link by passing custom Docker arguments to the inside() method.

The id property can also be useful for inspecting logs from a running Docker container before the Pipeline exits:

sh "docker logs ${c.id}"

Building containers

In order to create a Docker image, the Docker Pipeline plugin also provides a build() method for creating a new image, from a Dockerfile in the repository, during a Pipeline run.

One major benefit of using the syntax docker.build("my-image-name") is that a Scripted Pipeline can use the return value for subsequent Docker Pipeline calls, for example:

node {
    checkout scm

    def customImage = docker.build("my-image:${env.BUILD_ID}")

    customImage.inside {
        sh 'make test'
    }
}

The return value can also be used to publish the Docker image to Docker Hub, or a custom Registry, via the push() method, for example:

node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.push()
}

One common usage of image "tags" is to specify a latest tag for the most recently validated version of a Docker image. The push() method accepts an optional tag parameter, allowing the Pipeline to push the customImage with different tags, for example:

node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.push()

    customImage.push('latest')
}

The build() method builds the Dockerfile in the current directory by default. This can be overridden by providing a directory path containing a Dockerfile as the second argument of the build() method, for example:

node {
    checkout scm
    def testImage = docker.build("test-image", "./dockerfiles/test") (1)

    testImage.inside {
        sh 'make test'
    }
}
1 Builds test-image from the Dockerfile found at ./dockerfiles/test/Dockerfile .

It is possible to pass other arguments to docker build by adding them to the second argument of the build() method. When passing arguments this way, the last value in that string must be the folder to use as the build context.

This example overrides the default Dockerfile by passing the -f flag:

node {
    checkout scm
    def dockerfile = 'Dockerfile.test'
    def customImage = docker.build("my-image:${env.BUILD_ID}",
                                   "-f ${dockerfile} ./dockerfiles") (1)
}
1 Builds my-image:${env.BUILD_ID} from the Dockerfile found at ./dockerfiles/Dockerfile.test .

Using a remote Docker server

By default, the Docker Pipeline plugin will communicate with a local Docker daemon, typically accessed through /var/run/docker.sock .

To select a non-default Docker server, such as with Docker Swarm, the withServer() method should be used.

Pass a URI, and optionally the Credentials ID of a Docker Server Certificate Authentication pre-configured in Jenkins, to the method:

node {
    checkout scm

    docker.withServer('tcp://swarm.example.com:2376', 'swarm-certs') {
        docker.image('mysql:5').withRun('-p 3306:3306') {
            /* do things */
        }
    }
}

inside() and build() will not work properly with a Docker Swarm server out of the box

For inside() to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted.

Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; a typical symptom would be errors from nested sh commands such as

cannot create /…@tmp/durable-…/pid: Directory nonexistent

When Jenkins detects that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.

Additionally some versions of Docker Swarm do not support custom Registries.

Using a custom registry

By default the Docker Pipeline plugin assumes the default Docker Registry of Docker Hub.

In order to use a custom Docker Registry, users of Scripted Pipeline can wrap steps with the withRegistry() method, passing in the custom Registry URL, for example:

node {
    checkout scm

    docker.withRegistry('https://registry.example.com') {

        docker.image('my-custom-image').inside {
            sh 'make test'
        }
    }
}

For a Docker Registry which requires authentication, add a "Username/Password" Credentials item from the Jenkins home page and use the Credentials ID as a second argument to withRegistry() :

node {
    checkout scm

    docker.withRegistry('https://registry.example.com', 'credentials-id') {

        def customImage = docker.build("my-image:${env.BUILD_ID}")

        /* Push the container to the custom Registry */
        customImage.push()
    }
}


Getting started with Pipeline

Table of Contents
  • Prerequisites
  • Defining a Pipeline
    • Through Blue Ocean
    • Through the classic UI
    • In SCM
  • Built-in Documentation
    • Snippet Generator
    • Global Variable Reference
    • Declarative Directive Generator
  • Further Reading
    • Additional Resources

As mentioned previously, Jenkins Pipeline is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins. Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines "as code" via the Pipeline DSL. [1]

This section describes how to get started with creating your Pipeline project in Jenkins and introduces you to the various ways that a Jenkinsfile can be created and stored.

Prerequisites

To use Jenkins Pipeline, you will need:

  • Jenkins 2.x or later (older versions back to 1.642.3 may work but are not recommended)

  • Pipeline plugin, [2] which is installed as part of the "suggested plugins" (specified when running through the Post-installation setup wizard after installing Jenkins).

Read more about how to install and manage plugins in Managing Plugins.

Defining a Pipeline

Both Declarative and Scripted Pipeline are DSLs [1] to describe portions of your software delivery pipeline. Scripted Pipeline is written in a limited form of Groovy syntax.

Relevant components of Groovy syntax will be introduced as required throughout this documentation, so while an understanding of Groovy is helpful, it is not required to work with Pipeline.

A Pipeline can be created in one of the following ways:

  • Through Blue Ocean - after setting up a Pipeline project in Blue Ocean, the Blue Ocean UI helps you write your Pipeline’s Jenkinsfile and commit it to source control.

  • Through the classic UI - you can enter a basic Pipeline directly in Jenkins through the classic UI.

  • In SCM - you can write a Jenkinsfile manually, which you can commit to your project’s source control repository. [3]

The syntax for defining a Pipeline with either approach is the same, but while Jenkins supports entering Pipeline directly into the classic UI, it is generally considered best practice to define the Pipeline in a Jenkinsfile which Jenkins will then load directly from source control.

This video provides basic instructions on how to write both Declarative and Scripted Pipelines.

Writing a Pipeline script in Jenkins

Through Blue Ocean

If you are new to Jenkins Pipeline, the Blue Ocean UI helps you set up your Pipeline project, and automatically creates and writes your Pipeline (i.e. the Jenkinsfile ) for you through the graphical Pipeline editor.

As part of setting up your Pipeline project in Blue Ocean, Jenkins configures a secure and appropriately authenticated connection to your project’s source control repository. Therefore, any changes you make to the Jenkinsfile via Blue Ocean’s Pipeline editor are automatically saved and committed to source control.

Read more about Blue Ocean in the Blue Ocean chapter and Getting started with Blue Ocean page.

Blue Ocean status

Blue Ocean will not receive further functionality updates. Blue Ocean will continue to provide easy-to-use Pipeline visualization, but it will not be enhanced further. It will only receive selective updates for significant security issues or functional defects.

The Pipeline syntax snippet generator assists users as they define Pipeline steps with their arguments. It is the preferred tool for Jenkins Pipeline creation, as it provides online help for the Pipeline steps available in your Jenkins controller. It uses the plugins installed on your Jenkins controller to generate the Pipeline syntax. Refer to the Pipeline steps reference page for information on all available Pipeline steps.

Through the classic UI

A Jenkinsfile created using the classic UI is stored by Jenkins itself (within the Jenkins home directory).

To create a basic Pipeline through the Jenkins classic UI:

  1. If required, ensure you are logged in to Jenkins.

  2. From the Jenkins home page (i.e. the Dashboard of the Jenkins classic UI), click New Item at the top left.

    Classic UI left column

  3. In the Enter an item name field, specify the name for your new Pipeline project.
    Caution: Jenkins uses this item name to create directories on disk. It is recommended to avoid using spaces in item names, since doing so may uncover bugs in scripts that do not properly handle spaces in directory paths.

  4. Scroll down and click Pipeline , then click OK at the end of the page to open the Pipeline configuration page (whose General tab is selected).

    Enter a name, click Pipeline and then click OK

  5. Click the Pipeline tab at the top of the page to scroll down to the Pipeline section.
    Note: If instead you are defining your Jenkinsfile in source control, follow the instructions in In SCM below.

  6. In the Pipeline section, ensure that the Definition field indicates the Pipeline script option.

  7. Enter your Pipeline code into the Script text area.
    For instance, copy the following Declarative example Pipeline code (below the Jenkinsfile heading) or the Scripted equivalent shown after it, and paste this into the Script text area. (The Declarative example below is used throughout the remainder of this procedure.)

    Jenkinsfile (Declarative Pipeline)
    pipeline {
        agent any (1)
        stages {
            stage('Stage 1') {
                steps {
                    echo 'Hello world!' (2)
                }
            }
        }
    }
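    For comparison, here is a minimal Scripted equivalent of the same Pipeline (a sketch; callout 3 below refers to its node step):

    Jenkinsfile (Scripted Pipeline)
    node { (3)
        stage('Stage 1') {
            echo 'Hello world!' (2)
        }
    }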
    1 agent instructs Jenkins to allocate an executor (on any available agent/node in the Jenkins environment) and workspace for the entire Pipeline.
    2 echo writes a simple string to the console output.
    3 node effectively does the same as agent (above).

    Example Pipeline in the classic UI

    Note: You can also select from canned Scripted Pipeline examples from the try sample Pipeline option at the top right of the Script text area. Be aware that there are no canned Declarative Pipeline examples available from this field.

  8. Click Save to open the Pipeline project/item view page.

  9. On this page, click Build Now on the left to run the Pipeline.

    Classic UI left column on an item

  10. Under Build History on the left, click #1 to access the details for this particular Pipeline run.

  11. Click Console Output to see the full output from the Pipeline run. The following output shows a successful run of your Pipeline.

    Console Output for the Pipeline

    Notes:

    • You can also access the console output directly from the Dashboard by clicking the colored globe to the left of the build number (e.g. #1 ).

    • Defining a Pipeline through the classic UI is convenient for testing Pipeline code snippets, or for handling simple Pipelines or Pipelines that do not require source code to be checked out/cloned from a repository. As mentioned above, unlike Jenkinsfiles you define through Blue Ocean (above) or in source control (below), Jenkinsfiles entered into the Script text area of Pipeline projects are stored by Jenkins itself, within the Jenkins home directory. Therefore, for greater control and flexibility over your Pipeline, particularly for projects in source control that are likely to gain complexity, it is recommended that you use Blue Ocean or source control to define your Jenkinsfile .

In SCM

Complex Pipelines are difficult to write and maintain within the classic UI’s Script text area of the Pipeline configuration page.

To make this easier, your Pipeline’s Jenkinsfile can be written in a text editor or integrated development environment (IDE) and committed to source control [3] (optionally with the application code that Jenkins will build). Jenkins can then check out your Jenkinsfile from source control as part of your Pipeline project’s build process and then proceed to execute your Pipeline.

To configure your Pipeline project to use a Jenkinsfile from source control:

  1. Follow the procedure above for defining your Pipeline through the classic UI until you reach step 5 (accessing the Pipeline section on the Pipeline configuration page).

  2. From the Definition field, choose the Pipeline script from SCM option.

  3. From the SCM field, choose the type of source control system of the repository containing your Jenkinsfile .

  4. Complete the fields specific to your repository’s source control system.
    Tip: If you are uncertain of what value to specify for a given field, click its ? icon to the right for more information.

  5. In the Script Path field, specify the location (and name) of your Jenkinsfile . This location is relative to the root of the repository that Jenkins checks out/clones, so it should match the repository's file structure. The default value of this field assumes that your Jenkinsfile is named "Jenkinsfile" and is located at the root of the repository.

When you update the designated repository, a new build is triggered, as long as the Pipeline is configured with an SCM polling trigger.

Since Pipeline code (i.e. Scripted Pipeline in particular) is written in Groovy-like syntax, if your IDE is not correctly syntax highlighting your Jenkinsfile , try inserting the line #!/usr/bin/env groovy at the top of the Jenkinsfile , [4] which may rectify the issue.

Built-in Documentation

Pipeline ships with built-in documentation features to make it easier to create Pipelines of varying complexities. This built-in documentation is automatically generated and updated based on the plugins installed in the Jenkins instance.

The built-in documentation can be found globally at ${YOUR_JENKINS_URL}/pipeline-syntax . The same documentation is also linked as Pipeline Syntax in the side-bar for any configured Pipeline project.

Classic UI left column on an item

Snippet Generator

The built-in "Snippet Generator" utility is helpful for creating bits of code for individual steps, discovering new steps provided by plugins, or experimenting with different parameters for a particular step.

The Snippet Generator is dynamically populated with a list of the steps available to the Jenkins instance. The number of steps available is dependent on the plugins installed which explicitly expose steps for use in Pipeline.

To generate a step snippet with the Snippet Generator:

  1. Navigate to the Pipeline Syntax link (referenced above) from a configured Pipeline, or at ${YOUR_JENKINS_URL}/pipeline-syntax .

  2. Select the desired step in the Sample Step dropdown menu

  3. Use the dynamically populated area below the Sample Step dropdown to configure the selected step.

  4. Click Generate Pipeline Script to create a snippet of Pipeline which can be copied and pasted into a Pipeline.
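For example, selecting the archiveArtifacts sample step and filling in its form produces a one-line snippet along these lines (the artifact pattern shown is a hypothetical value):

archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true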

Snippet Generator

To access additional information and/or documentation about the step selected, click on the help icon (indicated by the red arrow in the image above).

Global Variable Reference

In addition to the Snippet Generator, which only surfaces steps, Pipeline also provides a built-in " Global Variable Reference ." Like the Snippet Generator, it is also dynamically populated by plugins. Unlike the Snippet Generator however, the Global Variable Reference only contains documentation for variables provided by Pipeline or plugins, which are available for Pipelines.

The variables provided by default in Pipeline are:

env

Exposes environment variables, for example: env.PATH or env.BUILD_ID . Consult the built-in global variable reference at ${YOUR_JENKINS_URL}/pipeline-syntax/globals#env for a complete, and up to date, list of environment variables available in Pipeline.

params

Exposes all parameters defined for the Pipeline as a read-only Map, for example: params.MY_PARAM_NAME .

currentBuild

May be used to discover information about the currently executing Pipeline, with properties such as currentBuild.result , currentBuild.displayName , etc. Consult the built-in global variable reference at ${YOUR_JENKINS_URL}/pipeline-syntax/globals for a complete, and up to date, list of properties available on currentBuild .
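A minimal Declarative sketch combining these variables (MY_PARAM_NAME and its default value are hypothetical):

pipeline {
    agent any
    parameters {
        string(name: 'MY_PARAM_NAME', defaultValue: 'example')
    }
    stages {
        stage('Info') {
            steps {
                echo "Build ID: ${env.BUILD_ID}"
                echo "Parameter: ${params.MY_PARAM_NAME}"
                echo "Result so far: ${currentBuild.currentResult}"
            }
        }
    }
}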

This video reviews using the currentBuild variable in Jenkins Pipeline.

Declarative Directive Generator

While the Snippet Generator helps with generating steps for a Scripted Pipeline or for the steps block in a stage in a Declarative Pipeline, it does not cover the sections and directives used to define a Declarative Pipeline. The "Declarative Directive Generator" utility helps with that. Similar to the Snippet Generator, the Directive Generator allows you to choose a Declarative directive, configure it in a form, and generate the configuration for that directive, which you can then use in your Declarative Pipeline.

To generate a Declarative directive using the Declarative Directive Generator:

  1. Navigate to the Pipeline Syntax link (referenced above) from a configured Pipeline, and then click on the Declarative Directive Generator link in the sidepanel, or go directly to ${YOUR_JENKINS_URL}/directive-generator .

  2. Select the desired directive in the dropdown menu

  3. Use the dynamically populated area below the dropdown to configure the selected directive.

  4. Click Generate Directive to create the directive’s configuration to copy into your Pipeline.

The Directive Generator can generate configuration for nested directives, such as conditions inside a when directive, but it cannot generate Pipeline steps. For the contents of directives which contain steps, such as steps inside a stage or conditions like always or failure inside post , the Directive Generator adds a placeholder comment instead. You will still need to add steps to your Pipeline by hand.

Jenkinsfile (Declarative Pipeline)
stage('Stage 1') {
    steps {
        // One or more steps need to be included within the steps block.
    }
}

Further Reading

This section merely scratches the surface of what can be done with Jenkins Pipeline, but should provide enough of a foundation for you to start experimenting with a test Jenkins instance.

In the next section, The Jenkinsfile, more Pipeline steps will be discussed along with patterns for implementing successful, real-world, Jenkins Pipelines.

Additional Resources

  • Pipeline Steps Reference, encompassing all steps provided by plugins distributed in the Jenkins Update Center.

  • Pipeline Examples, a community-curated collection of copyable Pipeline examples.


1. Domain-specific language
2. Pipeline plugin
3. Source control management
4. Shebang (general definition)


Pipeline

Table of Contents
  • What is Jenkins Pipeline?
    • Declarative versus Scripted Pipeline syntax
  • Why Pipeline?
  • Pipeline concepts
    • Pipeline
    • Node
    • Stage
    • Step
  • Pipeline syntax overview
    • Declarative Pipeline fundamentals
    • Scripted Pipeline fundamentals
  • Pipeline example

This chapter covers all recommended aspects of Jenkins Pipeline functionality, including how to:

  • get started with Pipeline - covers how to define a Jenkins Pipeline (i.e. your Pipeline ) through Blue Ocean, through the classic UI or in SCM,

  • create and use a Jenkinsfile - covers use-case scenarios on how to craft and construct your Jenkinsfile ,

  • work with branches and pull requests,

  • use Docker with Pipeline - covers how Jenkins can invoke Docker containers on agents/nodes (from a Jenkinsfile ) to build your Pipeline projects,

  • extend Pipeline with shared libraries,

  • use different development tools to facilitate the creation of your Pipeline, and

  • work with Pipeline syntax - this page is a comprehensive reference of all Declarative Pipeline syntax.

For an overview of content in the Jenkins User Handbook, see User Handbook Overview.

What is Jenkins Pipeline?

Jenkins Pipeline (or simply "Pipeline" with a capital "P") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.

A continuous delivery (CD) pipeline is an automated expression of your process for getting software from version control right through to your users and customers. Every change to your software (committed in source control) goes through a complex process on its way to being released. This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a "build") through multiple stages of testing and deployment.

Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines "as code" via the Pipeline domain-specific language (DSL) syntax. [1]

The definition of a Jenkins Pipeline is written into a text file (called a Jenkinsfile ) which in turn can be committed to a project’s source control repository. [2] This is the foundation of "Pipeline-as-code"; treating the CD pipeline as a part of the application to be versioned and reviewed like any other code.

Creating a Jenkinsfile and committing it to source control provides a number of immediate benefits:

  • Automatically creates a Pipeline build process for all branches and pull requests.

  • Code review/iteration on the Pipeline (along with the remaining source code).

  • Audit trail for the Pipeline.

  • Single source of truth [3] for the Pipeline, which can be viewed and edited by multiple members of the project.

While the syntax for defining a Pipeline, either in the web UI or with a Jenkinsfile , is the same, it is generally considered best practice to define the Pipeline in a Jenkinsfile and check that in to source control.

Declarative versus Scripted Pipeline syntax

A Jenkinsfile can be written using two types of syntax - Declarative and Scripted.

Declarative and Scripted Pipelines are constructed fundamentally differently. Declarative Pipeline is a more recent feature of Jenkins Pipeline which:

  • provides richer syntactical features over Scripted Pipeline syntax, and

  • is designed to make writing and reading Pipeline code easier.

Many of the individual syntactical components (or "steps") written into a Jenkinsfile , however, are common to both Declarative and Scripted Pipeline. Read more about how these two types of syntax differ in Pipeline concepts and Pipeline syntax overview below.

Why Pipeline?

Jenkins is, fundamentally, an automation engine which supports a number of automation patterns. Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that span from simple continuous integration to comprehensive CD pipelines. By modeling a series of related tasks, users can take advantage of the many features of Pipeline:

  • Code : Pipelines are implemented in code and typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline.

  • Durable : Pipelines can survive both planned and unplanned restarts of the Jenkins controller.

  • Pausable : Pipelines can optionally stop and wait for human input or approval before continuing the Pipeline run.

  • Versatile : Pipelines support complex real-world CD requirements, including the ability to fork/join, loop, and perform work in parallel.

  • Extensible : The Pipeline plugin supports custom extensions to its DSL [1] and multiple options for integration with other plugins.

While Jenkins has always allowed rudimentary forms of chaining Freestyle Jobs together to perform sequential tasks, [4] Pipeline makes this concept a first-class citizen in Jenkins.

What is the difference between Freestyle and Pipeline in Jenkins

Building on the core Jenkins value of extensibility, Pipeline is also extensible both by users with Pipeline Shared Libraries and by plugin developers. [5]

The flowchart below is an example of one CD scenario easily modeled in Jenkins Pipeline:

Pipeline Flow

Pipeline concepts

The following concepts are key aspects of Jenkins Pipeline, which tie in closely to Pipeline syntax (see the overview below).

Pipeline

A Pipeline is a user-defined model of a CD pipeline. A Pipeline’s code defines your entire build process, which typically includes stages for building an application, testing it and then delivering it.

Also, a pipeline block is a key part of Declarative Pipeline syntax.

Node

A node is a machine which is part of the Jenkins environment and is capable of executing a Pipeline.

Also, a node block is a key part of Scripted Pipeline syntax.

Stage

A stage block defines a conceptually distinct subset of tasks performed through the entire Pipeline (e.g. "Build", "Test" and "Deploy" stages), which is used by many plugins to visualize or present Jenkins Pipeline status/progress. [6]

Step

A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time (or "step" in the process). For example, to execute the shell command make , use the sh step: sh 'make' . When a plugin extends the Pipeline DSL, [1] that typically means the plugin has implemented a new step .

Pipeline syntax overview

The following Pipeline code skeletons illustrate the fundamental differences between Declarative Pipeline syntax and Scripted Pipeline syntax.

Be aware that both stages and steps (above) are common elements of both Declarative and Scripted Pipeline syntax.

Declarative Pipeline fundamentals

In Declarative Pipeline syntax, the pipeline block defines all the work done throughout your entire Pipeline.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any (1)
    stages {
        stage('Build') { (2)
            steps {
                // (3)
            }
        }
        stage('Test') { (4)
            steps {
                // (5)
            }
        }
        stage('Deploy') { (6)
            steps {
                // (7)
            }
        }
    }
}
1 Execute this Pipeline or any of its stages, on any available agent.
2 Defines the "Build" stage.
3 Perform some steps related to the "Build" stage.
4 Defines the "Test" stage.
5 Perform some steps related to the "Test" stage.
6 Defines the "Deploy" stage.
7 Perform some steps related to the "Deploy" stage.

Scripted Pipeline fundamentals

In Scripted Pipeline syntax, one or more node blocks do the core work throughout the entire Pipeline. Although this is not a mandatory requirement of Scripted Pipeline syntax, confining your Pipeline’s work inside of a node block does two things:

  1. Schedules the steps contained within the block to run by adding an item to the Jenkins queue. As soon as an executor is free on a node, the steps will run.

  2. Creates a workspace (a directory specific to that particular Pipeline) where work can be done on files checked out from source control.
    Caution: Depending on your Jenkins configuration, some workspaces may not get automatically cleaned up after a period of inactivity. See tickets and discussion linked from JENKINS-2111 for more information.

Jenkinsfile (Scripted Pipeline)
node {  (1)
    stage('Build') { (2)
        // (3)
    }
    stage('Test') { (4)
        // (5)
    }
    stage('Deploy') { (6)
        // (7)
    }
}
1 Execute this Pipeline or any of its stages, on any available agent.
2 Defines the "Build" stage. stage blocks are optional in Scripted Pipeline syntax. However, implementing stage blocks in a Scripted Pipeline provides clearer visualization of each `stage’s subset of tasks/steps in the Jenkins UI.
3 Perform some steps related to the "Build" stage.
4 Defines the "Test" stage.
5 Perform some steps related to the "Test" stage.
6 Defines the "Deploy" stage.
7 Perform some steps related to the "Deploy" stage.

Pipeline example

Here is an example of a Jenkinsfile using Declarative Pipeline syntax; a Scripted syntax equivalent follows the Declarative example:

Jenkinsfile (Declarative Pipeline)
pipeline { (1)
    agent any (2)
    options {
        skipStagesAfterUnstable()
    }
    stages {
        stage('Build') { (3)
            steps { (4)
                sh 'make' (5)
            }
        }
        stage('Test'){
            steps {
                sh 'make check'
                junit 'reports/**/*.xml' (6)
            }
        }
        stage('Deploy') {
            steps {
                sh 'make publish'
            }
        }
    }
}
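A Scripted Pipeline sketch of the same three stages (note that the options directive is Declarative-specific and has no direct Scripted counterpart):

Jenkinsfile (Scripted Pipeline)
node {
    stage('Build') {
        sh 'make' (7)
    }
    stage('Test') {
        sh 'make check'
        junit 'reports/**/*.xml'
    }
    stage('Deploy') {
        sh 'make publish'
    }
}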
1 pipeline is Declarative Pipeline-specific syntax that defines a "block" containing all content and instructions for executing the entire Pipeline.
2 agent is Declarative Pipeline-specific syntax that instructs Jenkins to allocate an executor (on a node) and workspace for the entire Pipeline.
3 stage is a syntax block that describes a stage of this Pipeline. Read more about stage blocks in Declarative Pipeline syntax on the Pipeline syntax page. As mentioned above, stage blocks are optional in Scripted Pipeline syntax.
4 steps is Declarative Pipeline-specific syntax that describes the steps to be run in this stage .
5 sh is a Pipeline step (provided by the Pipeline: Nodes and Processes plugin) that executes the given shell command.
6 junit is another Pipeline step (provided by the JUnit plugin) for aggregating test reports.
7 sh is a Pipeline step (provided by the Pipeline: Nodes and Processes plugin) that executes the given shell command.

Read more about Pipeline syntax on the Pipeline Syntax page.


1. Domain-specific language
2. Source control management
3. Single source of truth
4. Additional plugins have been used to implement complex behaviors utilizing Freestyle Jobs such as the Copy Artifact, Parameterized Trigger, and Promoted Builds plugins
5. GitHub Organization Folder plugin
6. Blue Ocean, Pipeline: Stage View plugin

