A "build pipeline" in Jenkins terminology is a cascade of jobs, where the successful execution of one job (the "upstream build") triggers the execution of one or more other jobs (the "downstream builds").
I have seen such "build pipelines" very frequently, so they seem to be an obvious solution to a common problem (though I am not sure what exactly this problem is…).
The main advantages of a build pipeline probably are:
  • A complex build process looks less complex if split up into several jobs
  • If the build pipeline fails, it is obvious at which step (i.e. at which job) it failed
Surprisingly, my own experience with build pipelines is just the opposite: build pipelines lead to complex and hard-to-maintain CI builds. Let me point out why:

When setting up a “good” CI build (and by “good” I mean: stable, maintainable, repeatable etc.), you should consider the following points (and each one is an argument against build pipelines):


Self-Contained Jobs

Your job needs a clean database? Then clean it up as part of the job itself, and don't hope that someone did it before. Your job may only be executed if all unit tests pass? Then execute the tests within your job.
The jobs of a build pipeline usually share the same workspace, which means that every job expects the workspace to be in a certain state (and causes strange errors if it isn't). Furthermore, it can be hard to make a build pipeline "thread-safe", i.e. to allow several runs of the pipeline to execute concurrently.
A job must not rely on anything that is out of its scope. If the job needs exclusive access to a shared resource (like an application server or a database instance), use the Locks plugin.
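As an illustration, a self-contained job's "Execute shell" build step might look like the following sketch. All concrete names (`reset_database.sh`, `run_unit_tests.sh`) are placeholders for whatever your project actually uses:

```shell
#!/bin/sh
# Sketch of a self-contained shell build step.
set -e                        # abort the job on the first failing step

# 1. Put the workspace into a known state -- don't rely on an upstream job.
rm -rf build
mkdir build
# ./reset_database.sh         # e.g. drop and re-create the test database

# 2. Run the unit tests inside this job, not in a separate upstream job.
# ./run_unit_tests.sh
echo "tests passed" > build/test.log

# 3. Only then produce the artifact.
echo "my-artifact" > build/output.txt
```

Because every precondition is established by the job itself, the job can be triggered at any time, in any order, without breaking.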

Build Triggers

Every build should know why it was originally triggered.
E.g. if your build is triggered by a change to the source code repository, Jenkins makes it very easy to trace a failed test back to the source code revision that is responsible for it.
If – on the other hand – your build is part of a pipeline (and therefore triggered by another build), it can be very hard to determine the changelog between two successive pipeline builds.


Locking

When using a build pipeline, you must ensure that
  • each job holds a lock on the same shared resources (and the resources must not get unlocked when going downstream)
  • each job has the same expiration policy
  • whenever one job executes, any other job is blocked until the end of the pipeline is reached.
These problems simply don’t exist if the build process is mapped to a single job.
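In a single job, the critical section can be guarded by one lock held for the whole duration. A minimal sketch using the standard `flock(1)` utility (the lock file path and the guarded commands are assumptions, not part of any real setup):

```shell
#!/bin/sh
set -e
LOCKFILE=/tmp/ci-shared-db.lock    # assumed path; one lock for the whole build

# Hold the lock for the entire critical section. Nothing gets unlocked
# "between jobs", because there is only one job.
(
  flock 9                          # blocks until the shared resource is free
  echo "deploying to shared test server"    # placeholder for the real steps
  echo "running integration tests"
) 9>"$LOCKFILE"
```

The lock is released automatically when the subshell exits, whether the steps succeed or fail.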
Furthermore, a build pipeline prevents code reuse (a job may only be part of one pipeline), which probably leads to many copy-and-paste jobs. A "Groovy script" build step is a good way to share code between different jobs; you don't need a build pipeline for that.
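The same kind of sharing works for plain shell steps, too: keep the common logic in one script that is versioned with the code and source it from each job. The file name and functions below are hypothetical:

```shell
#!/bin/sh
# Hypothetical shared helper, e.g. kept in the repository as ci/common.sh
# and sourced by the shell step of every job that needs it.
prepare_workspace() {
    rm -rf build
    mkdir build
}

run_unit_tests() {
    echo "running unit tests"    # placeholder for the real test command
}

# --- in each job's "Execute shell" step: ---
# . ci/common.sh
prepare_workspace
run_unit_tests
```

Since every job calls the same functions, a fix to the shared script benefits all jobs at once, with no pipeline involved.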