From the Jenkins website: “Jenkins Pipeline (or simply ‘Pipeline’ with a capital ‘P’) is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.”
Let me start by saying that I really like the new Jenkins pipelines. Jenkins has always provided a great build and deploy platform, but the new pipelines offer a first-class approach to storing your Pipeline as Code, as well as giving teams a way to deliver their applications from development to production with excellent visibility and stability.
The Jenkins Pipeline tries to move us from the complex scripted workflows we may have put in place earlier to a potentially simpler and more structured approach. When starting a new project that is unhindered by large release trains or long-running dev cycles, the new declarative pipelines are wonderful. Each team can store a “Jenkinsfile” with their project that defines their product’s path to production. This is exactly what we should be working for as development teams in a DevOps world. Excellent!
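As a concrete illustration, a minimal declarative Jenkinsfile checked into a project repository might look like the following sketch. The stage names, Maven commands, and deploy script are placeholders, not prescriptions:

```groovy
// Jenkinsfile (Declarative Pipeline) -- a minimal sketch; build tool,
// commands, and deploy script are illustrative assumptions.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'   // assumes a Maven project
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B verify'
            }
        }
        stage('Deploy') {
            when { branch 'master' }        // only builds of master deploy
            steps {
                sh './deploy.sh production' // hypothetical deploy script
            }
        }
    }
}
```

Because this file lives next to the application code, the path to production is versioned and reviewed just like any other change.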
These pipelines work great if your dev team’s applications can go to production with each commit to master, without much coordination or synchronization with other delivery teams or IT organizations. If the path to production is under your dev team’s control, you’ve probably got a relatively modern and mature delivery approach. Unfortunately, that use case is still not the norm in the large majority of enterprise software delivery organizations.
Many organizations keep a release schedule and have a gated delivery system that requires sign-off from multiple groups before a team is given the go-ahead to deploy to production. This model usually goes hand in hand with large batch releases, still common in many enterprises. Managing multiple parallel production deployments, or a product whose release involves many manual steps, can be difficult with Jenkinsfile-based pipelines.
How many of your builds actually end up creating an artifact that will go to production? If we’re practicing Continuous Delivery, it could be virtually all successful builds of your master branch. That’s where the rub is. Most enterprises aren’t practicing true Continuous Delivery. As mentioned in the definition at the top of the post, the Jenkins Pipeline is built to enable continuous delivery. With CD, every build should produce a releasable artifact, deployed to a CI environment where it can be promoted as soon as integration tests and security scans pass.
If you’ve got all of this automated and may only need a quick sanity check before deploying your application to production, then your CI/CD pipeline is doing what it was created to do. You don’t have to release at this point, but you can if you want to. Jenkins Pipelines should help you here.
The reality within many delivery organizations is that even though there may be hundreds of builds per day across multiple products in active development, the odds are the majority of these builds don’t produce an artifact that will go to production. Most of the pipeline builds end up skipping a number of their steps because they’re simply not built with the intention of every build being released.
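In practice, teams often model this by gating the release stages so that most builds skip them. One way to sketch that in a declarative Jenkinsfile is with `when` conditions; the `RELEASE_BUILD` parameter and the stage contents below are hypothetical:

```groovy
// Sketch: most builds run only the early stages; the publish and
// staging stages are skipped unless this build is marked for release.
pipeline {
    agent any
    parameters {
        booleanParam(name: 'RELEASE_BUILD', defaultValue: false,
                     description: 'Promote this build toward production?')
    }
    stages {
        stage('Build & Unit Test') {
            steps { sh 'make test' }            // illustrative build command
        }
        stage('Publish Artifact') {
            when {
                allOf {
                    branch 'master'
                    expression { params.RELEASE_BUILD }
                }
            }
            steps { sh 'make publish' }
        }
        stage('Deploy to Staging') {
            when { expression { params.RELEASE_BUILD } }
            steps { sh './deploy.sh staging' }  // hypothetical deploy script
        }
    }
}
```

The skipped stages still show up in the stage view, which is itself a visual reminder of how few builds actually head toward production.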
Enterprises have complex environments, multiple dependencies, several internal and external customers, and administrative overhead built up over years of delivering products. A Pipeline defined for a single application going to production at a time is the simple use case. The average enterprise system may require a large number of applications deployed and tested in concert over the course of several weeks.
Can you imagine the need to deploy 50 or 100 applications one at a time? That is a reality for many large organizations. As these enterprises migrate to a more modern tech stack, they are faced with adopting a hybrid delivery model. In addition to the monolithic legacy applications they deploy regularly, they have an increasing number of new microservices that may need to be delivered in parallel in each environment throughout the delivery process.
This scenario is common in a large organization where many teams are building applications dependent on one another in a production environment. It takes a concerted effort between teams to understand what versions exist and which may be compatible with their features.

When these administrative processes are in place to manage delivery of multiple products, the use of these new pipelines may become a reminder that you’re still working within a legacy delivery process. The local optimization of creating a “siloed” delivery pipeline may have become another bottleneck. When there are hundreds of builds a day that don’t go to production, you will soon be looking at how you can adapt the pipeline to fit your delivery process. Modernizing our tech stack and delivery approach while coordinating legacy releases at enterprise scale is not easy.
By using the new Pipeline in this type of mixed delivery environment, a production deployment can become a long, serialized process. Teams may need to coordinate dependency deployment or manual validations in multiple environments along the way. Keeping track of which pipelines are the ones actually going to production may require more coordination than your release engineers can easily manage. If your dev team is not in charge of delivering your product to production independently, you may have inadvertently created extra process waste.
That said… Do it anyway!
There may be some complexity involved in making the new pipelines work for your use case, but you can still benefit from the progress made in Jenkins 2.0. The first thing to remember is that you can embed custom scripts in a Declarative Pipeline (via script blocks) or use the Scripted Pipeline from the start. These options let you customize any step in the pipeline. There is also a great feature called Shared Libraries, which lets common functions be shared across multiple pipelines. This enables your teams to use standard steps without storing the intricate details in their own repositories.
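A shared library can be sketched like this. The library name (`my-shared-library`) and the `deployApp` step are hypothetical, and the library must first be configured in Jenkins under Global Pipeline Libraries:

```groovy
// --- vars/deployApp.groovy, in the shared library repository ---
// A custom step whose intricate deployment logic is maintained centrally,
// not copied into every application repo. Name and API are illustrative.
def call(Map config = [:]) {
    sh "./scripts/deploy.sh ${config.env ?: 'dev'} ${config.version}"
}

// --- Jenkinsfile, in an application repository consuming the library ---
@Library('my-shared-library') _
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // the team calls the standard step; the detail lives in the library
                deployApp(env: 'staging', version: env.BUILD_NUMBER)
            }
        }
    }
}
```

When the release process changes, the library is updated once and every consuming pipeline picks it up, which matters a great deal at the scale described above.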
Along with the scripting and shared-library functionality, several other plugins have been developed as components of the Pipeline plugin suite to help teams manage delivery in an enterprise environment. The Pipeline: Multibranch plugin automatically recognizes newly created branches and lets your applications build outside of the standard pipeline flow. By adding the GitHub Branch Source or Bitbucket Branch Source plugin, you get the wonderful feature of automatic pull request builds. This is a must-have for rapid feedback in any pipeline; having the build verified before you merge your changes can really help most dev teams with quality.
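With the branch-source plugins in place, a single Jenkinsfile can react differently to pull requests and mainline builds using the built-in `changeRequest()` and `branch` conditions; the commands below are illustrative:

```groovy
// Sketch: one Jenkinsfile serving both PR verification and mainline builds
// in a multibranch project. changeRequest() is true for pull request builds.
pipeline {
    agent any
    stages {
        stage('PR Verification') {
            when { changeRequest() }
            steps {
                sh 'mvn -B verify'   // fast feedback before merge; command is illustrative
            }
        }
        stage('Mainline Build') {
            when { branch 'master' }
            steps {
                sh 'mvn -B deploy'   // publish from master only
            }
        }
    }
}
```

Each new branch or pull request automatically gets its own job, so verification happens without anyone hand-configuring builds.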
Take the new pipelines for a spin and decide for yourself. You may need a period of migration, as well as a change in how your teams manage releases. Ultimately, moving in this direction is the right way to go, but don’t expect to adopt the new standard before understanding how it impacts your existing pipeline. The benefits will outweigh the cons, but you may need to uplift your release processes to keep up with decentralized management of the pipeline. That’s where the really hard work is, after all.
At Liatrio, we focus on the enterprise and understand the complexities involved in delivery at scale. We help facilitate the move to a rapid delivery model by assessing existing pipeline strategies and tailoring the path forward for each product and team within the delivery organization. While helping your organization understand the impacts on the overall delivery system, we work with you to target iterative improvements throughout the enterprise.