Jenkins Automation

A little background

Jenkins in the enterprise. In traditional waterfall IT organizations, Jenkins existed in a precarious position. Jenkins' main focus is builds (everything is a build job), but since it's a service that needs to be hosted somewhere, it falls somewhere between developers and operations. Developers needed integration environments to run their applications and would set up Jenkins to accomplish this. They would use Jenkins to build on code check-in and possibly deploy to shared dev servers. If operations wasn't involved in setting Jenkins up, they wouldn't trust the artifacts that came out of it, which often led operations to do their own builds of the developers' code once it was ready to be released. Jenkins wasn't an operations priority, and therefore wasn't bullet-proof. Manual setup was easy since it's just a jar that can run anywhere. Most jobs on these Jenkins instances were just Maven build jobs. The artifacts would be installed into Jenkins's .m2 directory and consumed by other Jenkins builds of products that depended on them.

Deleting jobs is tedious in Jenkins: unless one is using scripting, jobs must be deleted one at a time. This is why Jenkins development was not a common thing. Since everything was manual, if a setting was changed that would break things (for example, using a different version of Maven), that user would see the error and hopefully realize what was changed. Plugins such as JobConfigHistory helped debug breaking changes when they were introduced. For every need that arose, there appeared to be a Jenkins plugin to accomplish the task. More and more often, though, a plugin would cause issues on Jenkins due to dependency versions, or it was simply not well written. This introduced the need for a sandbox Jenkins where new changes could be verified before reaching production.

Automating Jenkins

Fast forward a couple of years, and Jenkins is no longer just doing builds. It is normal for it to be deploying to higher environments and, more and more often, to production. Almost suddenly, Jenkins moves from a developer or release-manager problem to an operations problem. Jenkins is no longer sitting on an old PC under the tech lead's desk; it's in a locked server rack in a colocation data center. Since Jenkins is now an operations tool, it's required to be up 99% of the time. With this additional responsibility, Jenkins needs to be as locked down as other application servers, since one person's mistake could make it unusable.

At this point Jenkins is running not just build jobs but deployment jobs as well. Smoke-test suites are now needed to validate that deployments succeeded and that applications started up correctly. In an enterprise with even 12 environments, 25 products can quickly turn into 700 jobs. Managing these jobs manually is next to impossible. Enter the Jenkins Job DSL, which makes it possible to define jobs in code.

mavenJob("my-build-job") {
    scm {
        git("https://github.com/example/my-app.git") // placeholder repository URL
    }
    goals("clean deploy")
}

job("my-deploy-job") {
    steps {
        shell("scp target/app.war deploy@serverSomewhere:/opt/apps/") // serverSomewhere is a placeholder
    }
}

job("my-smokeTestJob") {
    scm {
        git("https://github.com/example/smoke-tests.git") // placeholder repository URL
    }
}
Jobs can now be added via code, or by creating a config to be parsed by code; adding one environment name to a list can create a new deploy job for every product. This is a double-edged sword: as easy as it is to spin up hundreds of jobs, it is just as easy to make a mistake that impacts, or even deletes, hundreds of jobs. The job-creating scripts and configurations become a product of their own, since they are just as delicate as the products they build and manage. Once this workflow is set up, it works quite well and has huge benefits over manually created jobs, but jobs generated by code are no different from jobs set up manually. We link jobs together and create other jobs to promote versions from one job to another, but they are still just Jenkins jobs, originally designed for building Maven projects ten years ago.
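The "one list entry creates many jobs" idea can be sketched in Job DSL like this (the environment and product names are made up for illustration):

```groovy
// Hypothetical lists; in practice these might come from a parsed config file.
def environments = ['dev', 'qa', 'staging', 'prod']
def products = ['inventory-service', 'billing-service']

// One deploy job per product per environment; adding a name to either list
// generates a whole new row or column of jobs on the next seed-job run.
environments.each { env ->
    products.each { product ->
        job("${product}-deploy-to-${env}") {
            steps {
                shell("./deploy.sh ${product} ${env}") // deploy.sh is a placeholder script
            }
        }
    }
}
```

This is exactly why the cut goes both ways: removing a name from a list can delete every job it generated.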

Moving Forward

Jenkins Pipeline is a plugin that moves away from individual jobs. Instead of jobs, you declare a pipeline in code, broken into stages. Stages provide functionality similar to jobs but are much more flexible. Each job is limited to its own workspace, whereas stages can share a workspace or use a new one.

node {
    stage "Checkout"
    checkout([$class: 'GitSCM', branches: [[name: '*/jenkins2']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[url: '']]])
    stage "Build"
    def mvnHome = tool 'maven-3.3.9'
    sh "${mvnHome}/bin/mvn -B deploy"
    stage "Deploy to server"
    // deployment step goes here
}

Pipelines also lay out strategies to replicate Jenkins jobs in lower sandbox environments:

1. Run the same jobs locally as in sandbox and production.
2. Use environment variables to determine the environment and swap out functionality, URLs, etc.

Sandbox Jenkins

Jenkins jobs, plugins, Groovy versions, etc. need to be tested, and not just locally. It's important to have as close to a copy of production as possible in another environment (or two) to verify changes. I have 700 jobs on my local Vagrant Jenkins to eyeball configurations, test plugins, and so on, but I disable them all by default since enabling them would be a giant resource hog. With a prod-like environment, you could actually have all of your jobs run.

If this were AWS or Facebook, I'm sure there would be a full prod-like environment to verify that anything can be built and deployed to destroyable containers. At most companies this is unfortunately not the case, especially when it comes to licensed software. Even so, there's a ton of value in running as much as possible in non-prod environments. This goes back to how one change can render a Jenkins server ineffective.

Like any piece of software, Jenkins is vulnerable to dependency hell. Plugins often depend on other plugins. Upgrading one plugin can require another plugin to be updated, which in turn can cause a third plugin to stop working. In recent years this has become less of a problem, yet going from Jenkins 1 to Jenkins 2 was no easy task. Things get especially messy if the Groovy DSL syntax changes from one version of a plugin to another. For example, the HipChat DSL changed slightly from v1 to v2; when running the Job DSL with the updated plugin, the DSL broke and the code had to be updated. This can all be verified on a sandbox before updating production, but it requires the Job DSL being run in the sandbox environment.

Environment Configurations in Jobs

As easy as it is to hard-code production URLs in code, this becomes a problem as soon as there is a second Jenkins instance in a sandbox environment, or even locally. We have found that the best way to encourage local development is to make it as easy as possible to develop locally. Following the idea that anyone should be able to clone the pipeline repository and run it locally, the pipeline's default configurations should be set for local development. Running `mvn deploy` or kicking off a manifest deployment should not deploy to the production artifact repository, or anything along those lines.
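One simple approach, sketched below with made-up names (the JENKINS_ENV variable and the repository URLs are illustrative, not real endpoints), is to default every setting to a local value and let an environment variable override it:

```groovy
// Hypothetical sketch: default all configuration to local development values.
def jenkinsEnv = System.getenv('JENKINS_ENV') ?: 'local'

def artifactRepos = [
        local  : 'http://localhost:8081/repository/snapshots',
        sandbox: 'http://sandbox-nexus.internal/repository/snapshots',
        prod   : 'http://nexus.internal/repository/releases',
]

// Fall back to the local repository when the environment is unknown,
// so a freshly cloned pipeline can never deploy to production by accident.
def artifactRepoUrl = artifactRepos[jenkinsEnv] ?: artifactRepos['local']
```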

Above is an example of how to set defaults for configurations in Groovy. This allows the configurations to be manually set in the seed job for the sandbox and production Jenkins instances.

Docker Docker Docker

Virtual machines were effective because they allowed more flexibility than bare metal, but at a cost: they are big and heavy, even with modern networking speeds. VMs tend to be static in enterprises, yet a few widely used languages require the ability to build with different versions of the language runtime. Take NodeJS, for example. Some apps still need to be built with Node 0.11, some with Node 4, and some of the newest need Node 7. To get around the inability to create and destroy VMs with different Node versions, NVM allows different Node versions to coexist on the same server. This works, but it's not easy to set up and adds another dependency.

Vagrant finally provided an effective way to reproduce production-like environments locally, but not without problems. We've found spinning up a Jenkins node locally on Vagrant to be quite delicate: it required everyone to have the same versions of Vagrant, VirtualBox, Chef, and Berkshelf for our Vagrant boxes to work correctly. Between that and how much of a resource hog running multiple VMs turned out to be, it wasn't worth it for many developers to run Jenkins locally. In comes Docker. Docker is so fast (a benefit of being lightweight) that we can create a new container for every build. The container can be as simple as needed (just Java and Maven) or more complex, with dependencies for headless browser testing.
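With the Docker Pipeline plugin, a fresh container per build is roughly a one-liner in a pipeline script; a sketch assuming stock maven and node images from Docker Hub (the tags are examples):

```groovy
node {
    // Build inside a throwaway Maven container that is discarded afterwards.
    docker.image('maven:3.3.9-jdk-8').inside {
        sh 'mvn -B clean verify'
    }
    // A different app on the same agent can use a different Node version,
    // with no NVM and no per-agent toolchain setup.
    docker.image('node:4').inside {
        sh 'npm install && npm test'
    }
}
```

This also solves the Node-version problem above: each build simply pulls the image with the runtime it needs.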


In this post, we talked about how Jenkins is key to an enterprise continuous integration & delivery strategy and how Jenkins has evolved over the last few years. Jenkins 2.0 and the concept of pipelines will really change how builds and deployments are done, and what jobs even mean. We also touched on Docker and how to take advantage of it.

If you have any comments or questions, reach out to us @liatrio

Liatrio is a DevOps consulting firm focusing on helping enterprises get better at software delivery using DevOps and rapid-release philosophies. We work as "boots on the ground" change agents, helping our clients re-imagine their daily work and get better at delivery one day at a time. Liatrio is also hiring! If you enjoy being part of a team that solves challenges around software delivery automation, deployment pipelines, and large-scale transformations, reach out to us via the contact page on our website.
