Blue Ocean Pipeline Automation

Blue Ocean 1.0.0 was released in early April 2017, and it looks great. There has been some criticism from the community regarding the system, but we think Blue Ocean has come a long way and is continuing to grow. Admittedly, adopting Blue Ocean will not come without some growing pains, but the overall benefits will help make your software delivery better. This blog will cover a bit of history, some of the main benefits of Blue Ocean, and some examples of software and infrastructure as code.

Pipelines of Old

We already have a pipeline; what’s wrong with it? Implementation and visualization could be better.

The implementation of pipelines, from their beginnings through today, varies from project to project. Historically, pipelines were created manually by an admin, and these jobs were tedious, prone to error, and, without proper backups, subject to loss if the server went down. If there are hundreds of jobs and something changes project-wide, the maintainer must verify that the change was made in every job. Implementation would be much simpler with a streamlined way to keep the configuration with the project files. This has been partially remedied with Groovy scripting and Jenkinsfile support in the Pipeline plugin, but it can always be better.

Visualizing a pipeline in Jenkins before Blue Ocean was not aesthetically pleasing, and manual setup of the views was required. An admin could use the Build Pipeline Plugin to present a visual representation of the pipeline, but this took extra time. Developers might need to dig deep into folders to find the job they needed to see, only to be met with the wall of text that the console outputs. Visualization could be improved by automatically generating views for users and helping them navigate messy log files.

Benefits of Blue Ocean

Blue Ocean is a project designed to enhance the user’s experience with Jenkins. Some of the main benefits are:

  • Jenkinsfile and Pipeline Integration
  • Visualization of Pipelines
  • Focus on what the current user is interested in
  • Logs that are easy to navigate

Jenkinsfile and Pipeline Integration

It is true that standard Jenkins can use pipelines and a Jenkinsfile, but the native support built into Blue Ocean rises above. A Jenkinsfile is a configuration file that defines the pipeline for a particular project. This file is designed to live alongside your code in a repo, creating accountability and reliability for your configuration through source control. This close coupling of Jenkinsfile and repo allowed Blue Ocean to implement an easy way to scan for them via the “New Pipeline” option.

This option brings up the Create Pipeline view and includes step-by-step instructions for adding a new pipeline to the list. If you use GitHub, you can choose the repository you want to use or have it scan the entire organization looking for Jenkinsfiles.

Otherwise, you can set up a URL and credentials to access a repo elsewhere. Once completed, the application will inform you that the new pipeline has been created and is ready to view. An example Jenkinsfile can be found at the bottom of this document.

Blue Ocean and Jenkinsfile provide great feedback for pipeline configuration. Getting an error an hour into a build because of a bad value or a misspelling is a thing of the past. A declarative Jenkinsfile is linted at the start of a job and will fail if something is incorrectly added. Linting before committing is also now possible using the Jenkins CLI. Both of these features add to the benefits of pipeline as code and make building software less painful.
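As a sketch, here are two ways to lint a declarative Jenkinsfile from your workstation against a running Jenkins instance; the URL, port, user, and token below are placeholders for your own setup:

```shell
# Validate a local Jenkinsfile via the Declarative Pipeline linter's
# HTTP endpoint (USER and API_TOKEN are placeholders).
JENKINS_URL=http://localhost:8080
curl --user "USER:API_TOKEN" -X POST \
  -F "jenkinsfile=<Jenkinsfile" \
  "$JENKINS_URL/pipeline-model-converter/validate"

# Alternatively, over the Jenkins CLI's SSH interface:
ssh -p 8022 localhost declarative-linter < Jenkinsfile
```

Either command reports syntax problems before you ever commit, which is what makes linting-before-committing practical.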

Visualization of Pipelines and Projects

In 2017, UI/UX design isn’t just for consumers anymore. Out of the box, Blue Ocean has a more modern, streamlined appearance that makes it easy to work in. Aside from aesthetics, the views are personalized to show your favorite projects, the projects you work on, and all of their statuses, so the user can focus on the items important to them. Favorites and active projects are listed on top, with color coding to draw quick attention to what is broken. Easily select favorites by clicking the star on the right-hand side of the page, and, in good Jenkins fashion, the weather report is still available with newer icons.

Two Views of the Same Instance by Different Users

Blue Ocean also includes a new pipeline view that shows the steps the project takes through the pipeline. The main sections of the pipeline are broken into the stages in the project’s Jenkinsfile. If the stage has any parallel steps then those are shown like a stack. If a stage fails then it is clearly outlined by the view and you can dig further into the issue by selecting the failing step to see the log.

Easy to Read Logs

Speaking of logs, most people hate having to dig through huge log files to determine what went wrong with a build or test. Blue Ocean breaks up the console output into the same sections that the pipeline view shows. On top of that, those sections may have multiple steps, and each step has its own break in the console output.

Basic Examples

Now you have seen some of the features that Blue Ocean offers, but how about actually using them? Moving to Jenkinsfile and pipeline as code takes some serious thought and effort, but it is worth it in the long run. It is worth noting that there are two types of Jenkinsfile, Scripted and Declarative; these examples are both Declarative.

Here are two basic examples to look at:

Node.js example

Node.js is a popular JavaScript runtime that is used in projects all over. If you are here after viewing the Linux Foundation’s Intro to Continuous Delivery course, then you are already familiar with the project Dromedary. If you aren’t, Dromedary is a demo application by Stelligent (Thanks!). You are welcome to spin up your own instance of Jenkins with all of the Blue Ocean plugins to try this.

Let’s dive into the Jenkinsfile.

  1. agent
    1. The agent defines the workspace in which the pipeline runs.
    2. This build uses a Dockerfile placed at the top level of the software’s repository. The Dockerfile takes the Node.js image and creates a new image that includes Gulp.
  2. Stages
    1. We created an Initialize stage to install the node_modules necessary for testing.
    2. Unit testing using Gulp.
    3. Convergence testing (no real convergence testing here; this stage was created for the sake of example).
      1. Parallelized steps all run within a single stage.
      2. You can take these parallelized steps and spread them out over different agents to save on time and configuration.
    4. Build using Gulp. In your own software you can do whatever you see fit.
    5. Deploy
      1. Deploy the application with your preferred option.
      2. Some options are Heroku, AWS, an artifact repo, etc.
pipeline {
  agent {
    dockerfile true
  }

  stages {
    stage('Initialize') {
      steps {
        sh 'npm install'
      }
    }
    stage('Unit Test') {
      steps {
        sh 'gulp test'
      }
    }
    stage('Convergence Testing') {
      steps {
        parallel (
          firefox: {
            echo "Firefox Testing"
          },
          Chrome: {
            echo "Chrome Testing"
          },
          IE: {
            echo "IE Testing"
          },
          Mobile: {
            echo "Mobile Testing"
          }
        )
      }
    }
    stage('Build') {
      steps {
        sh 'gulp package-app'
      }
    }
    stage('Deploy') {
      steps {
        echo 'Deploying...'
      }
    }
  }
}
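The `dockerfile true` agent above expects a Dockerfile at the root of the repo. As a minimal sketch of what that file might contain (the Node version and the global Gulp install are assumptions for illustration, not taken from the actual project):

```dockerfile
# Hypothetical build-agent Dockerfile: start from the official Node.js
# image and add the Gulp CLI so the pipeline's gulp steps can run.
FROM node:6
RUN npm install -g gulp-cli
```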

Chef Infrastructure Example

Liatrio’s infrastructure code also goes through CI. Historically we have used Chef, but this methodology can be used with whatever infrastructure tool you prefer, assuming testing is possible.

This example is straightforward. Let’s take a look:

  1. The agent for this pipeline is any; in this case it will use the master agent.
  2. The stages are similar to the previous example.
    1. The Setup stage prints tool versions for log verification, if needed, and fetches dependencies with Berkshelf.
    2. Acceptance testing runs in parallel so it is easy to see what failed, if anything did.
    3. Test Kitchen is a stage that purposely fails for the sake of the description. It runs convergence (integration) testing of the cookbook.
  3. Post – runs after the build is complete.
    1. Success – will print in this example, but you can send Slack messages, emails, etc. on a successful build.
    2. Failure – will print on a failed build, but again you can send emails, Slack messages, etc.
pipeline {
  agent any

  stages {
    stage('Setup') {
      steps {
        sh 'echo "Versions: "'
        sh 'chef --version'
        sh 'rubocop --version'
        sh 'foodcritic --version'
        sh 'echo "Updating Berkshelf: "'
        sh 'if [ ! -f Berksfile.lock ]; then berks install; else berks update; fi;'
      }
    }
    stage('Acceptance Testing') {
      steps {
        parallel (
          rubocop: {
            sh 'echo "Starting chefstyle (rubocop): "'
            sh 'rubocop --color'
          },
          foodcritic: {
            sh 'echo "Starting foodcritic: "'
            sh 'foodcritic .'
          },
          ChefSpec: {
            sh 'echo Starting ChefSpec: '
            sh 'chef exec rspec'
          }
        )
      }
    }
    stage('Test Kitchen') {
      steps {
        sh 'if [ ! -f Berksfile.lock ]; then berks install; else berks update; fi; kitchen test -d always --color'
      }
    }
  }
  post {
    success {
      echo "Knife upload here"
    }
    failure {
      echo "The build failed"
    }
  }
}
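The `kitchen test` step above reads its configuration from a .kitchen.yml in the cookbook. A minimal sketch of such a file (the driver, platform, and cookbook name are assumptions for illustration) might look like:

```yaml
# Hypothetical .kitchen.yml: one platform, one suite, Vagrant driver.
driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: centos-7

suites:
  - name: default
    run_list:
      - recipe[my_cookbook::default]   # my_cookbook is a placeholder
```

Test Kitchen converges each platform/suite pair in a fresh VM and runs the suite’s tests against it, which is why it serves as the convergence (integration) stage here.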

Final Thought

This is a new take on a familiar tool, and there are going to be growing pains. We think it’s worth it. The above examples are trivial at best, but you can see that the options and capabilities are present. Liatrio plans on putting out more in-depth blogs on many tools, including Jenkins, so check back for more in the future!

If you have any comments or questions, reach out to us @liatrio

Liatrio is a DevOps consulting firm focusing on helping enterprises get better at software delivery using DevOps & Rapid Release philosophies. We work as “boots on the ground” change agents, helping our clients re-imagine their daily work and get better at delivery one day at a time. Liatrio is also hiring! If you enjoy being part of a team that is solving challenges around software delivery automation, deployment pipelines, and large-scale transformations, reach out to us via the contact page on our website.
