OpenShift and Pipelines for Microservices

It’s hard to avoid all the buzz around microservices and Docker today. Back in April 2017, Datadog reported that the number of its customers using Docker had risen 40% since 2016, and that about 40% of those using Docker also use some kind of orchestration tool such as Docker Swarm or Kubernetes.

With all the benefits that come with containers and microservices, it’s no surprise either. Containers are much faster to start and stop than virtual machines, they are much easier to deploy and scale, and they are far more portable, while microservices isolate and modularize the components of complex monoliths.

Just from a short glance it’s clear that DevOps and microservices are a match made in heaven. Both aim at maintaining agile, easily manageable software. It was this growing popularity and the obvious common goal between microservices and DevOps that led us to OpenShift.

What Is OpenShift?

OpenShift is a platform from Red Hat built on top of the container orchestration tool Kubernetes, and it comes with a bunch of extra features. A lot of these features are intended to ease the flow of development and encourage good CI/CD practices. It does this by integrating tightly with the CI tool Jenkins and by allowing you to set up triggers for when to roll out a new deployment.

Using OpenShift’s source-to-image tool and buildConfigs, we can build and update images with nothing more than the application source, and then we can use image change triggers to automatically roll out a new deployment with the new image. If we wanted, we could also set up triggers for source code changes, so that when a change is pushed to the source repository it triggers a build using the buildConfig.
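As a minimal sketch, a buildConfig like that can be created straight from a Git repository with the oc tool. The repository URL, builder image, and names below are placeholders for illustration, not part of the original setup.

# Create an S2I buildConfig (plus an image stream) from a builder image and a source repo
oc new-build nodejs~https://github.com/<your_org>/<your_app>.git --name=<name_of_buildConfig>

# Kick off a build by hand and stream its logs
oc start-build <name_of_buildConfig> --follow

# The webhook URLs used for source change triggers show up in the buildConfig's description
oc describe bc/<name_of_buildConfig>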

Even though we can automate both the build and the deployment using triggers in OpenShift, we may want more intermediate steps in our workflow for testing and promoting across environments, which is where Jenkins comes in.

Jenkins has plugins that interface with the OpenShift API and allow us to manage resources in our cluster. Beyond that, OpenShift has its own Jenkins image with the plugins preconfigured to connect to the cluster on startup, so no extra work is needed to begin working with these plugins.

Starting up Jenkins is as simple as executing one command using the oc binary tool.

oc new-app jenkins-ephemeral

We could also go into the OpenShift web console and deploy Jenkins from the catalogue.

This will create all the resources needed to have Jenkins fully up and running inside the OpenShift cluster, such as the deployment, the service, and the route. It uses OpenShift’s Jenkins image, which automatically has OpenShift’s authentication realm hooked up, along with the OpenShift Pipeline plugin and the OpenShift Client plugin, as well as other important plugins such as the Jenkins Pipeline plugin and the Kubernetes plugin.
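As a quick sanity check (run in whichever project you deployed Jenkins into), you can list what the template created:

# List the deployment config, service, and route that back the Jenkins instance
oc get dc,svc,route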

Jenkins Pipelines in OpenShift

So what would a Jenkins pipeline look like inside OpenShift? The big difference between a pipeline in OpenShift and the previous pipelines is the pipeline DSL that we gain thanks to the OpenShift Pipeline plugin.

pipeline {
    agent none
    triggers {
        cron('30 16 * * *')
    }
    stages {
        stage('Start build') {
            agent any
            steps {
              openshiftBuild(bldCfg: '<name_of_buildConfig>', showBuildLogs: 'true')
            }
        }
        stage('Verify Build') {
            agent any
            steps {
              openshiftVerifyBuild(bldCfg: '<name_of_buildConfig>')
            }
        }
        stage('Testing') {
            agent any
            steps {
              /* Steps for testing... */
            }
        }
        stage('Promote Image') {
            agent any
            steps {
              openshiftTag(srcStream: '<image_from_buildConfig>', srcTag: '<tag_from_buildConfig>', destStream: '<image_from_buildConfig>', destTag: 'prod')
            }
        }
    }
}

The first stage starts a build for a new image. The bldCfg key references the name of a buildConfig that exists within the cluster. This will build a new image and push it to an image repository, either on Docker Hub or in an image stream within the cluster.

The second stage verifies that the build completed successfully. This inspects the state of the latest build and makes sure that it completed within a reasonable amount of time. The success or failure of builds before the latest one doesn’t matter.

The promotion stage simply takes the image that was just built and retags it, in this case with the destination tag ‘prod’. Any deployments that have triggers set up to watch for an image change on <image_from_buildConfig>:prod will then automatically roll out the new deployment.
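Wiring up that kind of trigger is a one-liner with the oc tool. In this sketch the project and container names are placeholders for whatever your deployment actually uses.

# Have the prod deployment redeploy whenever <image_from_buildConfig>:prod is updated
oc set triggers dc/<name_of_deploymentConfig> --from-image=<project>/<image_from_buildConfig>:prod -c <container_name> -n prod

# Promoting by hand is just the same retag the openshiftTag step performs
oc tag <image_from_buildConfig>:<tag_from_buildConfig> <image_from_buildConfig>:prod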

Once a developer pushes a code change, Jenkins will be triggered to start a build using source-to-image for a new image. This new image will be output to OpenShift’s built-in image registry. This image change will trigger a deployment for the respective environment (for this example, we’ll call it dev).

While OpenShift outputs the new image and deploys it to the dev environment, Jenkins continues through the rest of the pipeline stages. It will verify that the latest build completed successfully, go through testing, and then promote the image. Updating the <image_from_buildConfig>:prod tag sets off the image change trigger, and the prod environment will roll out the new image.

Expanding on the Pipeline

We may want to add more features to this pipeline to make it more production ready. Suppose we have different projects set up inside our cluster to separate environments from each other. If resources are spread out across multiple project spaces, we can pass in extra arguments to manage resources outside of the namespace Jenkins is running in. We could also pass in a commit hash if we wanted a specific version of the source.

steps {
    openshiftBuild(bldCfg: '<name_of_buildConfig>',
                   showBuildLogs: 'true',
                   commitID: '<commit_hash>',
                   namespace: '<name_of_project_BuildCfg_is_in>')
}

We could also add steps for an approval before the new image gets deployed. Since promotions are handled simply by tagging images, we can add an input step after our tests and before the tagging stage.

stage('Testing') {
    agent any
    steps {
        /* Steps for testing... */
        input 'Deploy image to prod environment?'
    }
}

After the promotion, we could add extra stages for verifying that the deployment worked properly and for scaling the deployment.

stage('Verify Deployment') {
    agent any
    steps {
        openshiftVerifyDeployment (
            depCfg: '<name_of_deploymentConfig>',
            verifyReplicaCount: 'true',
            namespace: 'prod',
            verbose: 'true'
        )
    }
}
stage('Scale Deployment') {
    agent any
    steps {
        openshiftScale (
            depCfg: '<name_of_deploymentConfig>',
            replicaCount: '<num_of_pods>', /* number of pods to load balance over */
            verifyReplicaCount: 'true',
            namespace: 'prod',
            verbose: 'true'
        )
    }
}

There is even more pipeline DSL from the OpenShift Pipeline plugin that isn’t shown here. Jenkins has good documentation showing just what you can do with this plugin.

Final Thoughts

Microservices and DevOps were meant to work together, with both looking to achieve the same thing: faster deployments with less worry about environments. OpenShift is a great platform for coupling these two concepts and providing the resources necessary to take full advantage of both. For anyone who is interested in experiencing the magic that OpenShift has to offer, you can spin up a single-node cluster on your local machine using Minishift.
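If you want to give it a try, a local cluster is only a couple of commands away (this assumes Minishift and its virtualization prerequisites are already installed):

# Start a local single-node OpenShift cluster inside a VM
minishift start

# Put the oc binary bundled with Minishift on your PATH
eval $(minishift oc-env)

# Log in as the default developer user and start creating projects
oc login -u developer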

ABOUT LIATRIO

Liatrio is a collaborative DevOps consulting firm that helps enterprises drive innovation, expedite world-class software delivery and evolve their culture, tools and processes.

We work as “boots on the ground change agents,” helping our clients improve their development practices, get better at delivering value from conception to deployment and react more quickly to market changes. Our tech veterans have the experience needed to execute on DevOps philosophies and Continuous Integration (CI) and Continuous Delivery (CD) core practices, including automation, sharing, and feedback. Through proven tools and processes and a collaborative culture of shared responsibilities, we unite our clients’ technology organizations to help them become high-performing teams. Reach out — let’s start the conversation.

Liatrio is also hiring! If you want to be a part of a team that solves challenges around software delivery automation, deployment pipelines and large-scale transformations, contact us!
