AWS CodePipeline Use Case

In 2015, AWS released CodePipeline, a tool for automating software releases. CodePipeline allows you to configure and visualize entire software release processes from commit to deployment. It brings together several AWS developer tools, like CodeCommit, CodeBuild, and CodeDeploy, and supports integrations with a variety of third-party tools and services like GitHub, Jenkins, and Runscope.

CodePipeline has a modular interface for defining delivery processes. Pipelines are broken into “stages,” with each stage having its own input and output artifacts. This allows you to use the tools that work best for your situation to organize and visualize a customized delivery process.

In this AWS CodePipeline use case, we’ll review an example pipeline for releasing a Java application to a Tomcat server, then dive deeper into the tools and services that make up this pipeline.

AWS CodePipeline Use Case: Tomcat Release Pipeline

I’ve built a sample pipeline using CloudFormation, an orchestration tool for AWS resources. If you’re not familiar with CloudFormation, just follow the instructions in the repo’s README file. Be aware that it will spin up some AWS resources and continue to incur charges if you don’t delete the stack when you’re done looking at it.

This pipeline is designed to deploy a Java application to a Tomcat server. The visualization to the right shows it being used for spring-petclinic, a sample application that uses the Spring framework. The pipeline flows like this:

  • The Source stage detects updates to a GitHub repo and sends new revisions through the pipeline.
  • The Build stage launches a container to run a Maven goal defined in the repository. If the goal succeeds, the build artifact is uploaded to an S3 bucket.
  • The Approval stage blocks the revision from continuing through the pipeline until a manual approval is submitted.
  • The Deploy stage deploys the build artifact to an existing EC2 instance using deploy scripts in the repository.

This release pipeline is very basic, but it highlights a lot of the building blocks that Amazon’s Code* services provide. A larger application might call for a more sophisticated pipeline that breaks down the Build stage into several different building and testing phases, or deploys to a fleet of EC2 instances in an Auto Scaling group.
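The stages above can also be inspected and driven from the AWS CLI, which exposes the same information as the console. A minimal sketch, assuming configured AWS credentials and a pipeline named petclinic-pipeline (the pipeline, stage, and action names here are hypothetical):

```shell
# List all pipelines in the current account/region
aws codepipeline list-pipelines

# Show the current state of each stage (Source, Build, Approval, Deploy)
aws codepipeline get-pipeline-state --name petclinic-pipeline

# Approve a revision waiting at the manual approval action; the token
# comes from the get-pipeline-state output (values here are placeholders)
aws codepipeline put-approval-result \
    --pipeline-name petclinic-pipeline \
    --stage-name Approval \
    --action-name ManualApproval \
    --result summary="Looks good",status=Approved \
    --token <token-from-get-pipeline-state>
```

The same calls are available through AWS’s SDKs, which makes it straightforward to script approvals or build pipeline dashboards.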

AWS CodePipeline Use Case: Building Code

This pipeline uses AWS’s CodeBuild service to produce build artifacts. One of the most attractive features of CodeBuild is that it doesn’t require a dedicated build server – builds are run in containers that exist only for the duration of the build. This allows the CodeBuild service to scale to your organization’s needs while only billing for compute time by the minute. Build containers can be spawned from one of several AWS-provided Docker images (for Node, Java, Python, and others), or from a user-provided image. The build procedure is defined in a file called buildspec.yml that lives in the application repo. Here is the buildspec for this petclinic pipeline:

version: 0.1
phases:
  build:
    commands:
      - echo Build started on `date`
      - mvn test
  post_build:
    commands:
      - echo Build completed on `date`
      - mvn package
artifacts:
  files:
    - target/petclinic.war
    - appspec.yml
    - codedeploy/*

Build phases are composed of shell commands, and the resulting artifact files are zipped up and uploaded to S3. In this example, the CodeDeploy scripts and configuration are also packaged so they are available later in the pipeline.
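A CodeBuild project wired into a pipeline can also be run on its own, which is handy when debugging a buildspec. A hedged sketch, assuming a project named petclinic-build (the project name is hypothetical):

```shell
# Kick off a one-off build outside the pipeline
aws codebuild start-build --project-name petclinic-build

# List recent build IDs for the project, then fetch their phase-by-phase
# status and log locations (build ID below is a placeholder)
aws codebuild list-builds-for-project --project-name petclinic-build
aws codebuild batch-get-builds --ids <build-id-from-list-builds>
```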

AWS CodePipeline Use Case: Deploying Code

Artifacts in this pipeline are deployed to a Tomcat server via AWS’s CodeDeploy service. CodeDeploy performs deployments to both EC2 and on-premises instances through a codedeploy-agent service that runs on the target platform. Groups of deployment instances can be formed with Deployment Groups, and CodeDeploy even supports deployments to Auto Scaling groups (though this brings up some tricky edge cases when a deployment occurs during a scale-up event). CodeDeploy also offers a Blue/Green deployment strategy for seamless releases that don’t interrupt service.

As with CodeBuild, CodeDeploy’s procedures and configuration are defined in the application repo. Here’s what the config file appspec.yml looks like for this project:

version: 0.0
os: linux
files:
  - source: /target/petclinic.war
    destination: /usr/share/tomcat7/webapps/
hooks:
  AfterInstall:
    - location: codedeploy/restartTomcat.sh
      timeout: 300
      runas: root

This file allows you to configure where on the filesystem the artifact should be deployed, as well as hooks for scripts (usually located in the repo) to be triggered at various lifecycle events. In this simple example, a script is triggered to restart Tomcat after the artifact has been installed on the server. Deployments are broken down into several other lifecycle events as well, giving you full control over how an application is launched for both In-Place and Blue/Green deployments.
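The restart hook referenced in the appspec isn’t shown in the post; a minimal sketch of what a codedeploy/restartTomcat.sh could look like, assuming Tomcat 7 is installed as an init.d-managed service (matching the tomcat7 paths in the appspec above):

```shell
#!/bin/bash
# Hypothetical AfterInstall hook: restart Tomcat so the freshly copied
# petclinic.war is picked up. Assumes an init.d-managed tomcat7 service.
set -e

service tomcat7 stop || true   # don't fail the deployment if Tomcat wasn't running
service tomcat7 start
```

Because the appspec runs this hook as root with a 300-second timeout, the script itself stays simple; CodeDeploy marks the lifecycle event failed if it exits non-zero or exceeds the timeout.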

AWS CodePipeline Use Case: Thoughts on What I Like

AWS Ecosystem

Having a delivery stack completely within the AWS ecosystem makes many parts of the release process incredibly smooth. For example:

  • Deploying to EC2 Auto Scaling instances without much pain
  • Using CloudFormation to orchestrate entire pipelines as code
  • Using AWS’s APIs to control and interact with Pipeline services
  • Securing pipeline services by locking down their IAM roles
  • Monitoring, logging, and alerting capabilities through CloudTrail and CloudWatch

Integrations

CodePipeline strikes a great balance between keeping most processes within AWS’s control while still allowing for third-party integrations when necessary. It’s a nice middle ground between a tool like Jenkins, which gives the user absolute control of the CI process, and a service like TravisCI, which controls most of the CI logic behind the scenes. CodePipeline is both reliable and flexible enough to adapt to your needs.

Visualizations

Being able to visualize pipelines is vital to understanding and improving a delivery process. CodePipeline does a great job of laying out pipelines in an interactive UI that updates in real time as revisions pass through them.

Coupling Delivery Configuration with the Application

Having delivery logic live inside the application repo aligns very nicely with the DevOps approach to delivery. It brings developers closer to the delivery process and helps close the gap between developers and release engineers. Also, versioning release logic alongside the application just makes sense.

AWS CodePipeline Use Case: Thoughts on What I Don’t Like

The Learning Curve

As someone new to the AWS ecosystem, I had a tough time getting started with CodePipeline. The interactive CodeDeploy tutorial that spins up three EC2 instances is very flashy and impressive, but at the end of it I didn’t feel like I had learned anything concrete. All in all, I spent about two weeks learning about the Code* resources and creating the Tomcat pipeline for this blog. A huge portion of that time was spent sifting through AWS’s documentation trying to understand how CodePipeline and the Code* services work. While the documentation is comprehensive, it takes a lot of time to digest. I also had to spend a couple of days learning how IAM roles work.

Lack of Pipeline-Level Organization

In a large organization, CI systems can grow very large and complex. Tools like Jenkins allow you to manage this complexity by organizing pipeline jobs in a hierarchy. If a large organization were to adopt CodePipeline, it would end up with a huge flat list of pipelines that would likely be unmanageable.

AWS CodePipeline Use Case Summary

Here, we looked at an AWS CodePipeline use case for a deployment pipeline and reviewed how easy it is to get deployments done using the concept of pipelines. We also summarized some of the challenges, such as the learning curve and the organizational standards needed to keep CI systems from growing unmanageable and too complex.

If you have any comments or questions, reach out to us @liatrio.

ABOUT LIATRIO

Liatrio is an Enterprise Delivery Acceleration consulting firm that helps enterprises transform into world-class technology delivery organizations through successful adoption of DevOps and Lean software delivery practices. We work as “boots on the ground change agents,” uniting enterprise technology organizations by uplifting culture, tools, and processes.

Want to learn more? Let’s have a conversation.

Liatrio is also hiring! To learn more, check out recent posts on hiring tips, our interview process, our hiring process, and hiring success the Liatrio Way. If you want to be a part of a team that solves challenges around software delivery automation, deployment pipelines, and large-scale transformations, reach out!
