AWS CodePipeline

AWS Delivery Pipeline Use Case


In 2015, AWS released CodePipeline, a tool for automating software releases. CodePipeline allows you to configure and visualize entire software release processes from commit to deployment. It brings together several AWS developer tools like CodeCommit, CodeBuild, and CodeDeploy, and also supports integrations with a variety of third-party tools and services like GitHub, Jenkins, and Runscope.

CodePipeline has a very modular interface for defining delivery processes. Pipelines are broken down into “stages,” with each stage having its own input and output artifacts. This allows you to use the tools that work best for your situation to organize and visualize a customized delivery process.

In this post, we’ll go over an example pipeline for releasing a Java application to a Tomcat server, then dive a little deeper into the tools and services that make up this pipeline.

Tomcat Release Pipeline

I’ve put together a sample pipeline using CloudFormation, an orchestration tool for AWS resources. If you’re not familiar with CloudFormation, just follow the instructions in the repo’s README file. Be aware that the template spins up AWS resources that will continue to incur charges until you delete the stack, so tear it down when you’re done looking at it.

This pipeline is designed to deploy a Java application to a Tomcat server. The visualization to the right shows it being used for spring-petclinic, a sample application that uses the Spring framework. The pipeline flows like this:

  • The Source stage detects updates to a GitHub repo and sends new revisions through the pipeline.
  • The Build stage launches a container to run a Maven goal defined in the repository. If the goal succeeds, the build artifact is uploaded to an S3 bucket.
  • The Approval stage blocks the revision from continuing through the pipeline until a manual approval is submitted.
  • The Deploy stage deploys the build artifact to an existing EC2 instance using deploy scripts from the repository.
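The four-stage flow above can be sketched as the kind of "stages" list that the CodePipeline API (e.g. boto3's create_pipeline) expects. This is a rough, hypothetical sketch of the structure, not the actual pipeline definition from the CloudFormation template; the action names are placeholders.

```python
# Hypothetical sketch of the four stages above, shaped like the "stages"
# list in a CodePipeline definition. Action names are illustrative only.
stages = [
    {
        "name": "Source",
        "actions": [{
            "name": "GitHubSource",
            "actionTypeId": {"category": "Source", "owner": "ThirdParty",
                             "provider": "GitHub", "version": "1"},
            "outputArtifacts": [{"name": "SourceOutput"}],
        }],
    },
    {
        "name": "Build",
        "actions": [{
            "name": "MavenBuild",
            "actionTypeId": {"category": "Build", "owner": "AWS",
                             "provider": "CodeBuild", "version": "1"},
            "inputArtifacts": [{"name": "SourceOutput"}],
            "outputArtifacts": [{"name": "BuildOutput"}],
        }],
    },
    {
        "name": "Approval",
        "actions": [{
            "name": "ManualApproval",
            "actionTypeId": {"category": "Approval", "owner": "AWS",
                             "provider": "Manual", "version": "1"},
        }],
    },
    {
        "name": "Deploy",
        "actions": [{
            "name": "TomcatDeploy",
            "actionTypeId": {"category": "Deploy", "owner": "AWS",
                             "provider": "CodeDeploy", "version": "1"},
            "inputArtifacts": [{"name": "BuildOutput"}],
        }],
    },
]

# Each stage's output artifact becomes the next stage's input artifact.
print([s["name"] for s in stages])
```

Note how the artifact names wire the stages together: the Build stage consumes SourceOutput and produces BuildOutput, which the Deploy stage consumes.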

This release pipeline is very basic, but it highlights a lot of the building blocks that Amazon’s Code* services provide. A larger application might call for a more sophisticated pipeline that breaks down the Build stage into several different building and testing phases, or deploys to a fleet of EC2 instances in an Auto Scaling group.

Building Code

This pipeline uses AWS’s CodeBuild service to produce build artifacts. One of the most attractive features of CodeBuild is that it doesn’t require a dedicated build server – builds are run in containers that exist only for the duration of the build. This allows the CodeBuild service to scale to your organization’s needs while only billing for compute time by the minute. Build containers can be spawned from one of several AWS-provided Docker images (for Node, Java, Python, and others), or from a user-provided image. The build procedure is defined in a file called buildspec.yml that lives in the application repo. Here is the buildspec for this petclinic pipeline:

version: 0.1
phases:
  build:
    commands:
      - echo Build started on `date`
      - mvn test
      - echo Build completed on `date`
      - mvn package
artifacts:
  files:
    - target/petclinic.war
    - appspec.yml
    - codedeploy/*

Build phases are composed of shell commands, and the resulting artifact files are zipped up and uploaded to S3. In this example, the CodeDeploy scripts and configuration are also packaged so they’re available later in the pipeline.
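The zip-and-upload step can be pictured with a short stdlib sketch. The file list mirrors the buildspec’s artifacts section (with the codedeploy/* glob expanded to a hypothetical script name), and the actual S3 upload is elided:

```python
import io
import zipfile

# Rough sketch of CodeBuild's artifact step: the files named in the
# buildspec's "artifacts" section are zipped and handed to S3.
# "restart_server.sh" is a placeholder expansion of codedeploy/*.
artifact_files = [
    "target/petclinic.war",
    "appspec.yml",
    "codedeploy/restart_server.sh",
]

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for path in artifact_files:
        # In a real build these come from the container's workspace;
        # placeholder bytes keep the sketch runnable.
        zf.writestr(path, b"placeholder contents")

# A real pipeline would now upload buf to the pipeline's S3 artifact
# bucket, e.g. s3.put_object(Bucket=..., Key=..., Body=buf.getvalue()).
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as archive:
    print(archive.namelist())
```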

Deploying Code

Artifacts in this pipeline are deployed to a Tomcat server via AWS’s CodeDeploy service. CodeDeploy performs deployments to both EC2 and on-premises instances through a codedeploy-agent service that runs on the target platform. Instances can be grouped into Deployment Groups, and CodeDeploy even supports deployments to Auto Scaling groups (though this brings up some tricky edge cases when a deployment occurs during a scale-up). CodeDeploy also offers a Blue/Green deployment strategy for seamless releases that don’t interrupt service.
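Deployments like the one in this pipeline can also be triggered directly through the CodeDeploy API. As a hedged sketch (the application, group, and bucket names below are placeholders, not values from this pipeline), the request shape for boto3’s create_deployment call looks like this:

```python
# Sketch of a CodeDeploy create_deployment request. All names here
# (application, deployment group, bucket, key) are hypothetical.
deployment_request = {
    "applicationName": "petclinic-app",
    "deploymentGroupName": "petclinic-tomcat-group",
    "revision": {
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-pipeline-artifacts",
            "key": "BuildOutput/petclinic.zip",
            "bundleType": "zip",
        },
    },
}

# With AWS credentials configured, this would kick off the deployment:
#   import boto3
#   boto3.client("codedeploy").create_deployment(**deployment_request)
print(sorted(deployment_request))
```

In the pipeline, CodePipeline makes an equivalent call for you, pointing the revision at the Build stage’s output artifact in S3.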

Like CodeBuild, CodeDeploy procedures and configuration are defined in the application repo. Here’s what the config file appspec.yml looks like for this project:

version: 0.0
os: linux
files:
  - source: /target/petclinic.war
    destination: /usr/share/tomcat7/webapps/
hooks:
  AfterInstall:
    - location: codedeploy/
      timeout: 300
      runas: root

This file allows you to configure where on the filesystem the artifact should be deployed, as well as hooks for scripts (that are usually located in the repo) to be triggered at various lifecycle events. In this simple example, a script is triggered to restart Tomcat after the artifact has been installed on the server. However, deployments are broken down into several other lifecycle events, giving full control over how an application is launched for both In-Place and Blue/Green deployments.
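For reference, the lifecycle events an In-Place deployment moves through can be listed out; this sketch shows the order codedeploy-agent runs them on each instance (load-balancer traffic hooks omitted):

```python
# In-Place deployment lifecycle events, in the order codedeploy-agent
# runs them on each instance. DownloadBundle and Install are reserved
# for the agent itself; the rest accept hook scripts in appspec.yml.
LIFECYCLE_EVENTS = [
    "ApplicationStop",    # stop the old version
    "DownloadBundle",     # agent fetches the artifact bundle
    "BeforeInstall",      # e.g. back up current files
    "Install",            # agent copies files per the "files" section
    "AfterInstall",       # e.g. fix permissions, restart Tomcat
    "ApplicationStart",   # start or verify the service
    "ValidateService",    # smoke-test the deployment
]

hookable = [e for e in LIFECYCLE_EVENTS
            if e not in ("DownloadBundle", "Install")]
print(hookable)
```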

Thoughts on What I Like

The AWS Ecosystem

Having a delivery stack completely within the AWS ecosystem makes many parts of the release process incredibly smooth. Some examples of this:

  • Deploying to EC2 Auto Scaling instances without much pain
  • Using CloudFormation to orchestrate entire pipelines as code
  • Using AWS’s APIs to control and interact with Pipeline services
  • Securing pipeline services by locking down their IAM roles
  • Monitoring, logging, and alerting capabilities through CloudTrail and CloudWatch


I think that CodePipeline strikes a great balance between keeping most processes within AWS’s control while still allowing for third-party integrations when necessary. It’s a nice middle ground between a tool like Jenkins, which gives the user absolute control of the CI process, and a service like Travis CI, which handles most of the CI logic behind the scenes. This makes CodePipeline flexible enough to adapt to your needs while still being a very reliable tool.

Pipeline Visualization

Being able to visualize pipelines is vital to understanding and improving a delivery process. CodePipeline does a great job of laying out pipelines in an interactive UI that updates in real time as revisions pass through it.

Coupling Delivery Configuration with the Application

Having delivery logic live inside the application repo aligns very nicely with the DevOps approach to delivery. It brings developers closer to the delivery process and helps to close the gap between developers and release engineers. Also, versioning release logic alongside the application just makes sense.

Thoughts on What I Don’t Like

The Learning Curve

As someone new to the AWS ecosystem, I had a tough time getting started with CodePipeline. The interactive CodeDeploy tutorial that spins up three EC2 instances is very flashy and impressive, but at the end of it I didn’t feel like I had learned anything concrete. All in all, I spent about two weeks learning about the Code* services and creating the Tomcat pipeline for this blog. A huge portion of that time was spent sifting through AWS’s documentation, trying to build a mental picture of how CodePipeline and the Code* services fit together. While the documentation is very thorough and full of information, it takes a lot of time to digest. I also had to spend a couple of days learning how IAM roles work.

Lack of Pipeline-Level Organization

In a large organization, CI systems can grow very large and complex. Tools like Jenkins allow you to manage this complexity by organizing pipeline jobs in a hierarchy. If a large organization were to adopt CodePipeline, they would end up with a huge flat list of pipelines which would likely be unmanageable.

CodePipeline Summary

In this blog, we looked at Amazon’s CodePipeline, walked through a use case for a deployment pipeline, and saw how straightforward deployments become when they’re modeled as pipelines. We also tried to stay balanced, summarizing both what is great about CodePipeline and what is hard: the learning curve, and the discipline needed to keep large CI systems from growing unmanageable and too complex.

If you have any comments or questions, reach out to us @liatrio.


Liatrio is a collaborative DevOps consulting firm that helps enterprises drive innovation, expedite world-class software delivery and evolve their culture, tools and processes.

We work as “boots on the ground change agents,” helping our clients improve their development practices, get better at delivering value from conception to deployment and react more quickly to market changes. Our tech veterans have the experience needed to execute on DevOps philosophies and Continuous Integration (CI) and Continuous Delivery (CD) core practices, including automation, sharing, and feedback. Through proven tools and processes and a collaborative culture of shared responsibilities, we unite our clients’ technology organizations to help them become high-performing teams. Reach out — let’s start the conversation.

Liatrio is also hiring! If you want to be a part of a team that solves challenges around software delivery automation, deployment pipelines and large-scale transformations, contact us!
