If one of our goals is to deliver value by shipping usable software to a customer or end user, making this process as easy and repeatable as possible should be something we strive to accomplish.
Is this a “solved problem”? Do you already have an enterprise delivery pipeline? You may be doing everything right: CI/CD in place, fully automated testing, and production releases happening multiple times a day as changes are committed to your code base. If you are operating at that level already, I’d argue that it didn’t happen by accident. Multiple teams have likely worked together over time to create a well-defined and robust enterprise delivery pipeline.
Today, I’d like to discuss why I believe the enterprise delivery pipeline is a product in its own right, and something we should support with the same rigor as any customer-facing software application.
The Case for a First-Class Enterprise Delivery Pipeline
We see some common themes within large enterprise IT organizations:
- Software development teams are often separated from QA teams
- Dev and QA teams may also be separated from infrastructure operations teams, and in many cases, those groups are also separated from production application support teams
- At larger companies, there are often a greater number of legacy systems in production
- Each organization in an enterprise attempts to improve its own efficiency, hoping to improve the larger system
- There is a perceived need for separation of concerns in the delivery process
- A history of poorly executed release activities has contributed to the increasing number of organizational silos put in place
- In some large enterprises, we see separate teams in the enterprise delivery pipeline acting independently, often for months at a time
- The act of deploying software to production has become difficult and time-consuming for the multiple groups involved
- The production release is usually “coordinated” through varying amounts of bureaucracy, with multiple change gates or change approval board meetings
- Few representatives of the siloed teams are involved in the release process due to handoffs to dedicated teams
- Much of the context is lost during handoffs, to the point that the team actually deploying the software has limited understanding of how it was assembled or how it should be configured in order to function properly
Trust between IT teams may have eroded over time to the point where it’s a struggle to get anything out the door. Bringing these teams back together is an important step in your DevOps journey. Working toward building an enterprise delivery pipeline can be a way to bridge these teams and processes so that each group understands what is necessary to release their company’s products as frequently as required with as little risk as possible.
It takes a concerted effort to ensure your software is delivered with minimal risk while continuously providing business value. By working together to build a robust enterprise delivery pipeline, we help ease the stress of providing value to our customers while reinforcing the importance of collaboration between teams that may have grown apart over time.
What’s Your Enterprise Delivery Pipeline Made Of?
An enterprise delivery pipeline can be many things. It can be one engineer copying a file from their laptop to a production server (yes, still a pipeline). A pipeline could have home-grown, custom tools put in place over several years. Each of your many development teams could have their own build and deploy environments or you could be using a shared, centralized system with thousands of jobs keeping CI moving for hundreds of developers. You may also have approval processes, ticketing systems, and manual gates with business users at different checkpoints that inform how quickly your products will be made available to your customers.
The attributes listed above highlight two primary themes within the enterprise delivery pipeline: the human interactions and the tools. How do we improve our posture in both of these areas?
Improving the Human Side
To improve the human side of the enterprise delivery pipeline, we want to reduce direct, manual interaction within the delivery process. The less direct involvement we have with the flow, the less chance we have to slow it down, either intentionally or unintentionally. I encourage each team invested in the delivery of the products to work together to understand its part in improving this flow.
If you hold the keys to a change control approval, consider what it will take for you to feel comfortable automating this step. See if you can write down all of the “check boxes” in the approval workflow and start evaluating what it would take to get those questions answered by the enterprise delivery pipeline itself. If someone is copying files manually somewhere, just stop that process right now and find a better way to ensure the product is delivered the same way every time.
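To make that concrete, here’s a minimal sketch of what a codified change gate might look like, using a Jenkins declarative pipeline (the format I’ll use for the sketches throughout this post). The stage names and scripts are hypothetical; the point is that each “check box” becomes a verifiable pipeline step, with a human approval kept only as long as it’s needed.

```groovy
// Hypothetical sketch of a codified change gate; script names are examples.
pipeline {
    agent any
    stages {
        stage('Change Gate') {
            steps {
                // Each former approval "check box" becomes an automated check.
                sh './scripts/verify-tests-passed.sh'   // all automated tests green?
                sh './scripts/verify-scan-results.sh'   // security scans clean?
            }
        }
        stage('Manual Approval') {
            steps {
                // Keep a human approval only until trust in the automation is earned.
                input message: 'Release to production?', submitter: 'release-approvers'
            }
        }
    }
}
```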
Everyone involved should see the benefits of delivering value as quickly as possible. The shorter the delivery cycles, the more opportunities we have to get feedback and improve our products. Even in large enterprises where many of the products you support may not be direct revenue-generating applications, the benefits gained by improving the ease and speed of delivering internal applications will quickly become evident.
Choosing Enterprise Delivery Pipeline Tools
The enterprise delivery pipeline tools may differ a bit from team to team or with the technology stack in your organization, but there are some important features that should be present in order to drive a rapid delivery model. Automation is the theme of the modern pipeline, and we should choose tools that support it. By choosing tools that support automation, we reduce the undesired manual involvement mentioned above. While certainly not an exhaustive list, I’ve described some categories and products below which, when used together, will help build or reinforce your pipeline.
Source Control Management
Every application or service we deploy through the enterprise delivery pipeline comes from a source control management tool of some kind. The configuration for the pipeline itself, along with all of the scripts and environment information, should be derived from a source control repository. This enables versioning, audit trails, and a verifiable source of record for all changes introduced into the pipeline. Tools such as Git, Mercurial, and Subversion can be hosted in-house or with hosted service providers such as GitHub, GitLab, or Bitbucket, to name a few.
Continuous Integration / Build and Deploy Tools
Arguably the most important tool in the enterprise delivery pipeline, your CI tool acts as the orchestrator of the pipeline. Each of these tools will build your software, but some will also deploy it, run tests, and execute custom scripts to accomplish almost any task you may require of your pipeline. Tools like Jenkins, TravisCI, Bamboo, TeamCity, or Microsoft TFS are each options that could fill the need for your team or organization. Jenkins and TravisCI both support a model where pipeline configuration is stored with your application’s source code. This ensures that the product’s deployment pipeline is always visible and source controlled.
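As a sketch of that model, here’s what a minimal Jenkinsfile might look like, checked in at the root of the application’s repository. This assumes a Maven project and a hypothetical deploy script; adapt the steps to your own stack.

```groovy
// Jenkinsfile — a minimal declarative pipeline, versioned with the app code.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'    // compile and package the application
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B test'             // run the automated test suite
            }
        }
        stage('Deploy to Dev') {
            steps {
                sh './scripts/deploy.sh dev' // hypothetical deployment script
            }
        }
    }
}
```

Because this file lives in source control, every change to the pipeline itself is reviewed and versioned just like application code.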
Artifact Repositories
Artifact repositories handle the management and tracking of artifacts from development to production. They are instrumental in organizing the dependencies and packages necessary to build and release your applications. Modern repository managers have integrations built in to support many of the existing tools in your technology stack. Nexus and Artifactory are two leading artifact management solutions that provide both professional and open source products; either should be on your short list of tools in this category.
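As a sketch, a publish stage in the pipeline might push each versioned artifact to your repository manager so that every downstream stage deploys the exact same binary. The repository URL below is a placeholder, and the maven-deploy-plugin flag syntax varies by plugin version.

```groovy
stage('Publish Artifact') {
    steps {
        // Push the build output to the repository manager (URL is hypothetical).
        sh 'mvn -B deploy -DaltDeploymentRepository=releases::default::https://nexus.example.com/repository/maven-releases/'
    }
}
```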
Configuration Management
Configuration management tools are sometimes sold as “DevOps in a box.” While automating the build and configuration of your infrastructure is very important, it is only a piece of your enterprise delivery pipeline. However, if you have the discipline required to automate the provisioning and configuration of your infrastructure, you will likely carry that over into the rest of the pipeline. Chef, Puppet, Ansible, and SaltStack are widely used tools that each provide a viable solution to this problem.
These tools can be used to do more than just configure servers or install software, but placing too much weight on one tool can quickly create undesired technical debt. By using orchestration tools like Terraform, you can have a cloud-agnostic solution that sits at a higher level than the configuration management tools while integrating more than one of them behind the scenes. Keep in mind, again, that every configuration file or template used to describe your system should be kept in source control and versioned with each change deployed to your environment.
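As a sketch, provisioning and configuration can run as pipeline stages like these, assuming the Terraform and Ansible code lives in the same repository (the directory, inventory, and playbook names are hypothetical):

```groovy
stage('Provision Infrastructure') {
    steps {
        dir('infrastructure') {
            // Terraform code is versioned alongside the application.
            sh 'terraform init -input=false'
            sh 'terraform apply -input=false -auto-approve'
        }
    }
}
stage('Configure Servers') {
    steps {
        // Ansible playbooks and inventories are versioned too.
        sh 'ansible-playbook -i inventories/dev site.yml'
    }
}
```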
Automated Deployment Tools
There are also products that can be added to your enterprise delivery pipeline to enable automated deployments by executing dedicated deployment scripts or triggering installs via agents on remote systems. Tools such as IBM UrbanCode Deploy and Octopus Deploy assist with actions taken after your application is built. Jenkins gets another mention here for being highly cross-functional, and a tool like Rundeck can assist with orchestrating these activities or executing tasks in support of delivery. By separating the deployment activity from the rest of the stages in the pipeline, you can reinforce the strengths of other tools and increase the visibility of individual stages.
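One way to sketch that separation in Jenkins is to have the build pipeline trigger a dedicated deployment job, so each concern stays visible on its own (the job and parameter names are hypothetical):

```groovy
stage('Trigger Deployment') {
    steps {
        // Hand off to a separate deployment job, passing the build to deploy.
        build job: 'my-app-deploy',
              parameters: [string(name: 'VERSION', value: "${env.BUILD_NUMBER}")]
    }
}
```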
Static Source Code Analysis and Security Analysis Scanners
Static code analysis tools can uncover troublesome coding errors, duplications, and underperforming code statements, and give insight into correcting unintended complexity. Language-specific tools called “linters” can be used to find code quality issues early in the SDLC. SonarQube lives within your CI pipeline and can offer immediate feedback on changes or gaps in code quality.
While improving code quality is very important, no application should be deployed without first being scanned for security vulnerabilities. VeraCode, WhiteHat Sentinel, and Snyk, among others, will scan your source code for security vulnerabilities and inform you of problems before you release your software. An important feature of these tools is the ability to plug them into the pipeline so each scan provides feedback on the current security posture of your applications.
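Sketched as pipeline stages, the quality and security scans might look like the following. This assumes the SonarQube Scanner plugin is configured in Jenkins (with a server named as shown) and the Snyk CLI is available on the build agent.

```groovy
stage('Static Analysis') {
    steps {
        withSonarQubeEnv('sonarqube') {   // server name as configured in Jenkins
            sh 'mvn -B sonar:sonar'
        }
    }
}
stage('Quality Gate') {
    steps {
        // Fail the pipeline if the SonarQube quality gate is not met.
        timeout(time: 10, unit: 'MINUTES') {
            waitForQualityGate abortPipeline: true
        }
    }
}
stage('Security Scan') {
    steps {
        sh 'snyk test'   // scan dependencies for known vulnerabilities
    }
}
```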
Test Automation and Performance Profiling
Functional testing, load testing, and performance profiling tools are important reinforcements to your application’s quality. By executing automated testing within the enterprise delivery pipeline, you verify the application’s current functionality and, within the bounds of your test coverage, maintain feature stability over the lifetime of the system. This automation code should live with the code it is testing in order to properly test the version of the application to which it belongs.
Because the application changes over time, the test automation scripts will change with it. Selenium or SoapUI may be used to test web pages, SOAP applications, or REST APIs. Tests can be built to execute against local code, or integration tests can be run against other environments. JMeter, LoadRunner, and WebLoad can evaluate application or system performance. There is a difference between performance testing and load testing, but the important part is understanding that they both belong in a modern application pipeline.
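As a sketch, the pipeline might run both kinds of tests as separate stages after a deployment (the Maven profile and JMeter test plan below are hypothetical):

```groovy
stage('Functional Tests') {
    steps {
        // Run the test suite that lives alongside the application code.
        sh 'mvn -B verify -Pfunctional-tests'     // hypothetical Maven profile
    }
}
stage('Load Test') {
    steps {
        // JMeter in non-GUI mode against the freshly deployed environment.
        sh 'jmeter -n -t perf/load-test.jmx -l results.jtl'
    }
}
```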
Containers
The use of containers and container orchestration within the enterprise delivery pipeline is increasing rapidly. Container technologies like Docker, together with orchestration platforms like Docker Swarm or Kubernetes, are making it easier for development teams to deliver immutable, “full stack” applications that run the same way on a developer’s laptop as they do in production. By reducing the dependencies in a container to the bare minimum required to run an application, we gain a great amount of performance and portability, not to mention ease of deployment and testing in each stage of our delivery lifecycle. You can run just about anything in a container at this point, so make Docker your next integration POC if you haven’t already.
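In the pipeline, building and publishing an immutable image can be a stage like this (the registry and image name are placeholders):

```groovy
stage('Build Image') {
    steps {
        // Tag the immutable image with this pipeline run's build number.
        sh "docker build -t registry.example.com/my-app:${env.BUILD_NUMBER} ."
        sh "docker push registry.example.com/my-app:${env.BUILD_NUMBER}"
    }
}
```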
Other Tools That Inform or Interact with the Pipeline
There are many other tools that make appearances in the enterprise delivery pipeline. Ticketing/work item management tools like JIRA, Rally, TFS, etc. all track the requests for changes to the applications we want to deploy. Collaboration tools like Confluence can provide a historical documentation platform and a source of truth for product design while fostering community interaction around feature development. Real-time communication tools like Slack and HipChat give teams a way to interface with the pipeline via rapid feedback notifications, or to take action on pipeline events using a popular approach to operations called “chat-ops”.
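As a sketch of that feedback loop, a Jenkins declarative pipeline can notify a channel from its post section. This assumes the Slack Notification plugin is installed; the channel name is a placeholder.

```groovy
post {
    success {
        slackSend channel: '#delivery', message: "Build ${env.BUILD_NUMBER} succeeded"
    }
    failure {
        slackSend channel: '#delivery', message: "Build ${env.BUILD_NUMBER} failed"
    }
}
```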
Don’t Get Scared! Just Get Started!
Implementing an enterprise delivery pipeline is a journey of continuous improvement. It takes a certain amount of commitment to do it well, but only healthy curiosity to get started. Any amount of pipeline automation is better than none, so don’t be afraid to pick a spot and dive in. Start with Jenkins and branch out from there. This gets fun pretty quickly once you see the benefits of automating and delivering software rapidly.
And remember, you’re not alone in this journey! Liatrio can help answer any questions regarding the improvement of your pipeline so don’t hesitate to reach out.