The efficiencies of automation are moving into all areas of IT. This core tenet of DevOps has tangible benefits within traditional operations, system administration, network administration, database management, etc. The ability to derive the changes to our systems and processes from a scripted source of record can remove a huge percentage of the human error from our daily work.
Placing the focus on automation helps us better understand how to test changes before they are made. Through this lens, it’s easy to see the advantage of making changes in a local sandbox environment before they reach production. While this has long been a must for traditional software development teams, extending the practice to other functional roles, and to changes in all IT systems, is becoming more widely accepted as a best practice. We believe the fast feedback loops that exist for software development should exist for all changes in your delivery pipeline. Modifications to your infrastructure, updates to databases, and changes to application configuration can be rock solid and made with the same (or greater) rigor as the products flowing through that pipeline.
Using a local development environment for building applications is status quo for software developers. As long as software has been a product, there has been a focus on enhancing the experience of local development. Countless tools are available to help developers write better code. Software developers have been known to build tools to help software developers build other tools. Today, there is no faster way to get feedback than interacting directly with the code in an IDE (Integrated Development Environment) or seeing verbose console output after making changes in your favorite text editor. But does this experience translate to work done outside of the traditional software development lifecycle? Shouldn’t we be taking advantage of these tools to help us with other delivery challenges?
Step 1: Treating Everything as Code
What does “Everything as Code” mean? We believe it means using processes and tools to manage configuration changes in a way that helps teams understand exactly what it takes to configure and support the environments of all IT systems. There’s a good chance you already have a source control tool in your environment, and you should be able to use it to track all of these changes. SCM tools like Git and SVN, along with hosting platforms like GitHub, offer obvious benefits for a traditional software development team: they make it easy to track changes to code or configuration over time. When we expand the use of SCM tools to contain the scripts for installing tools, the definitions of base operating system images, or changes to environment configuration, we bring the behaviors we expect of traditional software developers to other operational roles. There is tremendous value in understanding, tracking, and eventually automating every modification to our systems, and the key to enabling these behaviors is keeping track of the changes to their settings and configuration.
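As a concrete sketch of what tracking configuration in SCM looks like, consider a hypothetical `ops-playbooks` repository holding an application config file (the repository name, file, and setting are all illustrative):

```shell
# Sketch: putting an environment configuration file under version control
# so every change to it becomes a tracked, attributable commit.
mkdir -p ops-playbooks && cd ops-playbooks
git init -q

# First version of a hypothetical config file.
printf 'max_connections=100\n' > app.conf
git add app.conf
git -c user.email="ops@example.com" -c user.name="ops" \
    commit -qm "Track app config"

# A later tuning change is just another reviewable commit.
printf 'max_connections=250\n' > app.conf
git -c user.email="ops@example.com" -c user.name="ops" \
    commit -qam "Increase max_connections for staging load"

git log --oneline
```

With history like this, anyone on the team can answer “what changed, when, and why” without digging through tickets or tribal knowledge.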
How do we do this?
- You don’t have to tackle it all at once. Start small by keeping a run list of manual steps in a text file
- As you automate steps along the way, add the resulting scripts to your automation repository
- Next, add links to those scripts in the run list, eventually phasing out much of the manual work
- Build in a review process for changes to the scripts (By using an SCM tool, you’re able to easily keep track of each change to the scripts or run list)
- Add additional contributors to your repositories so others can add small changes or improvements via Pull Requests
- Expand the script repository to reference artifacts of specific application versions or “gold” OS images
- Create additional SCM repositories to organize different types of automation
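The early steps above can be sketched with a single script: one entry from the run list, pulled out into a file that source control can track (the step, file names, and function here are all illustrative):

```shell
#!/usr/bin/env sh
# Sketch: one manual step from a run list, captured as a tracked script.
# The run list entry this replaces might have read:
#   "3. Create staging.conf with the environment name and log level"

# Hypothetical helper: render an environment config from tracked defaults.
render_config() {
  env_name="$1"
  printf 'environment=%s\nlog_level=info\n' "$env_name"
}

render_config staging > staging.conf
echo "wrote staging.conf"
```

Once a step lives in a script like this, the run list can link to it, reviewers can comment on it in a Pull Request, and a CI tool can eventually run it for you.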
Step 2: Using a Local Development Environment First for All Changes
Before you can make a change to a production system, you should test the change thoroughly. Ensuring that every configuration change is tracked and tested should be something we enforce, not just encourage. To enable this, any change to your infrastructure systems or environment configuration should be developed and tested locally first.
Let’s take a look at a couple of the most popular tools that help drive this philosophy. The configuration for both Vagrant and Docker can be kept in source control and meets the requirements we set above for delivering changes iteratively. You just have to decide which approach fits the needs of your product and environment.
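For example, a minimal Dockerfile kept in the same repository as the rest of your configuration can define a disposable local sandbox. This is a sketch, not a recommended image: the base image, packages, and paths are illustrative, and `app.conf` stands in for whatever tracked configuration file your change touches.

```dockerfile
# Sketch of a sandbox image definition kept in source control.
FROM ubuntu:22.04

# Install the same tooling the target environment uses (illustrative list).
RUN apt-get update && apt-get install -y --no-install-recommends \
      curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Copy the tracked configuration so the sandbox matches source control.
COPY app.conf /etc/myapp/app.conf

CMD ["sh", "-c", "cat /etc/myapp/app.conf"]
```

A `docker build` followed by `docker run` lets you exercise a configuration change in minutes and throw the environment away afterward; a Vagrantfile plays the same role when you need a full virtual machine instead of a container.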
Step 3: Automating Updates in Other Environments
As you’re making the move to building a suite of ops playbooks and have local development environments to help test your changes, you can take on the challenge of automating the delivery of your systems. By adding a few more tweaks to the pipeline, these changes can be fully automated in any environment.
- Add automated script validation by connecting your repository to a Continuous Integration tool like Jenkins, one of many tools that can build and test infrastructure
- If you’re using a configuration management tool such as Chef, you can use RuboCop and Test Kitchen to enforce best practices and automate the testing of your configuration changes
- After your changes are being tested in CI and producing versioned artifacts, you can begin to confidently schedule updates to environments
- Move secret information such as credentials for your different environments to a secrets management tool such as HashiCorp’s Vault
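A first pass at the automated validation in the list above can be as simple as a syntax check that CI runs on every push. The sketch below creates a couple of stand-in scripts (names are illustrative) and parses each one with `sh -n`, which checks syntax without executing anything; real pipelines would layer on linters like ShellCheck or tool-specific tests.

```shell
#!/usr/bin/env sh
# Sketch of a CI validation step: catch shell syntax errors before a
# change is ever versioned into an artifact.

# Stand-in for a real repository checkout.
mkdir -p scripts
printf '#!/bin/sh\necho "configure app"\n' > scripts/configure.sh
printf '#!/bin/sh\necho "provision db"\n'  > scripts/provision_db.sh

failures=0
for f in scripts/*.sh; do
  # `sh -n` parses the script without running it.
  sh -n "$f" || failures=$((failures + 1))
done

echo "scripts with syntax errors: $failures"
```

In a Jenkins job, a nonzero failure count would fail the build, so a broken script never reaches the scheduled environment updates.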
Learning new tools or implementing these changes may not be something you can do overnight, but each of these things can be added incrementally. Find out where you are in your journey and pick up the next improvement to your pipeline. The more of your tools and environment you can prove out locally, the closer you will be to removing the need for manual changes everywhere else.
If you have any comments or questions, reach out to us @liatrio.
Liatrio is a collaborative DevOps consulting firm that helps enterprises drive innovation, expedite world-class software delivery and evolve their culture, tools and processes.
We work as “boots on the ground change agents,” helping our clients improve their development practices, get better at delivering value from conception to deployment and react more quickly to market changes. Our tech veterans have the experience needed to execute on DevOps philosophies and Continuous Integration (CI) and Continuous Delivery (CD) core practices, including automation, sharing, and feedback. Through proven tools and processes and a collaborative culture of shared responsibilities, we unite our clients’ technology organizations to help them become high-performing teams. Reach out — let’s start the conversation.
Liatrio is also hiring! If you want to be a part of a team that solves challenges around software delivery automation, deployment pipelines and large-scale transformations, contact us!