In the enterprise world, every new technological revolution is viewed through two lenses. Some sections of the enterprise see a silver bullet and adopt a "let's go do this" attitude; others dismiss it with indifference as a fad that "will not work". Interestingly, in our experience Docker in particular, and containers in general, have managed to attract both types of followers. Unfortunately, as in most cases, both of these approaches to Docker are incorrect: containerization, though the future of software development and delivery, is still a few years away from maturity and full enterprise adoption.
The "Docker Cliff" is one of the defining images that summarize what "Docker in Enterprises" feels like.
Containers in Dev vs Prod pic.twitter.com/YqDHvigW8r
— Michael Ducy (@mfdii) February 10, 2016
Hype around Docker:
As with any new technological movement, the hype around Docker has been a leading contributor to the confusion around "what Docker really is" and "how to use Docker in enterprises". At the outset, the path seems obvious: standardize working environments across the enterprise, build everything locally, and deploy onto highly standardized environments that achieve parity across the entire enterprise footprint.
It is just not that easy. Containers solve some very specific problems in some very specific settings that are drastically different from large enterprises, and it is very important to consider the current state carefully before getting swept away by the hype.
That said, the future definitely belongs to containers. Teams building new applications and new architectures should consider, invest in, and forge ahead with some combination of Vagrant and Docker, especially for services that do not need large monolithic infrastructure and must scale dynamically.
Docker is hard in Enterprises
Let’s look at a typical enterprise setting:
- Architecture and applications that have evolved over the years if not decades
- Service offerings that have morphed over those years
- Aging, behind-the-firewall infrastructure
- Project- and program-based technology investment rather than investment in paying down technical debt
- Code refactoring that is next to impossible
- Multiple data center application deployments with very little in-house expertise across the stack
- Lack of technical skills and capabilities in-house in the workforce
- An outlook toward big-bang solutions
In a setting like this, containers, which come from a paradigm where everything is immutable, everything is a service, and code comes above all else, find themselves in an environment with very little synergy. The list above shows how much enterprises need to evolve and re-imagine before they embark on this journey.
Fundamental barrier to entry in enterprises
With all of the above being true, there is still one overarching reason why "getting Docker right" will remain a huge challenge: the "system" that drives all things in enterprises. As is true in most cases, the problem is not the people, the process, or the technology; it is the system that combines all of them, and more, that becomes the challenge.
Large enterprises struggle with delivery: they cannot visualize flow, cannot get past large batch planning with target dates, and find it very hard to take small risks in order to get better. The urge is to focus on thinking such as "projects need to get delivered for business" or "large testing cycles mean more quality", and less on looking at data, looking at work in motion, and making small adjustments across the entire pipeline. This system is simply not conducive to embracing Docker.
The simplest and easiest way to get containers wrong is to try to make them work in production by a certain date. The moment you go down this path, the Docker cliff becomes insurmountable.
How to get started with Docker in Enterprises
As with all things DevOps, start very small and start locally. Pick a product and try to re-imagine everything around it that will enable containers to be used right. Invest in the concept of "many small servers" over one large server. Look at your existing virtualization technologies and re-use the automation already available there, so that you are not solving every problem in one shot.
Start locally, build a pipeline that consistently repeats the container strategy through the promotion process, and only then implement in production. There will be a ton of learnings along the way; solve them one after the other, and focus on getting it right rather than just getting into production.
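The "start locally, then repeat through promotion" idea can be sketched as a small script. Everything here is hypothetical (the app name, registry, tag, and environment names); the commands are echoed as a dry run so you can see the shape of the pipeline before wiring it up for real.

```shell
#!/bin/sh
# Minimal sketch of a build-once, promote-everywhere pipeline.
# All names are hypothetical; commands are echoed as a dry run.
set -eu

APP="orders-service"                        # hypothetical internal app
SHA="abc1234"                               # normally: git rev-parse --short HEAD
IMAGE="registry.example.com/${APP}:${SHA}"  # hypothetical private registry

run() { echo "+ $*"; }   # dry run: print each command instead of executing it

# Build and publish one immutable image from the local Dockerfile.
run docker build -t "$IMAGE" .
run docker push "$IMAGE"

# Promote the identical image through each environment, local first.
for ENV in local dev stage prod; do
  run docker pull "$IMAGE"
  run docker run -d --name "${APP}-${ENV}" "$IMAGE"
done
```

The key design point is that the same image, identified by an immutable tag, moves through every environment; only the environment-specific configuration changes.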
One simple guide:
There are many excellent use cases for getting started with Docker in enterprise environments. One way to begin is to identify an internal application that will immediately benefit from Docker containers. Internal applications are great candidates because they let your deploy team become familiar with building a pipeline that can deploy from development to production without impacting existing external customers. Depending on the internal application, systems and processes can be put in place to use Docker at scale, with the correct monitoring tools in place to ensure stability and uptime.
Building a pipeline for a containerized application is similar to building any other pipeline, with one change: the artifact deployed is a Docker container image. Central repositories such as Docker Hub hold these artifacts, which can be marked public or private. The deploy process retrieves the latest or a specified version and deploys it into the target environment. For local environments, Docker containers add a unique advantage: they simplify the process for engineers while requiring less installation. Containers can easily be started and disposed of locally, with fewer dependency issues and an assurance of consistency across all developer machines.
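The local start-and-dispose lifecycle described above looks roughly like this. The image name, tag, and port are hypothetical, and the commands are echoed as a dry run.

```shell
#!/bin/sh
# Sketch of the local container lifecycle; image name and tag are
# hypothetical. Commands are echoed as a dry run.
set -eu

TAG="myorg/webapp:1.4.2"   # pin a version rather than relying on :latest

run() { echo "+ $*"; }     # dry run: print each command instead of executing it

# Pull the pinned artifact so every developer machine runs the same image.
run docker pull "$TAG"
# Start it locally; --rm disposes of the container when it stops.
run docker run -d --rm --name webapp-local -p 8080:8080 "$TAG"
# Throw it away just as easily when finished experimenting.
run docker stop webapp-local
```

Pinning a version tag, rather than pulling `:latest`, is what gives the consistency-across-machines guarantee the paragraph above describes.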
Virtually every modern monitoring service and tool used in the enterprise has Docker compatibility, making maintenance of your Docker containers an easy new task for your support team. Mounting volumes into your containers keeps the data separate, so it can be managed independently with existing enterprise processes that ensure reliability and security. Mount all logging directories so your containers can focus on the application layer and scale to the desired resource usage.
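The volume-mounting advice above can be sketched with a single `docker run` invocation. The image name, host path, and volume name are hypothetical, and the command is echoed as a dry run.

```shell
#!/bin/sh
# Sketch of host-mounted logging plus a named data volume; names and
# paths are hypothetical. The command is echoed as a dry run.
set -eu

run() { echo "+ $*"; }   # dry run: print each command instead of executing it

# Bind-mount the host's logging directory so existing enterprise log
# tooling keeps working, and use a named volume so application data
# outlives any individual container.
run docker run -d --name webapp \
  -v /var/log/webapp:/app/logs \
  -v webapp-data:/app/data \
  myorg/webapp:1.4.2
```

With logs and data outside the container, the container itself stays disposable: it can be replaced or scaled without touching the data the enterprise already manages.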
Docker in enterprises and in production is hard! Very hard! Docker will not solve all of the problems in a large enterprise, but if done right, it will form a very important piece of your organization's capabilities, especially around services and applications that are small, need rapid changes, and can move to or stay in the cloud.
The journey to get Docker to the cloud can also be a way to start a transformative journey: automating your software delivery and re-imagining all of the processes around it. This journey has to be all-in, with everyone from business to production operations involved and investment from people at every level, top to bottom. If done right, it can set a foundation from which the enterprise gains great benefits.
Liatrio is a DevOps Consulting firm focusing on helping enterprises get better at software delivery using DevOps & Rapid Release philosophies. We work as “boots on the ground change agents” helping our clients re-imagine their daily work and get better at delivery one day at a time. Liatrio is also hiring! If you enjoy being a part of a team that is solving challenges around software delivery automation, deployment pipelines and large scale transformations, reach out to us via our contact page on our website.