In the first post of my review and notes on the book “Continuous Delivery and DevOps: A Quickstart Guide,” I covered how an organization transforms itself from a “Legacy Code” organization into a DevOps organization. This post is about the tools used, and it ended up being more notes than review of the content.
CD and DevOps are about removing waste and inefficiency. “It is based on the premise that quality software can be developed, built, tested, and shipped very quickly, many times in quick succession — ideally in hours or days at the most.”
Good software engineering fundamentals are needed. Some fundamentals outlined in the book are–
– Always use source control
– Commit small code changes frequently
– Do not make code overly complex and keep it documented
– If you have automated tests, run them frequently
– Run continuous integration (CI) frequently, if you have it
– Use regular code review
– Do not be afraid of having tests that fail, or of others finding fault in your code.
Some strategies and goals included– “Never break your consumer,” open and honest peer working practices, fail fast and often, automated build and testing, and CI. Almost all of these topics can be applied directly to Ops practices.
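The “fail fast and often” idea can be sketched in a few lines: run the cheap checks on every commit and stop at the first failure. This is my own illustration, not code from the book; the check commands here are stand-ins for a project's real test and lint invocations.

```python
import subprocess
import sys

def run_checks(commands):
    """Run each check command in order; return False as soon as one fails."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"FAILED: {' '.join(cmd)}")
            return False  # fail fast: later checks are skipped
    return True

# Demonstration with stand-in commands (one passing check, one failing check).
print(run_checks([[sys.executable, "-c", "pass"]]))                 # True
print(run_checks([[sys.executable, "-c", "raise SystemExit(1)"]]))  # False
```

In a real setup the list would hold commands like the unit-test runner and the linter, wired into a pre-commit hook or CI job.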
Component-based architecture breaks code and services down into discrete, loosely coupled modules or components. Using a component-based architecture reduces the pain and overhead of releases.
Use layers of abstraction to reduce dependencies that would otherwise require two components to be deployed together. This also simplifies processes and reduces downtime.
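A minimal sketch of what such an abstraction layer buys you (the component names are my own, not the book's): one component depends only on an interface, so the component behind that interface can be redeployed or replaced independently.

```python
from abc import ABC, abstractmethod

class CustomerLookup(ABC):
    """The abstraction layer: billing depends on this, not on a concrete service."""
    @abstractmethod
    def email_for(self, customer_id: str) -> str: ...

class CustomerServiceV1(CustomerLookup):
    def email_for(self, customer_id: str) -> str:
        return f"{customer_id}@example.com"  # stand-in for a real lookup

class Billing:
    def __init__(self, customers: CustomerLookup):
        self.customers = customers  # coupled to the interface, not the implementation

    def invoice_recipient(self, customer_id: str) -> str:
        return self.customers.email_for(customer_id)

billing = Billing(CustomerServiceV1())
print(billing.invoice_recipient("c42"))  # c42@example.com
```

Swapping in a `CustomerServiceV2` requires no change (and no redeploy) of `Billing`, which is the point of the layer.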
Getting the right number of environments is also important. The fewer the better: if you can get away with just two, do it. For larger organizations the following environments are suggested–
– Pre-production (UAT/Spot Check/[load testing, something not stated in the book])
Later the book also talks about developing against a “Like Live” environment. This might be a virtual copy on your desktop, but it is essentially what is currently running in production.
The same binary should be used across all environments. This means you may need to load environment-specific configuration at deploy time or package time. A repository should be used to version and store the binaries.
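A tiny sketch of the “one binary, per-environment configuration” idea (the environment names and settings are illustrative, not from the book): the artifact never changes, only the configuration selected at deploy or run time does.

```python
import os

# Per-environment settings shipped alongside (not inside) the binary.
CONFIGS = {
    "uat":        {"db_host": "db.uat.internal",  "debug": True},
    "production": {"db_host": "db.prod.internal", "debug": False},
}

def load_config(env=None):
    """Pick the configuration for the target environment at startup."""
    env = env or os.environ.get("DEPLOY_ENV", "uat")
    if env not in CONFIGS:
        raise ValueError(f"unknown environment: {env}")
    return CONFIGS[env]

print(load_config("production")["db_host"])  # db.prod.internal
```

The same pattern works with config files or a config service instead of an in-code dict; the key property is that promoting a build from UAT to production changes nothing but the selected configuration.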
There is a long section about CD tooling: what it should do and how it should do it. The book mentions there are not many commercial tools out there, though that may no longer be true, and it suggests building your own. The CD tool seems to be what orchestrates everything: accessing source code and building binaries, doing deployments, recording every action taken, being able to display how much it is used, and so on.
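The “record every action taken” requirement can be sketched as a pipeline that keeps an audit log of each step it runs. This is my own toy illustration of home-grown CD tooling; the step names are invented.

```python
import datetime

class Pipeline:
    def __init__(self):
        self.audit_log = []  # every action taken, with a timestamp and outcome

    def run_step(self, name, action):
        started = datetime.datetime.now(datetime.timezone.utc)
        ok = action()
        self.audit_log.append({"step": name, "started": started.isoformat(), "ok": ok})
        return ok

    def run(self, steps):
        for name, action in steps:
            if not self.run_step(name, action):
                return False  # stop the pipeline on the first failure
        return True

pipeline = Pipeline()
pipeline.run([
    ("checkout", lambda: True),  # stand-ins for real checkout/build/deploy actions
    ("build",    lambda: True),
    ("deploy",   lambda: True),
])
print(len(pipeline.audit_log))  # 3
```

A real tool would persist this log, which is what makes deployments auditable and usage measurable.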
Automated provisioning is being able to programmatically create the infrastructure or platform for deployments (IaaS and PaaS) with all the needed configurations. This is helpful for no-downtime deployments, and it is something that should be included in the CD tooling.
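One common shape for automated provisioning is declarative and idempotent: describe the desired state and converge toward it. This hedged sketch converges an in-memory “infrastructure” dict; real tooling would call a cloud API, and all names here are illustrative.

```python
def provision(current, desired):
    """Return (new_state, actions) that converge current servers to desired."""
    actions = []
    state = dict(current)
    for name, config in desired.items():
        if state.get(name) != config:   # missing or drifted: (re)apply it
            actions.append(f"apply {name}")
            state[name] = config
    for name in set(state) - set(desired):  # present but no longer wanted
        actions.append(f"destroy {name}")
        del state[name]
    return state, actions

state, actions = provision({"web1": {"size": "small"}},
                           {"web1": {"size": "large"}, "web2": {"size": "small"}})
print(actions)  # ['apply web1', 'apply web2']
```

Running `provision` a second time with the same desired state produces no actions, which is the idempotence that makes it safe to run on every deployment.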
No-downtime deployments are critical for real-time systems that customers count on. Downtime can have a serious impact on the business and the company’s reputation.
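A common technique for no-downtime deployments (my own example, not one the book prescribes) is blue/green: deploy the new release to the idle environment, verify it, then flip the live pointer so live traffic never sees a bad release.

```python
class BlueGreen:
    def __init__(self):
        self.environments = {"blue": "v1", "green": None}
        self.live = "blue"

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version, healthy=lambda: True):
        target = self.idle
        self.environments[target] = version
        if healthy():           # smoke-test the idle environment first
            self.live = target  # instant cutover, no downtime
            return True
        return False            # live traffic never saw the bad release

bg = BlueGreen()
bg.deploy("v2")
print(bg.live, bg.environments[bg.live])  # green v2
```

The automated provisioning described above is what makes it cheap to keep (or recreate) that second environment.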
Everything should be monitored and everyone should have access to the monitoring information. It will get used more if it is all in one place and easy to get to.
This section also talks about simple manual processes. Not everything has to be solved by software; a software solution can be overkill and counterproductive.
Since the tooling was the area I was most interested in, I was hoping for more examples of the tools that are actually used and how to implement them. Still, the 10,000-foot view is helpful for the small amount of time it takes to consume.
My next post will cover the last few chapters of the book, which cover culture and behaviors, hurdles to look out for, and measuring success.