Adam discusses the stages in deployment pipelines and why they are necessary but not sufficient for continuous delivery. Listen through to the end.
- NEW! The Small Batches Slack App for Teams
- Toyota Kata Pocket Guide
- The Flow Collective
- DevOps Email Course
- Project to Product Email Course
- War & Peace & IT Pocket Guide
- Adam Hawkins on Twitter
- Adam Hawkins on LinkedIn
- Adam Hawkins' website
- Small Batches #17: Continuous Delivery with Dave Farley
- Small Batches #63: Modern Software Engineering
- Minimum Viable Continuous Delivery
What is Software Delivery in Small Batches?
Adam Hawkins presents the theory and practices behind building a high velocity software organization. Topics include DevOps, lean, software architecture, continuous delivery, and interviews with industry leaders.
Hello and welcome to Small Batches. I’m your host Adam Hawkins. In each episode, I share a small batch of software delivery education aiming to help you find flow, feedback, and learning in your own daily work. Topics include DevOps, lean, continuous delivery, and conversations with industry leaders. Now, let’s begin today’s episode.
I’ve said before on this podcast that test driven development is skill zero for professional software engineers and I mean that sincerely. Skill zero teaches how to write code.
Skill one for professional software engineers is constructing deployment pipelines. Skill one teaches how to ship code.
A deployment pipeline defines releasability and is the only path to production. Commits go in one end and releasable outcomes come out the other end. The deployment pipeline is a standard and repeatable route from commit to production.
Everything, and I mean everything, that constitutes releasability is within the scope of the deployment pipeline. "Releasability" is a moving target. In the early days of a project it may be enough to simply run the unit tests. As the project grows, perhaps integration tests are required, then acceptance tests in a preproduction environment, or even compliance checks, automated static analysis, and performance testing.
The point is that the pipeline tests releasability to the degree that matters. If the pipeline says "Good to go", then you should be comfortable releasing the code. No extra work, manual integration tests, sign-offs, or preproduction whatever. If you’re not comfortable releasing the code, then something is missing from the pipeline. That’s a signal to add more fitness functions to the pipeline.
Deployment pipelines contain four stages.
Stage one: the commit stage. Developers commit new code and run fast, lightweight technical tests to get fast feedback and a high level of confidence that the code works. Aim for five minutes or faster.
Stage two: the artifact stage. If the commit stage passes, then produce a deployable artifact, such as a Docker image or binary, and push it to an artifact repository. This stage produces the release candidate.
Stage three: acceptance tests. Deploy the release candidate to a production-like environment and evaluate it from a user’s perspective using automated tests.
Stage four: ability to release to production. If the release candidate passes stage three, then engineers may opt to release the change. Build the button that releases the change.
Now, I’ll walk through the deployment pipeline for a typical web service. Remember that these stages are not hard requirements, just guidelines, but the aim is always the same: turn commits into releasable outcomes.
We can use the links between stages and a bit of lean thinking to construct the pipeline.
Stage zero is the precommit stage. This happens on the developer’s machine. This stage aims to reject commits that could not pass the commit stage. The minimum viable precommit stage is static verification of configuration files needed in the subsequent stages.
For example, most CI providers include a CLI tool that can validate a build configuration file. Run that command in the precommit hooks to reject commits that could never progress through the pipeline. Why push a commit with broken configuration files? That’s waste. Avoid that. Provide fast feedback to developers when problems are detected. Push the commits to SCM at the end of the iteration. That kicks off the commit stage.
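As a sketch, a precommit hook along these lines might look like the following, assuming CircleCI (the hook path and validator command vary by CI provider):

```shell
#!/bin/sh
# .git/hooks/pre-commit (hypothetical) — reject commits whose build
# configuration could never pass the commit stage.

validate_ci_config() {
  # No-op when the repository has no CircleCI config.
  [ -f .circleci/config.yml ] || return 0
  # `circleci config validate` exits nonzero on an invalid config,
  # which aborts the commit.
  circleci config validate .circleci/config.yml
}

validate_ci_config
```

Other providers offer the same kind of check, for example `actionlint` for GitHub Actions workflow files.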
The commit stage happens in your build system, such as Buildkite, CircleCI, or GitHub Actions. Every code push goes through this stage. Include as many tests as necessary for releasability. Include nonfunctional requirements like code formatting and compliance. Remember this is the subjective bar for releasability. Decide the degree to which it matters. If all the checks pass, then build the artifacts.
The artifact stage also happens in the build system. Be sure to verify the artifact before pushing to the artifact repository. For example, if you’re building a Docker image then try to start a container using some sort of dry run mode. Again, the point is to check the results of the stage before proceeding to subsequent stages.
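Sketching that check in shell, assuming a Docker image tagged `myapp:candidate` (a hypothetical name) whose entrypoint supports a `--version` flag:

```shell
#!/bin/sh
# Artifact-stage smoke check: prove the image can start at all before
# pushing it to the artifact repository.

verify_image() {
  # Running the entrypoint with --version is a cheap dry run: it confirms
  # the image exists, the entrypoint resolves, and the binary executes.
  docker run --rm "$1" --version
}

# On the build agent, gate the push on the smoke check:
#   verify_image myapp:candidate && docker push myapp:candidate
```

The exact dry-run command depends on your artifact; the point is that some cheap startup check runs before the push.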
The acceptance stage happens in a production-like environment. Let’s assume you’re deploying the app to Kubernetes. Deploy the app to a production-like cluster, then run an acceptance test. The minimum viable acceptance test may be as simple as a curl command that checks for a 200 OK. Expand tests as needed for confidence.
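That minimum viable check might be sketched like this, with `APP_URL` standing in for the address of your production-like environment (a placeholder, not a real convention):

```shell
#!/bin/sh
# Minimum viable acceptance test: require a 200 OK from the deployed service.

check_endpoint() {
  # -s silences progress output; -o /dev/null discards the body;
  # -w '%{http_code}' prints only the HTTP status code.
  status=$(curl -s -o /dev/null -w '%{http_code}' "$1")
  [ "$status" -eq 200 ]
}

# Run against the freshly deployed release candidate, e.g.:
#   check_endpoint "$APP_URL/health" || exit 1
```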
The final stage is likely a repurposing of part of the previous stage. The previous stage requires deploying to a production-like environment. So take what’s there and modify it (or better yet, parametrize it) to target the production environment. There are many ways to actually deploy production code that I won’t go into here. Use simple tests like curl commands to check things are working as you go.
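One sketch of that parametrization, assuming Kubernetes with per-environment kustomize overlays (the paths and names here are hypothetical):

```shell
#!/bin/sh
# One deploy function, parametrized by target environment, reused by both
# the acceptance stage and the production release stage.

deploy() {
  env="$1"    # e.g. "staging" or "production"
  kubectl apply -k "deploy/overlays/${env}"
  # Block until the rollout completes so later checks hit the new version.
  kubectl rollout status deployment/myapp -n "${env}"
}

# Acceptance stage:  deploy staging
# Release stage:     deploy production
```

Parametrizing one function beats maintaining two divergent deploy scripts: the route to production stays standard and repeatable.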
Once the code is running in production then observe production telemetry to verify things work as expected. Use the telemetry to inform future commits, thus kicking off new runs through the pipeline.
In this way the pipeline acts as a learning engine for the team. It’s the way to get fast feedback from production and, assuming the code emits the necessary telemetry, unlock the learnings from production.
The learning is not limited to code either. The pipeline provides a way for teams to quickly experiment and learn about their customers and business while maintaining a quality bar at the same time.
This type of thinking leads to continuous delivery: keeping code in an always releasable state, then iterating forward in small batches. Deployment pipelines in themselves are necessary but not sufficient for continuous delivery. You’ll need continuous integration and trunk-based-development too. Each reinforces the other.
Continuous integration verifies each change with automated tests. If a change breaks the build, then halt and fix the problem. Trunk-based-development keeps teams working in small batches because commits must quickly land in trunk or master. These processes feed the deployment pipeline. Combining all three creates the minimum viable continuous delivery setup.
Alright, that’s all for this batch. There is much more to say about deployment pipelines. I’ll defer to my esteemed colleague Dave Farley for that.
I’ve actually interviewed Dave on the podcast to discuss pipelines and continuous delivery. That was over a year ago. More recently I published a Small Batches episode on his latest book "Modern Software Engineering". Great book by the way. Highly recommended.
Find links to both of these episodes and a link to the Minimum Viable Continuous Delivery manifesto at SmallBatches.fm/70.
One of these days, I’ll do a bonus episode walking through my own deployment pipelines.
Well I hope to have you back again for the next episode. Until then, happy shipping.
Here’s something extra for listening through to the end. I wanted to include more in the main episode but I cut it to keep the batch size down. Then I figured, if you’re willing to listen to the whole thing then perhaps you want to hear this too.
People have asked me when I will do an episode about TDD. It will come. Until then, here’s some of my thoughts on why TDD is skill zero for professional software engineers.
I mean it when I say it, and I mean it to be provocative. TDD is damn important because one cannot hope to achieve any higher level of performance without learning its lessons.
Yes, there are plenty of software engineers being paid to write code without TDD. Many times that code doesn’t have tests at all, let alone tests written prior to the code. That’s not me. I shudder just thinking about it. I shudder even more thinking about what happens after that code goes through code review or enters production. Completely unpredictable, with no quality bar. Talk about variation, right?
Anyways, I follow a simple (but not easy) process for all my software projects. Step 1: establish TDD. Step 2: construct a continuous deployment pipeline. Step 3: iterate. If code passes the pipeline but results in a production defect, then add tests or update the checks in the pipeline. Repeat.
There are no dev or staging environments. Begin with a test suite strong enough that you can continuously deploy to production from t-zero. Add intermediate environments as necessary. Avoid dev environments for anything other than GUIs.
This is only possible with TDD.
Ok, that’s really it for now. See ya!