Software Delivery in Small Batches

Adam introduces the percent complete and accurate metric used in system design and operations.

Show Notes

Free Resources
Links
★ Support this podcast on Patreon ★

Creators & Guests

Host
Adam Hawkins
Software Delivery Coach

What is Software Delivery in Small Batches?

Adam Hawkins presents the theory and practices behind software delivery excellence. Topics include DevOps, lean, software architecture, continuous delivery, and interviews with industry leaders.

Hello and welcome to Small Batches. I’m your host Adam Hawkins. In each episode, I share a small batch of software delivery education aiming to help you find flow, feedback, and learning in your own daily work. Topics include DevOps, lean, continuous delivery, and conversations with industry leaders. Now, let’s begin today’s episode.
Allow me to introduce you to one of the most important metrics for system design and operation. This metric is "percent complete and accurate". I’ll refer to it as "%C/A" from now on. I first encountered this metric in the opening chapters of The DevOps Handbook.
Here is how Karen Martin and Mike Osterling, the authors of Value Stream Mapping, explain it:
The %C/A can be obtained by asking the downstream consumers what percentage of the time they receive work that is 'usable as is', meaning that they can do their work without having to correct the information that was provided, add missing information that should have been supplied, or clarify information that should have been clearer.
The metric is simple and profound. It’s the percentage of completed work that was usable directly by the consumer without rework. This metric tells you a hell of a lot about the quality of the process. Naturally, the target is 100%. That may not be achievable, but the challenge is continually moving closer to it.
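Here’s a minimal sketch of the arithmetic behind %C/A. The data and the function name are hypothetical; the point is simply that the metric is the fraction of handoffs the downstream consumer could use as-is, expressed as a percentage.

```python
def percent_complete_accurate(handoffs):
    """Percentage of work items usable as-is by the downstream consumer.

    `handoffs` is a list of booleans: True if the item required no
    correction, no missing information, and no clarification.
    """
    if not handoffs:
        raise ValueError("no work items to measure")
    return 100.0 * sum(handoffs) / len(handoffs)

# Hypothetical sample: 8 of 10 handoffs were usable without rework.
feedback = [True] * 8 + [False] * 2
print(f"%C/A = {percent_complete_accurate(feedback):.0f}%")  # %C/A = 80%
```

In practice the booleans come from asking the downstream consumer, per Martin and Osterling, not from an automated system.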
Let’s apply this metric to a common process in software development: deployment pipelines. This was the topic a few episodes back, so check that one if you need a refresher.
The commit stage of the deployment pipeline aims to produce valid input to the artifact stage. So, changes that pass the commit stage should not fail for known reasons in the artifact stage. If they do, then the work is complete but not accurate. "Inaccurate" in this case means the output of the commit stage caused a predictable failure in the artifact stage. If that sounds too abstract, then allow me to make it concrete.
Consider an artifact stage that builds a Docker image. The Dockerfile is an input to the artifact stage. The commit stage can check the Dockerfile before proceeding to the next stage in the pipeline. Oftentimes a simple static analysis check for syntax errors or other known defects is enough for a sanity check. This way, if a change introduces a defect in the Dockerfile, it fails in the commit stage instead of proceeding to the artifact stage.
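That kind of sanity check can be sketched in a few lines. This is a toy check with illustrative rules; a real pipeline would run a dedicated linter such as hadolint, and the function name here is my own.

```python
def check_dockerfile(text):
    """Return known-defect messages for a Dockerfile's contents.

    A toy commit-stage sanity check so defective Dockerfiles never
    reach the artifact stage. The rules below are illustrative only.
    """
    known = {"FROM", "RUN", "CMD", "COPY", "ADD", "ENV", "ARG", "EXPOSE",
             "ENTRYPOINT", "WORKDIR", "USER", "LABEL", "VOLUME"}
    problems = []
    lines = [l.strip() for l in text.splitlines()
             if l.strip() and not l.strip().startswith("#")]
    if not lines:
        problems.append("empty Dockerfile")
    elif lines[0].split()[0].upper() not in ("FROM", "ARG"):
        problems.append("first instruction must be FROM (or ARG)")
    for line in lines:
        word = line.split()[0].upper()
        if word not in known:
            problems.append(f"unknown instruction: {word}")
    return problems

print(check_dockerfile("COPY . /app"))
# ['first instruction must be FROM (or ARG)']
```

Wired into the commit stage, a non-empty result fails the build, so the artifact stage only ever receives Dockerfiles free of these known defects.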
The idea is recursive, so it applies to the preceding step in the process. The commit stage starts on changes pushed to SCM, with a build on a CI provider like GitHub Actions or CircleCI. These builds require valid configuration files. So what can be done one step upstream? Add a pre-commit hook that rejects commits that introduce defects into the build configuration files. Again, this can be done with a simple static analysis tool or the provider's built-in validator.
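Here’s a sketch of what such a pre-commit check might look like. The two rules are deliberately crude and illustrative; a real hook would invoke the provider's own validator (for example, `circleci config validate`) or a full YAML parser.

```python
def config_defects(text):
    """Return known-defect messages for a YAML-ish CI config (toy rules)."""
    defects = []
    if not text.strip():
        defects.append("config file is empty")
    if "\t" in text:
        defects.append("tab characters are invalid in YAML indentation")
    return defects

# Wired into .git/hooks/pre-commit, a non-empty defect list would
# exit non-zero and reject the commit before CI ever sees it.
print(config_defects("jobs:\n\tbuild: {}"))
# ['tab characters are invalid in YAML indentation']
```

The design point is the same as the Dockerfile check: catch the known defect at the earliest step that can detect it, so the next step's input is usable as-is.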
Both these examples aim at improving the %C/A for the step in the process by ensuring that its outputs are directly usable by the consumer and free of known defects.
The %C/A metric connects to the first capability mentioned in The High-Velocity Edge: system design and operation. High-velocity teams understand the nature of sequential processes and apply jidoka-style checks to stop the process when problems are detected. Stopping the process when problems are detected allows teams to solve the problems. More importantly, it stops the problem from propagating downstream, where the impact is worse.
The same concept applies to the larger value streams we participate in. Consider the measure of lead time across the value stream. This is one of the four DORA metrics.
Visualize the value stream as a series of connected processes. Measure the lead time across all the processes. Now add the %C/A to each process in the value stream. Processes with low %C/A add wait times and rework, thus negatively impacting lead time. The visual map of the value stream annotated with %C/A is a powerful tool for improving quality across the value stream.
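A back-of-the-envelope model makes the connection between %C/A and lead time concrete. The numbers are hypothetical; the model assumes each defective handoff sends the work back through that process, so the expected number of passes is 1/(%C/A), a geometric distribution.

```python
def expected_lead_time(processes):
    """Expected lead time when defective handoffs force rework.

    `processes` is a list of (process_time, ca) pairs, where `ca` is
    %C/A as a fraction. Assuming each defective handoff repeats that
    process, the expected passes per process is 1/ca, so the expected
    time per process is process_time / ca.
    """
    return sum(t / ca for t, ca in processes)

# Hypothetical three-process value stream (times in hours).
stream = [(2.0, 0.9), (4.0, 0.5), (1.0, 1.0)]
ideal = sum(t for t, _ in stream)
print(f"ideal lead time:      {ideal:.1f}h")  # 7.0h
print(f"expected with rework: {expected_lead_time(stream):.1f}h")
```

Notice how the middle process, with only 50% C/A, doubles its expected time. That's why annotating the value stream map with %C/A points straight at the processes worth improving first.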
All right, that’s all for this batch. Find links to past episodes on The DevOps Handbook, my series on The High-Velocity Edge, and a link to a great book on value stream mapping at SmallBatches.fm/72.
You can also find a link to my Slack app that posts daily small batches of software delivery education on topics like %C/A. I’ve recently loaded the app with some of the best passages and pro-tips from the best books on lean thinking. There’s already over a year's worth of posts, and more are continually added. The app is currently free in beta, so get it today and start learning as a team.
I hope to have you back again for the next episode. Until then, happy shipping.