Software Delivery in Small Batches

Patch-level improvements to the original 12 factor app. Four points to improve the configuration factor in your application.

Show Notes

The 12 factor app states that applications should read config from environment variables. It implies separation of code and config. That’s about it, but there’s good bones here. I want something bigger from this factor. Specifically that applications may be deployed to new environments without any code changes. This requires a few additions:
  1. Configure the process through command options and environment variables
  2. Prefer explicit configuration over implicit configuration 
  3. Use a dry run option to verify config sanity
  4. Fail fast on any configuration error


Hello friends, it’s me Adam. Welcome back to Small Batches. I introduced the 12 factor app in the previous episode along with areas where I think it may be improved upon. I’m calling these improvements the “12.1 factor app”. So new features and fixes, but no breaking changes; more like clarifications. I’m going to cover these in the coming episodes. Enough preamble for now. On with the show.

The 12 factor app states that applications should read config from environment variables. It implies separation of code and config. That’s about it, but there’s good bones here. I want something bigger from this factor. Specifically that applications may be deployed to new environments without any code changes. This requires a few additions:

1. Configure the process through command options and environment variables.
2. Prefer explicit configuration over implicit configuration.
3. Use a dry run option to verify config sanity.

These three points force applications to be explicit in configuration, which in turn requires engineers to take more responsibility for bootstrapping the process. This has proven to be a good thing in my experience. Consider the first point regarding command-line options and environment variables. Developers interact with command-line tools every single day. There’s a standard interface for passing flags: command-line options. You’ve likely used curl -X, grep -E, or mysql -u. These tools may even use values from environment variables when command-line options are not provided. This is wonderful because processes may be configured globally with environment variables, then overridden in specific scenarios with command-line options.
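
Here’s a minimal sketch of that pattern in Go. The flag names and the envOr helper are illustrations of the idea, not anything prescribed by 12 factor:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// envOr returns the named environment variable's value, or fallback if unset.
func envOr(name, fallback string) string {
	if v, ok := os.LookupEnv(name); ok {
		return v
	}
	return fallback
}

func main() {
	// Environment variables configure the process globally;
	// command-line options override them for a specific invocation.
	dbURL := flag.String("db-url", envOr("DB_URL", ""), "database connection URL")
	port := flag.String("port", envOr("PORT", "8080"), "port to listen on")
	flag.Parse()

	fmt.Printf("db-url=%q port=%q\n", *dbURL, *port)
}
```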

This simple interface also supports another common use case: looking up configuration options. Running a command followed by --help or -h typically outputs a usage message listing all command options. How many times have you struggled to learn which configuration files or environment variables are required to start a service developed by other engineers in your company? Now compare that to how many times you have struggled to find all the options to the grep command. There is no struggle, because grep --help tells you everything you need to know. With internal services, on the other hand, you’re left hoping that your team members put something in the README or that you can find a page on Confluence.
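
Go’s flag package generates that usage message for you. Running the sketch above with --help prints something roughly like this (assuming PORT is unset):

```
Usage of myapp:
  -db-url string
        database connection URL
  -port string
        port to listen on (default "8080")
```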

Moving on to the second point. I’ll explain this by contrasting software produced by two ecosystems.
Rails applications use a mix of configuration practices. They may use environment variables, but there’s also a mix of YAML files and environment-specific configuration files (such as production.rb or staging.rb). Internal code uses a preset list of environments (namely production, test, and development) to implicitly change configuration. Deploying to a new environment requires creating new configuration files and/or updating some of these YAML files. Starting a Rails application requires running the rails command. As a result, developers are disconnected from the code that bootstraps application internals, then starts a web server.

On the other hand, consider software produced by the Go ecosystem. It’s more common to write a main method that configures everything through command-line options. In this case, there is no need for extra configuration files or implicit configuration based on environment names, since that concept is irrelevant here. Naturally, this requires developers to take more responsibility, but as I said early on, I think it’s worth it in the end. These applications are easier to grok and easier to deploy to a variety of environments. That’s what the 12.1 factor app is going for.
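
To make that concrete, here is a sketch of the kind of main method I mean. The Config struct, flag names, and /health handler are illustrative, not from any particular project; the point is that the same binary runs anywhere and only the flag values change:

```go
package main

import (
	"flag"
	"log"
	"net/http"
)

// Config holds everything the process needs to run. There is no "production"
// or "staging" branch anywhere; each environment simply passes different values.
type Config struct {
	DBURL      string
	ListenAddr string
}

func main() {
	var cfg Config
	flag.StringVar(&cfg.DBURL, "db-url", "", "database connection URL")
	flag.StringVar(&cfg.ListenAddr, "listen", ":8080", "address to listen on")
	flag.Parse()

	// Explicit bootstrap: wire up the server from the config, nothing implicit.
	mux := http.NewServeMux()
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	log.Printf("listening on %s", cfg.ListenAddr)
	log.Fatal(http.ListenAndServe(cfg.ListenAddr, mux))
}
```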

The command-line interface approach enables DX improvements too. One of my pet peeves is when a process starts then fails at runtime due to some missing configuration option. This grinds my gears because developers devote huge effort to validating user input through web forms or API calls but tend to neglect configuration validation entirely! Plus, it’s just frustrating to learn which values are required through runtime errors. The 12.1 factor app can do better than this.

The 12.1 factor app will fail and exit non-zero if any required configuration value is missing. The main method that processes command-line options and environment variables makes this possible. Does the process require a connection to a database but no --db-url or DB_URL was provided? Then boom! Error message and exit non-zero. The goal here is to make it impossible to start the process without sane configuration.
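
A minimal sketch of that check in Go, reusing the --db-url / DB_URL names from above (the error wording is mine):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	// Env var provides the global default; the flag overrides it per invocation.
	dbURL := flag.String("db-url", os.Getenv("DB_URL"), "database connection URL")
	flag.Parse()

	// Fail fast: refuse to start without a database connection string.
	if *dbURL == "" {
		fmt.Fprintln(os.Stderr, "error: database required but no --db-url or DB_URL provided")
		os.Exit(1) // non-zero exit tells the deployment system this release is broken
	}

	// ... bootstrap the rest of the process and start serving here ...
}
```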

Failing with a non-zero exit status integrates nicely with deployment systems. Recall that a 12 factor “release” is the combination of code and config. Therefore it’s possible for a config change to result in a broken release. Now, given that 12.1 factor apps fail fast, it’s possible for the deployment system to recognize the failed release and switch back to the previous release. Contrast this with a “fail later” approach. The release may be running but failing at runtime. This looks OK from a deployment perspective since the release started, but it’s totally broken from a user’s perspective. The 12.1 factor app easily avoids this scenario.

The “fail fast” approach catches simple user errors by ensuring all required values are provided. However, that only solves part of the problem: provided values are not necessarily correct.

Here’s an example:

Say the application requires a connection to Postgres, so the user sets POSTGRESQL_URL. However, the application cannot connect to the server for some reason. It could be networking, mismatched ports, or an authentication error. Whatever the reason, the result is the same: no database connection, thus a nonfunctional application. This would cause downtime if deployed to production. I can’t tell you how many times this has happened to me, either for legitimate reasons or just by my own failure (like mistyping a hostname or specifying the wrong port).

My point is that this type of error may be eliminated by simply trying to use the provided configuration before starting the process. The idea here is to use a kind of “dry run” mode to check these kinds of things. I’ve used dry run modes to check connections to external resources like data stores, or to verify that API keys for external APIs are valid. This aligns nicely with the “trust but verify” motto. Look, it’s simple. At the end of the day, developers make mistakes. It’s our job to ensure those mistakes don’t enter production.
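
Here’s a sketch of how a dry run check might look in Go. The POSTGRESQL_URL name comes from the example above; the Postgres driver import and the rest of the wiring are assumptions for illustration:

```go
package main

import (
	"database/sql"
	"flag"
	"fmt"
	"os"

	_ "github.com/lib/pq" // Postgres driver, registers itself as "postgres"
)

func main() {
	dbURL := flag.String("db-url", os.Getenv("POSTGRESQL_URL"), "Postgres connection URL")
	dryRun := flag.Bool("dry-run", false, "verify configuration, then exit")
	flag.Parse()

	// Trust but verify: actually try the connection before the process starts serving.
	db, err := sql.Open("postgres", *dbURL)
	if err == nil {
		err = db.Ping() // catches bad hosts, wrong ports, and auth errors up front
	}
	if err != nil {
		fmt.Fprintf(os.Stderr, "config check failed: %v\n", err)
		os.Exit(1)
	}

	if *dryRun {
		fmt.Println("configuration OK")
		os.Exit(0) // dry run stops here; a normal run would continue to start the server
	}

	// ... start the server here ...
}
```

A deployment pipeline can run the same binary with --dry-run as a preflight check before sending traffic to a new release.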

Alright. That’s enough for the 12.1 config factor. Here’s a quick recap:

1. Configure your process through command-line options and environment variables.
2. Fail fast on any kind of configuration error.
3. Use a “dry run” mode to verify as much config as possible.
4. Prefer explicit configuration over implicit configuration based on environment names.

Well, what do you think of these practices? Have you done anything like this before? If so, how did it turn out? Hit me up on Twitter at smallbatches.fm or email me at hi@smallbatches.fm. Share this episode around your team too. It’s great reference material for that “new service checklist” or “best practices” page on Confluence you’re always trying to write.

Also, go to smallbatches.fm/6 for show notes. I’ll put a link to my appearance on the Rails Testing Podcast where I talk about this topic in more technical detail. If this episode piqued your interest, then definitely check that one out. We talk about preflight checks and smoke tests.

Alright gang. That’s a wrap. See you in the next one. Good luck out there and happy shipping!