Software Delivery in Small Batches

Adam describes using Hexagonal Architecture, also known as Ports and Adapters, for software delivery excellence.

Chapters
  • (00:00) - Hexagonal Architecture
  • (00:23) - Reflections on Ruby and Rails
  • (01:41) - What is Hexagonal Architecture?
  • (02:45) - The Repository Pattern
  • (04:32) - Dependency Injection
  • (06:43) - Thinking in Pipelines
  • (08:03) - Development Environments
  • (09:09) - Outro

Creators & Guests

Host
Adam Hawkins
Software Delivery Coach

What is Software Delivery in Small Batches?

Adam Hawkins presents the theory and practices behind software delivery excellence. Topics include DevOps, lean, software architecture, continuous delivery, and interviews with industry leaders.

Hello and welcome to Small Batches with me, Adam Hawkins. I'm your guide to software delivery excellence. In each episode, I share a small batch of the theory and practices along the path. Topics include DevOps, lean, continuous delivery, and conversations with industry leaders. Now, let's begin today's episode.

The other day, DHH posted on LinkedIn about his reflections on the Rails framework. This prompted my own reflections on what I learned from the Ruby and Rails community.

I owe the framework and the larger Ruby community a great debt because it was the genesis for deep professional growth through two inflection points. The first was learning TDD in an encouraging environment. The second was challenging the Rails MVC orthodoxy.

Rails applications, and others following the typical MVC backend architecture, accumulated predictable kinds of tech debt and code smells as they grew.

This was a recurring problem stemming from blurred boundaries between models, views, and controllers. The end result was low cohesion and strange coupling. So what was the antidote?

Practice and the community led me to hexagonal architecture. That was about ten years ago. Ten years on, nothing resonates with me more than continuous deployment backed by hexagonally architected systems built with TDD. Done well, the stack is pure flow.

So today's topic is hexagonal architecture and using it for software delivery excellence.

Hexagonal Architecture, also known as Ports and Adapters, was introduced in 2005 by Alistair Cockburn. The concept is simple. Draw a boundary around the core application domain. Next, define an API (the port) to interface with the external world, then write the code (the adapter) behind it.

The architecture is typically visualized with the application domain at the core and various things plugged into it, like a web UI, CLI, database, or notification system.

This is fundamentally about boundaries. Declare the boundary. The boundary is an interface (in the classical OOP sense) or just an API. After the boundary is established, callers can ignore the implementation.

Software architecture is all about boundaries. Proper boundaries create systems that are easy to reason about and change.

OK, so let's see it in action with a few examples.

The easiest way to explain this is in contrast to something else. Consider a typical web MVC system. There will be an ORM. The ORM provides access to database records via high-level concepts like objects and classes. Other application code consumes those objects by calling their methods.

All well and good, to a degree. Here's the challenge: the code is coupled to the database via the ORM through every layer where those objects are used. This means DB calls can happen from anywhere in the system.
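
To make that coupling concrete, here is a small, hypothetical Rails-style snippet; the model and controller names are invented for illustration.

  class Post < ActiveRecord::Base
  end

  class PostsController < ApplicationController
    def index
      # An ActiveRecord query issued straight from the controller. Nothing stops
      # views, background jobs, or helpers from issuing the same query, so the
      # database leaks into every layer of the application.
      @posts = Post.where(published: true).order(created_at: :desc)
    end
  end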
Hexagonal architecture works differently. It treats the DB as an implementation detail and the boundary as a first-class citizen.

Imagine an interface called "Datastore" that defines every single operation needed to serve the core domain. Its methods might be "QueryUsers", "CreateSubscription", or "FindAllPosts". Next, you create a "PostgresDatastore" that implements the interface. Now your code instantiates an instance of that class, then uses it for anything data related.
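
Here is a minimal Ruby sketch of that idea. The class and method names mirror the hypothetical ones above; the SQL and the use of the pg gem are assumptions, not a prescription.

  # The port: every data operation the core domain needs.
  class Datastore
    def query_users(filters = {})
      raise NotImplementedError
    end

    def create_subscription(user_id:, plan:)
      raise NotImplementedError
    end

    def find_all_posts
      raise NotImplementedError
    end
  end

  # One adapter behind the port. Assumes the pg gem and a PG::Connection.
  class PostgresDatastore < Datastore
    def initialize(connection)
      @connection = connection
    end

    def query_users(filters = {})
      active = filters.fetch(:active, true)
      @connection.exec_params("SELECT * FROM users WHERE active = $1", [active]).to_a
    end

    def create_subscription(user_id:, plan:)
      @connection.exec_params(
        "INSERT INTO subscriptions (user_id, plan) VALUES ($1, $2)",
        [user_id, plan]
      )
    end

    def find_all_posts
      @connection.exec_params("SELECT * FROM posts", []).to_a
    end
  end

The core domain only ever sees a Datastore. Swapping Postgres for something else touches one class, not every caller.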

The big difference here is that every use case is accounted for in the interface. Now all consumer code writes to the interface. More importantly, you, the developer, can change the behavior on both ends of the boundary independently.

Leaning into ports, declared APIs or interfaces, and adapters, the implementation details, creates many downstream benefits.

The first benefit comes through dependency injection. If code requires adapters, then they must be explicitly passed in. So if the code requires access to a database, pass in an argument representing the DB. If the code requires access to the cache, pass in an argument representing the cache.

This makes dependencies explicit instead of hiding them behind static classes or other global state.

Dependency injection makes two crucial things 10x easier: test driven development and behavior driven development.
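
As a sketch, building on the hypothetical Datastore above and adding an equally hypothetical mailer port, constructor injection looks something like this.

  class SignupService
    # Dependencies are passed in explicitly; nothing reaches for globals.
    def initialize(datastore:, mailer:)
      @datastore = datastore
      @mailer = mailer
    end

    def call(email:)
      # create_user and deliver_email are hypothetical port methods.
      user = @datastore.create_user(email: email)
      @mailer.deliver_email(to: email, subject: "Welcome!")
      user
    end
  end

  # Production wiring might pass real adapters:
  #   SignupService.new(datastore: PostgresDatastore.new(conn), mailer: SmtpMailer.new)
  # A test can pass fakes or stubs instead.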

Dependency injection enables you to write tests using mocks and stubs. That means fast and isolated tests. Using dependency injection also acts as a balancing force on complexity and cognitive load. Ask yourself: would you rather test and write code with two dependencies or seven? Those counts are cues for refactoring the code, all contributing to higher cohesion and less coupling. This is one pillar supporting great TDD workflows.

The power multiplies when stepping up to a BDD or integration-level workflow. Given the preconditions of all communication over declared APIs and dependency injection, it's possible to easily control the system-under-test and its environment. Here's an example.

Say the system sends email notifications when a user signs up. The test obviously should not actually send the email, though it does need to verify that the email would be sent as expected. No problem. Use a fake.

A fake is a simple implementation of an adapter, typically used in dev or test environments. Say the API has a method for "deliver email". The fake takes the arguments and adds them to an in-memory array. You pass the fake to the application, then assert on the emails it received.
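
A minimal fake for that hypothetical "deliver email" port might look like this.

  class FakeMailer
    attr_reader :deliveries

    def initialize
      @deliveries = []
    end

    # Same signature as the real adapter, but nothing leaves the process;
    # messages are only recorded in memory for assertions.
    def deliver_email(to:, subject:, body: nil)
      @deliveries << { to: to, subject: subject, body: body }
    end
  end

  # In a test, inject the fake and assert on what was "sent":
  #   mailer = FakeMailer.new
  #   SignupService.new(datastore: fake_datastore, mailer: mailer).call(email: "a@example.com")
  #   mailer.deliveries.size # => 1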

You'll see this approach in many places. A notable example is ActionMailer in Rails, whose test delivery method collects outgoing emails in memory instead of sending them. The approach is not limited to emails. It applies to anything behind a boundary. It could be a database or an entirely separate service. Create a fake implementation, then use that. The tests are decoupled, fast, and isolated.

We can leverage these characteristics across the SDLC and the continuous delivery pipeline. The pipeline begins at the commit stage, so let's start there.

Run all the tests using fake or in-memory adapters in the pre-commit hook. These tests should be fast enough to fit there, even in a trunk-based development workflow. This gives developers fast feedback on the entire system on every commit. Plus, it acts like a jidoka step that prevents pushing known-broken commits upstream to CI.

Then, at the build stage, you can choose how much of the real world to connect to the app. You can run the tests against a real DB or with a fake cache. Generally speaking, I always run with real primary data stores in CI and fakes for any external systems.

You can modify the adapters to fit the environment as code moves through the pipeline. For example, the adapter for "SMS" can capture everything and provide a UI where anyone can see the outgoing messages, never hitting an actual phone number. Or conversely, maybe you do want to send actual messages. Your choice, based on requirements in each environment, all the way up to production where you use the "real" version of all adapters.
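
One way to express that choice is a small factory that wires adapters per environment. Apart from the PostgresDatastore and FakeMailer sketched earlier, every class name here is a hypothetical stand-in for whatever your real and fake adapters are called.

  require "pg"

  def build_adapters(env)
    case env
    when "test"
      # Pre-commit and unit tests: everything fake and in-memory.
      { datastore: InMemoryDatastore.new, mailer: FakeMailer.new, sms: CapturingSmsAdapter.new }
    when "ci"
      # Real primary datastore, fakes for external systems.
      { datastore: PostgresDatastore.new(PG.connect(ENV.fetch("DATABASE_URL"))),
        mailer: FakeMailer.new,
        sms: CapturingSmsAdapter.new }
    when "production"
      # The "real" version of every adapter.
      { datastore: PostgresDatastore.new(PG.connect(ENV.fetch("DATABASE_URL"))),
        mailer: SmtpMailer.new,
        sms: TwilioSmsAdapter.new }
    else
      raise ArgumentError, "unknown environment: #{env}"
    end
  end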

This brings us all the way back to the development environment. Thinking hexagonally shifts the mental model from integrated to isolated. This is especially obvious for frontend apps. Their architecture assumes almost complete reliance on some number of external APIs. So why not leverage that boundary to your benefit?

The same control over the system-under-test and its environment that you get in the test environment applies to the dev environment.

The app can be started against relevant fakes or development-environment specific adapters. For example, the dev environment adapter could automatically include relevant mock data for a variety of common use cases.
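
For example, a development-only adapter for the hypothetical Datastore port could ship with seed data baked in.

  class DevelopmentDatastore < Datastore
    def initialize
      # Canned records covering common use cases, so the app starts with data.
      @users = [{ id: 1, email: "dev@example.com", active: true }]
      @posts = [
        { id: 1, title: "Welcome post", published: true },
        { id: 2, title: "Unpublished draft", published: false }
      ]
    end

    def query_users(filters = {})
      active = filters.fetch(:active, true)
      @users.select { |user| user[:active] == active }
    end

    def create_subscription(user_id:, plan:)
      # Accepted but not persisted anywhere; good enough for local development.
      { user_id: user_id, plan: plan }
    end

    def find_all_posts
      @posts
    end
  end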

Following this approach completely removes the need for increasingly complex and integrated dev environments, all by leveraging the architecture boundaries created by ports and adapters.

There is a time and place for more integration. Thinking in boundaries, ports, and adapters creates the optionality for more or less of it. You cannot have that optionality without them.

Alright, that's all for this batch.

Think about how you can put hexagonal architecture in action.

Tip one. Get Dave Farley's book "Modern Software Engineering". I'm giving away a free copy this month. There's great stuff there that builds on the concepts in this episode. Dave also told a story about leveraging hexagonal architecture in his last appearance on the show.

Tip two is Alistair Cockburn's new book on hexagonal architecture. I have not read it, but I trust the source, so check it out to learn straight from the horse's mouth.

Go to SmallBatches.fm/111 for links to enter the giveaway, to Alistair's book, and to more on hexagonal architecture.

I hope to have you back again for the next episode. Until then, happy shipping.