Engineering Evolved

Engineering leader Tom Barber challenges the default adoption of Kubernetes, sharing why simpler alternatives often serve mid-sized companies better and how to make pragmatic infrastructure decisions.

Episode 12: Why Kubernetes Is Probably Wrong for Your Mid-Sized Company

Key Topics Covered
The Kubernetes Reality Check
  • Why most mid-sized companies don't need Kubernetes complexity
  • The hidden costs: maintenance, YAML management, and developer experience
  • Real-world example from NASA: when impressive engineering doesn't solve business problems
Understanding Kubernetes Context
  • Origins from Google's Borg system designed for massive scale
  • Core benefits: fault tolerance, auto-scaling, declarative infrastructure
  • Why these benefits require significant investment to realize
The Real Downsides
  1. Complexity: Even cloud vendors are building products to hide Kubernetes
  2. YAML Everything: Config management becomes a people and process problem
  3. Cost at Scale: Engineering hours, infrastructure, and mental health costs
  4. Developer Experience: High barrier to entry and friction in feedback loops
  5. Portability Mirage: Cross-cloud deployment still requires deep vendor knowledge
When Kubernetes Makes Sense
  • Genuine scale requirements (dozens/hundreds of services)
  • Multiple teams with dedicated platform engineering capacity
  • Complex deployment patterns that serve real business needs
Practical Alternatives
  • VMs with Docker: Boring is good, boring is maintainable
  • Managed Container Services: ECS/Fargate, Cloud Run, Azure Container Apps
  • Serverless: Lambda, Cloud Functions for event-driven workloads
  • Simple Deployment Scripts: Often cheaper than cluster management
Decision Framework: Do You Actually Need Kubernetes?
  1. What specific problem are you solving?
  2. Do you have dedicated team capacity?
  3. What's your actual scale (services, teams, traffic)?
  4. How frequently do you deploy?
  5. Have you exhausted simpler options?
Resources Mentioned
  • Free Download: "Do You Actually Need Kubernetes?" Checklist (available in show notes)
  • Consulting: Concept Cloud - Pragmatic infrastructure decisions for mid-sized companies
  • Website: www.conceptcloud.com
  • Contact: tom@conceptcloud.com
Next Episode Preview
Episode 13: "Why Your Engineers and Product Managers Still Don't Talk to Each Other (And How to Actually Fix It)"
Engineering Evolved is the podcast for engineering leaders at mid-sized companies who are tired of getting advice that only works for startups or enterprises.
Chapters
  • 0:00 - Introduction: The Kubernetes Controversy
  • 3:00 - A Personal Story: Getting It Wrong at NASA
  • 4:58 - Understanding Kubernetes: Context and Core Benefits
  • 7:07 - The Real Downsides: Complexity, Cost, and Developer Experience
  • 10:49 - When Kubernetes Actually Makes Sense
  • 13:39 - Practical Alternatives to Kubernetes
  • 15:51 - Decision Framework: Do You Actually Need It?
  • 18:36 - Wrap-up and Next Episode Preview

What is Engineering Evolved?

Where business meets innovation and technology drives transformation.
Engineering Evolved is the podcast for leaders navigating the forgotten ground between startup chaos and enterprise bureaucracy. If you're building and scaling teams at organizations in the middle — where startup rules no longer apply and enterprise playbooks are far too large — this show is for you.
Hosted by Tom Barber, each episode explores the real challenges facing today's engineering leaders: scaling systems without breaking them, building high-performing teams, aligning engineering strategy with business goals, and making technical decisions that drive measurable impact.
Whether you're a Director of Engineering, VP of Technology, CTO, or an IC engineer stepping into leadership, you'll find practical insights drawn from real-world experience — not theoretical frameworks that only work on whiteboards.
Topics include:

Scaling engineering teams and systems for growth
Building effective engineering culture
Bridging the gap between technical and business strategy
Leadership tactics that actually work in the messy middle
Making architectural decisions with limited resources
Navigating organizational complexity

Engineering Evolved — guiding today's leaders through the evolution of engineering.
New episodes drop weekly. Subscribe now and join the conversation about what it really takes to lead engineering in the modern era.

Tom:

Tap, tap, tap. Is this thing on? Welcome back to Engineering Evolved. I know I've had a week off, a bit of travel for work and a bit of vacation. So good to have you back.

Tom:

I'm going to say something that might get me, well, uninvited from a few DevOps meetups. For most mid-sized companies, Kubernetes is probably the wrong choice. Not because it's bad technology. It's genuinely impressive engineering that came out of Google's Borg system. But here's the thing.

Tom:

You're not Google and neither am I. And when we pretend otherwise, we create maintenance nightmares, burn out our teams, and spend money solving problems we don't actually have. So today I'm going to help you figure out if you actually need Kubernetes, and spoiler alert: the answer is probably no.

Intro:

Welcome to Engineering Evolved, where business meets innovation and technology drives transformation. Each episode, your host, Tom Barber, explores the challenges and opportunities facing the organizations in the middle: the forgotten ground where startup rules no longer apply and enterprise playbooks are far too large. From scaling systems and leading teams to aligning engineering with business goals, this is where practical insight meets real-world experience.

Intro:

Engineering Evolved, guiding today's leaders through the evolution of engineering.

Tom:

So like I said, welcome to Engineering Evolved, the podcast for engineering leaders at mid-sized companies who are tired of getting advice that only works for startups or enterprises. I'm your host, Tom Barber, and this is episode number 12. If you caught the previous episode on platform engineering for mid-sized companies, you'll know that I'm pretty opinionated about the build versus buy decision for infrastructure. Today, we're drilling into one specific technology that I think has become the default answer to questions nobody's asking: Kubernetes.

Tom:

I saw a LinkedIn post recently where someone was genuinely, and I mean genuinely, arguing that SSHing into a VM to deploy your application was outdated and you should use more "modern tooling", air quotes. And look, the argument that you should adopt technology because it's shiny and new has been raging since the dawn of computing. But when I read through the comments and realized this person wasn't joking, that's when I knew we needed to have this conversation.

Tom:

So today I'm going to give you an honest assessment. Not a vendor pitch, not the resume-driven hype, just a practical look at when Kubernetes makes sense and when it absolutely doesn't for companies your size. But first, a word from our sponsors. Okay. Let me tell you about the time I got this wrong. Not the only time, but the one that fits this story.

Tom:

Back when I was working at NASA, I was part of a team building a data platform. We were processing flight data that had a hard requirement: scientists needed that data in their hands within thirty minutes of it reaching Earth. Not a nice-to-have, a hard SLA. Now, this was a small team, and somewhere along the way we decided to deploy this thing into Kubernetes.

Tom:

A production Kubernetes cluster, all the YAML, all the complexity, the full experience. Was it resume-driven development? No, actually, it wasn't that cynical. We genuinely liked working with shiny new technology, and Kubernetes at the time was interesting. It was challenging.

Tom:

It gave us problems to solve that felt satisfying. But here's the question we never asked ourselves: did we actually need it? We had a small team. We had a specific, well-defined workload.

Tom:

We had a clear SLA, and we chose the most complex container orchestration platform available because we enjoyed it. Now that, I admit in hindsight, is not a good reason. The maintenance burden of running a production Kubernetes cluster, the upgrades, security patches, config drift, the YAML management, none of that was serving our mission. It was overhead. And when team dynamics shifted, when people moved to other projects, the overhead became a real problem.

Tom:

If I could go back, I'd run that workload on a VM with a Docker container, maybe two VMs for redundancy, simple deployment script, all done. So when I talk about Kubernetes being overkill, I'm not speaking from theory. I've lived this mistake more than once. And I've spent time since then helping other teams avoid making the same one. All right.

Tom:

Let's start with some context, because I think understanding where Kubernetes came from helps explain why it might not be right for you. So Kubernetes spun out of Google specifically. It was inspired by Borg, of all names, Google's internal cluster management system that was dealing with, well, Google-scale deployments. We're talking about a system designed to manage containers across tens of thousands of machines, handling workloads for billions of users. If that context doesn't tell you what Kubernetes is good at, nothing will.

Tom:

At its core, Kubernetes takes Docker images and a lot of YAML descriptors to deploy your applications into a cluster of machines. The cluster then manages your containers, your networking, ingress, egress, permissions, the works. And in the right context, this is genuinely powerful. It can provide fault tolerance, auto-scaling, self-healing, declarative infrastructure where you describe the state you want and Kubernetes works to maintain it, rollbacks on failure, consistency across environments.
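
To make that declarative model concrete, here's a minimal sketch of the YAML involved. The service name, image, and numbers are placeholders, not anything from a real system:

```yaml
# A minimal Kubernetes Deployment: you declare the desired state
# (three replicas of this container) and the cluster works to keep it true.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flight-data-api            # hypothetical service name
spec:
  replicas: 3                      # desired state: three running copies
  selector:
    matchLabels:
      app: flight-data-api
  template:
    metadata:
      labels:
        app: flight-data-api
    spec:
      containers:
        - name: api
          image: registry.example.com/flight-data-api:1.4.2   # placeholder image
          ports:
            - containerPort: 8080
```

And that's just the Deployment. A real service typically also needs a Service, an Ingress, a ConfigMap or two, and some RBAC, which is where the "YAML everything" problem discussed later starts.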

Tom:

And that sounds absolutely amazeballs, right? The problem is, most of these benefits require significant investment to actually realize. And for mid-sized companies, the 200 to 1,000 employee range, that investment rarely pays off. I'm talking about working at NASA, with tens of thousands of staff distributed across different sites and people from around the globe interacting with this platform, and I'm still telling you the investment didn't pay off.

Tom:

Here's a question I want you to sit with: what problem are you actually trying to solve? And can you solve it by doing less? If the answer is yes, I can solve this with less complexity, then you should do less, every time. Let's talk about the downsides, because this is where the vendor pitches get quiet.

Tom:

First, complexity. Kubernetes is genuinely, legitimately complex. Sure. Every cloud vendor has their hosted solution, EKS, GKE, AKS. And there are vendors offering support for their Kubernetes variants if you'd rather deploy elsewhere.

Tom:

But that doesn't mean it's easy to wrap your head around. Azure has tried to wrap Kubernetes in Azure App Service to make it easier to deploy apps. AWS just launched EKS Auto Mode to reduce the management burden. They all know it's hard work, even the vendors, and the vendors are building products to hide Kubernetes from you.

Tom:

That should tell you something. Cluster security, cluster upgrades, managing your pods. And if you're self-hosting, your VMs also need patching and remediation. It all requires planning and forethought on top of your actual application work. Second reason: YAML everything.

Tom:

The majority of Kubernetes is driven by YAML in some form. Sometimes simple, often not. But here's the real issue: how do you deal with the change requests, the version control, the config drift, different people applying policies?

Tom:

All of this requires careful management. And honestly, it's not really a technical problem. It's a people and process problem. Do you have the bandwidth to manage it? For most midsize companies, the answer is likely no.

Tom:

Third, cost at smaller scales. This manifests itself in multiple ways. The additional complexity means more engineering hours. You've got more nodes, a control plane. Sure.

Tom:

It may be free from your cloud vendor, but you're paying for it somewhere. More infrastructure for ingress. It all adds up. And I'm not just talking about hosting costs. People jump straight to that number, but when you've got a platform as complex as Kubernetes, what's the human cost, both in money and raw time that could be spent building features, and also in your developers' and infrastructure staff's mental health?

Tom:

Fourth, developer experience. You can, if you're particularly masochistic, run Kubernetes on a local machine, but this still requires the same YAML setup, the same config tweaking, the same maintenance burden. It could be useful for debugging cluster issues, but in practice, it's not developer friendly. It's not fast. It's not easy.

Tom:

You know, there's a high barrier to entry there. Your developers want fast feedback loops. They want to write code, run it, see the results. And Kubernetes 100% adds friction to that loop. Fifth is the portability mirage.

Tom:

And this one gets me, because Kubernetes is supposed to offer universal configuration across cloud providers. In practice, it often doesn't work out this way. You still need a deep understanding of your vendor's networking, ingress options, and available storage types. When you deploy storage, do you need slow disks, cheap disks, high-performance IO, maybe both, maybe all of the things? How do you pick them?

Tom:

How do you configure your setup to use them correctly? Each cloud provider has its own opinions, its own defaults, its own quirks. The promised portability often requires significant work to actually achieve. Now, I've been pretty negative.
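
To put something concrete behind that storage point, here's a hedged sketch of how the same "fast SSD" idea is expressed on two different clouds. The provisioners are the standard CSI drivers; the parameter values are illustrative, not recommendations:

```yaml
# AWS EKS: StorageClass backed by the EBS CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3                 # AWS-specific volume type
---
# GKE: same concept, different provisioner, different knobs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd              # GCP-specific disk type
```

Same kind of object, same goal, but the knowledge you need to fill it in is entirely vendor-specific.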

Tom:

So let me be fair. There are genuine upsides to Kubernetes, and there are situations where it makes sense. Because whilst I've just said negative things about it, the portability is real, just not as seamless as advertised. You can deploy Docker containers into almost any environment: on-prem, any cloud, Raspberry Pis, you can run it anywhere. The core product probably doesn't need changes, and developers can run the same containers locally with Docker Desktop or Podman. And so that flexibility has real value.
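
As a rough illustration of that local-portability point, the same image can be run on a laptop with a tiny Compose file; the image name and port here are hypothetical:

```yaml
# docker-compose.yml -- run the same artifact locally with Docker Desktop or Podman.
services:
  api:
    image: registry.example.com/flight-data-api:1.4.2   # placeholder image
    ports:
      - "8080:8080"
    environment:
      LOG_LEVEL: debug
```

A `docker compose up` (or `podman compose up`) and the container you ship is running locally, no cluster required.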

Tom:

The declarative infrastructure, the thing that makes Kubernetes hard to manage from a process perspective, is also a positive. Once you've described the state you want, Kubernetes works to maintain it. Self-healing, rollbacks on failure, consistency across environments. When it works, it works well. The ecosystem, of course: there's a rich collection of additional services and tools.

Tom:

Argo for application management, Flux CD, which I think has been resurrected from the dead, Prometheus for monitoring out of the box. The ecosystem can both help and hinder, but there's genuine value there, if you need it. Resource isolation.

Tom:

Kubernetes provides multi-tenancy patterns, RBAC roles, network boundaries. These can be hard to configure and maintain, but they do exist. Extensibility, of course, is there: if Kubernetes can't do what you want, the Operator framework and various other API extensions have been standardized for years and allow you to extend it to your heart's content. Deployment patterns: rolling updates, canary releases, blue-green deployments. These are all doable, though.

Tom:

I'd ask: do you actually need blue-green releases, or did you just read about them and think they sounded cool? Most organizations don't need blue-green releases. They just need to schedule the release at a sensible time. Observability hooks: one of the biggest problems with containers is knowing what's happening inside. Kubernetes provides hooks for logs, metrics, and traces, and standards like OpenZipkin have really grown this ecosystem.
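
For reference, the rolling update pattern mentioned above is just a few lines on a Deployment spec, something like this illustrative fragment:

```yaml
# Fragment of a Deployment spec: replace pods gradually rather than all at once.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # allow one extra pod above the desired count during rollout
      maxUnavailable: 0      # never drop below the desired count
```

Canary and blue-green, by contrast, usually need extra tooling or duplicate environments, which is exactly the kind of complexity worth questioning.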

Tom:

So when does this all add up to yes, use Kubernetes? It's when you have genuine scale requirements, when you're running dozens or hundreds of services across multiple teams, when you have dedicated platform engineering capacity to manage it, and when the complexity serves a real business need. For most mid-sized companies, that's not where you are, and pretending otherwise doesn't help anyone. And now a word from our sponsors.

Tom:

Okay. We're back and let's roll onto the alternatives. So if not Kubernetes, then what? And this is the million dollar question. Sometimes quite literally.

Tom:

Start with the simplest thing that could work. If you can solve your problem by spinning up a VM and running a process on it, even if that process sits inside a Docker container for convenience, and I'm not suggesting you don't use Docker containers, then you probably should. VMs are boring. Boring is good. Boring is maintainable.

Tom:

Boring lets you ship features instead of fighting the infrastructure. Then you've got managed container services. AWS has ECS and Fargate. Google has Cloud Run. Azure has App Service and Container Apps. These give you container benefits without the cluster management overhead.

Tom:

You describe what you want to run. They figure out where to run it. And of course there's serverless for appropriate workloads as well: Lambda, Cloud Functions, Azure Functions for event-driven, bursty workloads. Serverless can be dramatically simpler than managing containers yourself.
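
As one hedged example of that "describe the container, let the platform place it" model, this is roughly what a Cloud Run service definition looks like in its Knative-style YAML; the project and image are placeholders:

```yaml
# Cloud Run service (Knative Serving format): no nodes, no control plane to manage.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: flight-data-api                # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/example-project/flight-data-api:1.4.2   # placeholder image
          ports:
            - containerPort: 8080
```

ECS task definitions and Azure Container Apps have their own formats, but the shape is the same: you describe one container and some limits, and the platform handles placement and scaling.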

Tom:

Docker on VMs. This is my default recommendation for most mid-sized companies. You're getting the container portability, reproducible builds, easy local development, but you deploy to a VM you can SSH into, update, and understand, right? A simple deployment script, Ansible or Terraform if you want infrastructure as code, and you're done.

Tom:

So here's a thought experiment: would it cost more to package your app as a Docker container and write a separate deployment script for each cloud provider's basic application deployment than to manage a Kubernetes cluster? In most cases, the answer to that question is no. The deployment scripts are simpler.
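
To show how small that deployment script can be, here's a rough Ansible sketch of the Docker-on-a-VM approach. The host group, image, and ports are hypothetical, and it assumes the community.docker collection is installed:

```yaml
# deploy.yml -- pull the new image and replace the running container on one or two VMs.
- hosts: app_servers              # e.g. two VMs for redundancy
  become: true
  tasks:
    - name: Pull the application image
      community.docker.docker_image:
        name: registry.example.com/flight-data-api
        tag: "1.4.2"
        source: pull

    - name: Run (or replace) the application container
      community.docker.docker_container:
        name: flight-data-api
        image: registry.example.com/flight-data-api:1.4.2
        state: started
        recreate: true            # swap in the new container on each deploy
        restart_policy: always
        published_ports:
          - "80:8080"
```

One `ansible-playbook -i inventory deploy.yml` per release, and there's no control plane to upgrade, patch, or babysit.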

Tom:

The maintenance burden is lower, and you can always migrate to Kubernetes later if you genuinely need it. You're not locked in. You're not cutting off options. You're just choosing appropriate complexity for your current scale. So how do you decide?

Tom:

How do you actually make this decision for your organization? I've put together a checklist that I'm calling the "Do You Actually Need Kubernetes?" checklist, and you can download it from the show notes. But let me walk you through the key questions. First, what problem are you actually solving? Not what technology do you want to use?

Tom:

What actual business problem? If you can't articulate a specific problem that Kubernetes solves better than simpler alternatives, you have your answer. Second, do you have the team? Kubernetes requires ongoing care, upgrades, security patches, configuration management. If you don't have at least one person who can dedicate significant time to cluster operations, you're setting yourself up for trouble.

Tom:

Third, what's your scale? If you're running fewer than 10 services, if you have a single team managing deployments, if your traffic patterns are predictable, you probably don't need Kubernetes. Fourth, what's your deployment frequency? If you're deploying once a week or less, the sophisticated deployment patterns Kubernetes enables aren't buying you much. Fifth, have you exhausted simpler options?

Tom:

Can you use managed services? Can you run on VMs? Can you use your cloud provider's container platform without managing the orchestration layer yourself? If the answer is no to most of these and you're still considering Kubernetes, I'd ask you to honestly examine why. Is it solving a real problem, or does it just feel like what a real engineering team should be using?

Tom:

So look, I know there's a lot to process and every organization is different. The right infrastructure choice depends on your specific context, your team, your scale, your constraints, and your goals. If you're wrestling with this decision, whether to adopt Kubernetes, how to simplify your existing infrastructure, or how to modernize without creating new maintenance nightmares, this is exactly what I help companies figure out through my consultancy, Concept Cloud. I work with engineering leaders at mid-sized companies to cut through the hype and make pragmatic infrastructure decisions. No vendor agenda, no resume-driven recommendations, just practical guidance based on what actually works.

Tom:

If that sounds valuable, head over to www.concept2cloud.com and let's have that conversation about your specific situation. And don't forget to download the "Do You Actually Need Kubernetes?" checklist from the show notes. It's a simple decision framework you can use with your team to have an honest conversation about whether Kubernetes is right for you. So there we go. If you found this episode valuable, I'd really appreciate it.

Tom:

If you'd share it with another engineering leader who might be facing the same decision. And if you have a Kubernetes story, whether it's a success or a cautionary tale, I'd love to hear it. Find me on LinkedIn or send me an email at tom@concept2cloud.com. Your experiences help shape future episodes. Next week on Engineering Evolved, we're shifting gears to talk about something every engineering leader struggles with: why your engineers and product managers still don't talk to each other, and how to actually fix it.

Tom:

It's episode 13, and I've got some strong opinions about the rituals that work and the ones that don't. Thanks for tuning in. I'll see you then.

Intro:

Thanks for listening to Engineering Evolved with Tom Barber, where ideas meet innovation and leadership drives change. If you enjoyed today's episode, please leave a rating and review wherever you listen. It helps more leaders discover the show and keeps the evolution moving forward.