Kube Cuddle

In this episode Rich speaks with one of the creators of the Kubernetes project, Joe Beda.

Topics include: KubeCon NA 2022, the Kubernetes Documentary, how the Docker image changed everything, designing early Kubernetes, what Kelsey Hightower brought to the community, founding Heptio and the VMware acquisition, and what Joe would do differently designing Kubernetes today.

Show Notes

Joe’s Twitter | Joe's Mastodon
Rich's Twitter | Rich's Mastodon
Kube Cuddle Twitter

Links:

Developers, developers, developers
The Kubernetes Documentary: Part 1 | Part 2
Brendan Burns | Craig McLuckie
Dark Side of the Ring
LXC | BSD Jails | Solaris Zones
Tim Hockin | lmctfy
Docker in dev vs prod meme 
Joe’s slides from his 2014 Gluecon talk
Mesos
Kelsey’s Tetris talk (a later version than the one I saw)
go fmt | Rubocop
Bryan Liles | Naadir Jeewa | Kris Nova
TGIK
kubectl apply and the 3 way diff
SPIFFE
Leigh Capili’s talk on auth and RBAC

Bonus link: Joe sent me this on Twitter after the interview, some notes he wrote on what a production stack should look like, from 2015. 

Listener questions from Bill Mulligan, Bryan Liles, Thomas Güttler, Ross Kukulinski, and Saim Safdar. Thank you!

Episode Transcript

Logo by the amazing Emily Griffin.
Music by Monplaisir.

Thanks for listening.

★ Support this podcast on Patreon ★

What is Kube Cuddle?

A podcast about Kubernetes, and the people who build and use it.

Rich: Welcome to Kube Cuddle, a podcast about Kubernetes and the people who build and use it. I'm your host, Rich Burroughs. Today I have a very special guest, one of the creators of the Kubernetes project, Joe Beda. Welcome, Joe.

Joe: Thank you so much for having me on. I'm excited to be here.

Rich: Uh, I, I wanna thank you for coming on. I've honestly wanted to ask you to be on the show for quite a while, but I guess I had some imposter syndrome or something and, and hadn't reached out. And when I did, you were very quick to say yes and very gracious. So, uh, thanks a lot for coming on.

Joe: Well, yeah, I, you know, I, I, uh, one of the things that I love about the community is that ability to be connected to folks and, um, there's so many times where folks are sort of, you know, behind these walls and, and, my entire career I've been breaking down those walls, um, even when I was early days at Microsoft, working on platforms there. And so, yeah, I, I really enjoy doing stuff like this when I can really, really connect with folks.

Rich: Fantastic. Yeah. I mean, to me this, uh, podcast really is about the community, you know, and trying to get to know the people who are, who are doing things that help us all. So, uh, you, you are definitely part of that group. So, I, uh, usually do listener questions at the end, but, um, I had one that I wanted to get to right off the bat.

Um, your Twitter profile says that you're semi-retired and Bill Mulligan asked, um, if he's only semi-retired, what is he doing with the not retired part?

Joe: I don't know yet, so I'm still, you know, so for those who don't follow my every move, um, you know, my history is we, you know, started Kubernetes at Google. I took some time off then to explore what was next and ended up starting a company called Heptio. We, we were in business for a couple of years and then, um, got an offer to join VMware. So we ended up selling the company there. And I just recently left VMware. So I'm a, I'm a couple of, couple of months out, um still figuring out what it means to me. Been spending a lot of time with the kids and my, my parents are aging too. And so, um, it's been good to have less distractions as I do that.

And the truth of it is, is that, you know, I was pretty burned out when I left VMware. And so for those first couple of months, I honestly couldn't look at a computer, and like, you know, I'd watch TV on my iPad. But the reality is, is that, you know, I went from spending, you know, eight plus hours a day here in my office at home to being like, you know, the plants are dying because I'm not watering them, cuz I'm not down here.

Um, I'm starting to get past that, and I think, you know, going to KubeCon in Detroit was really great because it reconnected me with the community. Saw some of what folks are doing, you know, saw some places where I can start some conversations. And I don't know why I'm doing it yet and I don't know what it's gonna turn into, but there's a certain sort of satisfaction in being able to chase things that I find interesting, even if I don't understand how that might relate to a business or where it might lead.

And so, you know, I think right now I'm just kind of following my nose and figuring out what's interesting to me.

Rich: Well, the people that I've talked to, I've had conversations with a few friends where it came up that you're, um, retired or semi-retired, and we all have been in agreement that you deserve some time off because, um, you've obviously worked really hard and contributed a ton to, um, how we all do our jobs, you know?

Um, I'm one of those people who's in a situation where I'm working for a Kubernetes vendor, you know. My job maybe wouldn't exist if it wasn't for the work you all did, you know, back in 2014, 2015, you know, back then. So, um, we all, I think, owe you and Brendan and Craig and everyone else involved, like, a lot of thanks.

Um, you mentioned KubeCon, um, what was your feeling, being there as somebody who's not like working at a Kubernetes vendor?

Joe: You know, I have complicated feelings about KubeCon and every time I go to one, it's honestly an emotional rollercoaster for me. Um, I tend to find that I get the most out of, you know, the first couple of days before KubeCon and so especially the contributor summit there is really a great opportunity to reconnect with folks, see old friends and, and also like look around and go like, I don't know a lot of these people.

And that's actually awesome. What that means is that there's still new folks that feel a sense of ownership around the project and are taking up leadership positions and stuff. So for me that's really invigorating to be able to do that. Um, on the other hand, you know, you walk the show floor and it's, you know, it's a vendor fest and that's not the stuff that draws me to it.

I think it's, it's a necessary part of being able to support the larger community. But there's always sort of that product versus project push pull that I, I think, you know, we all sort of deal with. I did enjoy not having, like, a lot of times for somebody like me, I go to KubeCon, I get scheduled with meetings with customers and, you know, potential customers and, you know, working the booth.

And so you're taking on responsibilities both on the commercial side and on the community side. Um, not having to do that stuff was nice. It made it a more enjoyable and relaxing experience. And then finally, like, you know, there's always a challenge when you see people, like, say at keynotes or talks, and they're talking about something and you wanna be like, you're wrong, that's not the way it should work, right? And I think, you know, as engineers, we all look at this and like, somebody's wrong on the internet, I must correct it. I think one of the challenges, again, around sort of supporting this community is recognizing that not everybody's gonna do it the way I would've.

And that's,

Rich: Yeah.

Joe: So, you know, you thanked me and Brendan and Craig for starting this, but the reality is, this thing has taken on a life of its own. And, and I think part of making it successful is, you know, letting it launch, letting people make mistakes even if, uh, maybe they're not mistakes. 'Cause you know, I'm not omniscient. So that's always a real challenge also, you know, at, at KubeCon to, to sit there in the audience and keep my mouth shut.

Rich: Yeah, I totally understand. I mean, I think that like, that's an important part of leadership, right? To be able to step back and let people, you know, have some agency and do things the way they want to do them. Because if you do speak up, you know, you as one of the people who's seen as a creator of Kubernetes, if you come out and say, Hey, this is all shit, you know, um, you're gonna stir up a, a real big mess.

Joe: Yeah. So I'm always really careful to, you know, to have opinions, but, and then also be, you know, recognize that. You know, I've been wrong before. I'll be wrong again. And part of the, the strength of these communities is that there's so much room for people to try stuff out. Even if I personally think it's ill advised, you know, that's, that's actually a good thing that folks can do that.

Rich: Yeah. Yeah. Agreed. Completely. Yeah. I think that one of the things that's been bothering me, uh, like I love the CNCF and like, uh, you know, thank you all for what you do, but I do have one big complaint about, um, something that's happened over the last few years, which is that the sponsored keynotes are no longer labeled clearly in the schedule. And that really bothers me. Like we used to have that transparency about who was paying for a slot to get up on the stage and speak, and, and that's gone. And you can usually kind of guess, you know, but, um, but I don't like it.

Joe: Yeah, that's something that I, you know, I've brought up with the CNCF again and again, and, you know, the point gets across, but then somehow it gets lost again. And so, um, you know, actually understanding where that line between, like I said, project and product, or community and commercial, sits is, you know, always a difficult and subtle thing in this world.

Um, yeah, another thing that I had a hard time with at this KubeCon is that, you know, the, the conference started on Wednesday officially. But on Monday and Tuesday, there were all these mini conferences and these things, like I paid, you know, since I was going as myself, I got the early bird. I think I paid $500 to, to attend KubeCon.

Um, it would've cost me another $500 to go to one of these mini conferences. Right. And so it kind of, you know, the, the way that that stuff was run kind of fractured some of the opportunities in the community there. And, uh, and also wasn't as, uh, I think, you know, there's some cases where the, the, the project versus product lines were even blurrier in some of those mini conferences, um, in a, in a way that I didn't appreciate. I've given that feedback to the CNCF. I know they're, they're listening and, you know, always trying to improve, but you know, there's, there's always that push pull there that I think we deal with.

Rich: Yeah, absolutely. This was actually my first time going to any of those, those pre events. And I went to the eBPF day, um, and it was pretty fun. And it wasn't just all one vendor, right? Like there were even multiple vendors involved who were giving talks and stuff. And so I thought they did a decent job of balancing that out, but I totally could see how that could, you know, go the wrong way.

Joe: Yeah, and I think it's pretty uneven with some of these things, how much of it is sort of focused on a single vendor versus how much of it is more community driven. Yeah.

Rich: Um, so you mentioned a little bit about your kind of path to, to get to, um, Google. Um, I wonder, I usually start off asking people, um, about their background, like how they got into computers and all of that. I, I have a whole lot of stuff I wanna get to with you, so I wanna see if maybe we could do like a, brief version of that?

Joe: I'll, I'll summarize my career and sort of how it sort of led to Kubernetes.

As succinctly as I can. Um, my father was a computer programmer. He worked, you know, on IBM mainframes, and so I always grew up with computers in the house. I went to school at Harvey Mudd in, in California, thinking I wanted to be, uh, an engineer, like maybe an electrical or mechanical engineer.

But, you know, I, I knew the computer stuff so well. I think I just got lazy and sort of defaulted into it. Did a couple of internships at Microsoft and then joined Microsoft, uh, outta college, working on IE. And I think that that had a real impact on me in terms of my career because, you know, even back then, and, you know, and, and even through the Ballmer years, I think there was a true sense that, you know, Microsoft understood platforms and developers in ways that few other companies have.

It really is in the DNA, it's in the, it's in the air at Microsoft. And,

uh,

Rich: Developers, developers, developers.

Joe: Yeah. And this is the thing: he wasn't wrong, right? Um, you know, he, he was jumping around, Ballmer, but like, he wasn't wrong. And so that definitely had an influence on me. And then, you know, I left Microsoft to join Google, worked on everything from Google Talk to Ads, to telephone systems, and then cloud stuff.

And, you know, through that I always brought this sort of platform type of thinking to it, even at times when Google was kind of allergic to platforms. Um, we viewed the early Google Talk stuff, basing it on XMPP, all of that was really thinking about enabling ecosystems and platforms. It didn't pan out like that, but like we tried to at least have the underpinnings there.

Um, and then that's why cloud was a natural for me also. And so I started the, the Google Compute Engine project, and at the time, Google thought cloud was a bad business. And so there was a certain amount of, you know, having a sense that this was something that had huge opportunities and then being able to be out in front of that, um, similar type of thinking led to, to Kubernetes while I was at Google.

Um, and so that platform thinking is something that I think permeates my career.

Rich: That's really interesting. Um, I'm behind the times, but I just managed the other night to watch the fantastic Kubernetes documentary.

Joe: Yeah.

Rich: Um, so if there are people who haven't seen that, I will put some links into the show notes. It's a two part thing. It's like an hour total. Very much worth watching.

And it really got me very nostalgic for that period of time because, um, I was, you know, in the industry then, I'd been doing operations stuff for many years. Um, I was working in roles where I was kind of embedded with developers, almost like an SRE would be nowadays and doing like deployments and app configurations and, and troubleshooting problems.

And, um, there's a thing that Kelsey says that that really rings true with me, which is that, um, we were the schedulers. Like we had the spreadsheet of like, which service ran on what, you know, host and, and uh, knew all that stuff. And, and I think that's maybe one of the reasons why I connected with Kubernetes so much because it almost seemed to be designed for someone who was like, specifically in the role that I was in.

Joe: I mean, people are talking about platform engineering teams and all that now, and that was very much the thinking behind Kubernetes: to provide that system, those upleveled, you know, power tools for folks who were playing those types of ops roles. Um, another thing Kelsey said, I don't know if it showed up in the documentary, was that, uh, DevOps is group therapy for big companies.

Rich: I've not heard that

Joe: But it's great because what it really goes to show is that, you know, there's a technical side to this, but like, so much of this is really about how do we enable new patterns for teams and different disciplines to work together. And that was definitely something that I think was, was, you know, top of mind or we were aware of.

I mean, one point on that, on that, um, documentary, I thought it was excellent. I'm glad that they were able to include as many voices as they did. When they first pitched it, it was just gonna be me and Brendan and Craig. Um, and we were all like, oh no, this is so much bigger than the three of us. Even so, you know, I know there's a lot of folks out there that felt like there were, there were aspects and sides to the story that totally got missed.

And I, you know, it makes me sad because I wish we could tell everybody's story, everybody's involvement, but, um, you know, for squeezing what they could into an hour, I think they did a really good job.

Rich: Yeah, I mean, I think especially someone coming from the outside, right? Like, you're never gonna learn all the stories and be able to tell things perfectly. Um, there's this show that I really enjoy about professional wrestling called Dark Side of the Ring, where they tell, they tell all these stories about crazy stuff that happened in professional wrestling, you know, and it's the same thing.

You hear people all the time complaining that, oh, they got this wrong or that wrong, but it's like, they're never gonna know everything. Um, I can recommend it if you wanna check it out sometime. Um, also, if any of those folks are listening, um, who feel like their stories weren't told, I'd love to have you on and chat about it, so feel free to hit me up.

Joe: Yeah, I think there's definitely podcasts like this are a chance to really sort of, you know, dig that next level and, and uh, and, and, you know, tell other sides to the story.

Rich: Yeah, absolutely. Um, so let's step back to like 2013. Um, uh, Docker shows up. Um, you all had been doing containers at, at Google for how long by that point?

Joe: Um, oh gosh. I joined Google in 2004, and, uh, Borg had existed and they were in the process of moving stuff to Borg, and, you know, the original Google Talk servers stretched Borg in some interesting ways. There were still, you know, places, like say Search and, you know, Gmail, that didn't run on shared Borg infrastructure and stuff like that for quite a while.

But, like, you know, it had already been going on for, you know, probably almost 10 years at that point when Docker came on. But you gotta understand, the containers at Google were not what we mean when we say containers today.

Um, Docker containers, there's a lot of similarities, but I, I don't want to downplay the genius of what Solomon and Docker did in terms of being able to, to create some, some abstractions that made this stuff that much more approachable. And you look at things like, you know, um, you know, BSD jails or LXC or, or whatever.

Rich: Yeah, we used Solaris Zones where I worked.

Joe: Yeah, it uses the same underlying, you know, ideas and technology.

But I think the fascinating thing that came out of Docker, and I think, you know, we recognized this immediately and I think it's pretty obvious now, it wasn't necessarily Docker itself, it was the Docker image. This idea that you could have this packaged up artifact that you could then run in a bunch of different places. That didn't exist in the same form at Google, and it didn't exist in things like Zones or LXC or BSD Jails. And so that I think is, you know, a key part of the experience and innovation there that I think a lot of folks gloss over when they say, well, we've been doing this forever. So, you know, Google was doing it for a long time. But, like, there were some pleasant surprises as we saw Docker come on the scene with this stuff.

Rich: Yeah. I think on the Linux side, like, you know, obviously those cap... some of those capabilities were in the kernel already, but it wasn't like there was an easy way to leverage those things, right? People really weren't doing that until Docker came out, for the most part.

Joe: Yeah, so I mean, Google was putting a lot of those things in the kernel and working to upstream 'em. And you know, if you talk to somebody like Tim Hockin, um, he was involved with this early project called, uh, Let Me Contain That For You, lmctfy.

Not a great name. Um, but that was really about, that was a project trying to show the rest of the Linux community how Google was using these features in the kernel because Google was adding these features and people were like, what the heck are you guys doing? Like, how do you use these things? Um, and so that project fell by the wayside as Docker became popular because then it was obvious how these features were being used.

But, but at the time, um, you know, there were already efforts to try and, you know, have folks understand how Google was using these technologies.

Rich: I really haven't seen many other tools show up that have had the impact, in almost an immediate way, that Docker did, just in terms of the excitement that people had about it and the ways that it made them see what we were doing differently, you know. I feel like, you know, I've described it before as, you know, crack for developers, and I feel like it really was, you know. Everybody was so excited, and, um, when I talk to people about developer experience, it's like one of the canonical examples for me.

Joe: But I think it also goes to show a little bit the sort of myopia, or sort of like specialization, that we have in our industry. Because I think, you know, you look at something like Node.js or React or, you know, PyTorch, you know, there's, in these other domains, whether it be front end or whether it be ML, there's these bombshell projects also that have huge impact.

It just goes to show that like for sort of this infrastructure sort of niche that we find ourselves in, you know, Docker is definitely a standout.

Rich: But even then it was like, um, it got people so excited. And, um, the shop that I was at, we didn't end up using it. I moved on to another shop not too long after, and the developers there all wanted to use it. And I was the guy saying, hey, hey, wait a second, you know. Cuz, um, have you seen that old kind of meme thing where it's a picture of a whiteboard, and it's like Docker on your laptop, you know, and then Docker in production? And there's like a list of three things on the Docker on your laptop side, and then on the production side there's security and storage and all these other considerations that you don't have. And that was my reaction as an ops person, like, there is so much that we need to do to make this thing production ready.

Joe: I think, you know, um, that was where I think a lot of the opportunity around Kubernetes actually came into play. Um, you know, I think that, looking at sort of the point of view that we took with Kubernetes versus the point of view that the Docker folks ended up taking with Swarm, there was a philosophical sort of disagreement there that I think sat at the root of that.

Um, we viewed Kubernetes, like, you know, my take on it was, you go from one machine to many machines, and you have to introduce a whole bunch of other concepts and considerations to be able to do it, right? Whereas I think the idea with Swarm is they wanted to make a bunch of machines just look like one big machine, where folks didn't have to worry about it. Which sounds great for developers, but once you start taking into account all these other considerations, you need new concepts, right? And just the idea of a volume, you know, a persistent volume in Kubernetes, broken out as a first class citizen. Obvious to us while we were making Kubernetes. It took a long, long, long time for Docker to get there and make disks, or volumes, be first class citizens with Docker.

Right? It's there now, but, but it took a long time for them to get there. And I think that's an example of, of where I think, you know, our, our approach with Kubernetes was more grounded in the experiences of what it took to run a production system at Google. And the Red Hat folks early on also brought in a whole bunch of perspective too, for sure.
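For readers following along, here's a minimal sketch of what "volumes as first class citizens" means in Kubernetes, written with the Go types from k8s.io/api that the project itself uses. The names are hypothetical, and a real claim would also request a storage size; the point is that the claim is its own API object with its own lifecycle, which pods then reference by name.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A PersistentVolumeClaim is a first-class object: storage has its own
	// lifecycle, independent of any one pod or container. (A real claim
	// would also set a storage size request in Spec.Resources.)
	claim := corev1.PersistentVolumeClaim{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "PersistentVolumeClaim"},
		ObjectMeta: metav1.ObjectMeta{Name: "app-data"}, // hypothetical name
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
		},
	}

	// Pods then reference the claim by name; the scheduler and kubelet do
	// the work of attaching the underlying disk wherever the pod lands.
	volume := corev1.Volume{
		Name: "data",
		VolumeSource: corev1.VolumeSource{
			PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
				ClaimName: claim.Name,
			},
		},
	}

	out, _ := json.MarshalIndent(map[string]any{"claim": claim, "volume": volume}, "", "  ")
	fmt.Println(string(out))
}
```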

Rich: No, that totally makes sense. And I think that, like, you know, when I'm talking about them nailing the user experience, that persona for them was the developer, right? And for you it was more me, right? It was more that person who was trying to deploy workloads and make sure that they're running, and all of that.

Joe: I had a Twitter rant on this a little while ago where, um, I don't like the over rotation and focus on developers in this world. Um, I much prefer to talk about application teams, because that actually talks about both developers and operation folks and all the other disciplines that go in to actually make an application or a service successful.

And, and it's not just about what is the experience inside your IDE or at the command line while you're writing code. It's about, you know, that full sort of life cycle of how do you create projects, how do you deploy them, how do you secure them, how do you debug them? And so much of that stuff goes beyond sort of like, you know, a developer in their IDE, which is I think what a lot of folks think when you talk about developer experience.

Rich: Yeah, that's, that's really interesting. Um, I think that there's a, a lot of focus on velocity, especially, you know, and, and you obviously want that, and you want people to be happy in their jobs, but, but that goes for the, the ops folks too.

Joe: Yeah.

Rich: So I heard about Kubernetes in 2015, but um, things started before that. The documentary talks about, I think at one point it mentions that, that Brendan had sort of written some scripts to take, like a stab at like a first sort of prototype of what this platform thing might be. Um, were you involved at that point or did, did you get pulled into it later?

Joe: Yeah. I mean, me and Brendan and, and Craig, you know, we were all working together and we were trying to figure out, you know, I mean, looking at sort of Google Cloud, at that point it was clear that GCE was necessary, but not sufficient to make Google successful in this space. And the, the lock that AWS had on the industry was stronger than it is now.

We have viable competitors here, you know, now. Um, but at that point it was pretty insurmountable. And so our, our overarching idea was like, how do we, how do we sort of shake things up so that Google has a fighting chance? Um, cuz if we just go toe to toe, VM to VM with Amazon, there was a sense of like, we're gonna grind it out and there, it's a long road to being successful. But if we can change the way people are thinking about cloud in some ways, and maybe do it in a way that like builds on some of Google's experiences and strengths, that's something that, that opens up commercial opportunities for us.

And uh, and then doing it as open source was natural because if it was just something that worked on Google, nobody would've cared. And so, um, so, you know, there were a lot of efforts both in terms of trying to understand where to put our efforts and why. And then also just playing with the technology. So I think there was a general feeling between, me and Brendan that, you know, I used to say that Docker is like sort of, you know, half a kube... half a borglet, right?

The, the thing that sits on the node, right? And some of our first efforts were essentially doing declarative APIs on top of Docker. Um, there's some code out there that Tim Hockin wrote in Python that I talked about at Gluecon the year after Docker was announced, you know, um, which is essentially starting to build up the rest of that, you know, what became the kubelet.

How do we actually have an agent that, you know, integrates with higher order systems? Um, so some of that work definitely started out early. Uh, Brendan wrote some of the first stuff in Java, and you know, as we sort of went forward, I'm like, we gotta do this stuff in Go. We wanna have other people writing it, and Java's not the answer there for that community.
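As an illustration of that "declarative API on top of Docker" idea, here's a sketch (not the historical code, which was in Python and Java): declare what should be running, observe what is running, and converge one toward the other. That loop is the essential shape of what became the kubelet; all names below are hypothetical.

```go
package main

import (
	"fmt"
	"time"
)

// desired is the declarative spec: container name -> image.
var desired = map[string]string{
	"web":   "nginx:1.25",
	"cache": "redis:7",
}

// running stands in for what a real agent would read back from the
// container runtime.
var running = map[string]string{}

func reconcile() {
	// Start (or replace) anything declared but not running as specified.
	for name, image := range desired {
		if running[name] != image {
			fmt.Printf("start %s (%s)\n", name, image) // real code: runtime API call
			running[name] = image
		}
	}
	// Stop anything running that is no longer declared.
	for name := range running {
		if _, ok := desired[name]; !ok {
			fmt.Printf("stop %s\n", name)
			delete(running, name)
		}
	}
}

func main() {
	// Level-triggered: periodically re-run the comparison rather than
	// reacting to individual events.
	for i := 0; i < 3; i++ {
		reconcile()
		time.Sleep(100 * time.Millisecond)
	}
}
```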

Rich: Go was still pretty early at that point, right?

Joe: Um, I wanna say that we thought super deep about that stuff, but the reality is, is that, you know, we didn't know what it was gonna turn into, but we definitely saw the energy of the people contributing to Docker and definitely felt like Go, was part of the, part of the magic there that made that happen.

Rich: Yeah.

Joe: Um, you know, and my take on it is I looked at, at, at, you know, at Mesos at the time also, and I'm like, Mesos is all C++ typically. And so, you know, you think about sort of like, what is the typical Java developer? How do they approach things? C++ is so unapproachable for folks who, you know, haven't studied the priesthood, right?

Uh, apologies to Matt Klein and the Envoy folks, but like, but like Go, you know, was both a capable and approachable language that I think really created a welcoming way for folks to get involved. Now, looking back on it, I think there's ways that, you know, Kubernetes used Go that maybe made things more complicated than they need to be.

Mistakes were made. And I wonder, you know, if Rust were where it's at now, whether we would've chosen a different thing. Rust still has a little bit of that "you can be too clever and write code that nobody can read" to it. Um, it's, you know, harder to do that in Go, but I think Go was part of the magic that made Kubernetes take off.

Rich: Yeah. So I heard about Kubernetes in 2015. Um, this was after the 1.0 release. Um, Kelsey was speaking at a small, like tech conference here in Portland. It was probably like a hundred people or something, right? And he gave the talk, you might have seen it or heard about it at some point where he was playing Tetris. And, and that was sort of his analogy for this new world, right?

Where like, you as somebody who's operating these applications, you aren't worried about which server things are running on anymore, right? It's just a bunch of compute and a bunch of memory. These things are just resources and, and the scheduler's gonna just take care of all the magic.

Is that, um, when you were building the initial, like Kubernetes, how, how much did you think about these, you know, which of these pieces you want to include and, and how to look at this platform?

Joe: I mean, so one of the lessons that I took away from building GCE was that the API's the thing. And so our focus early on with Kubernetes was making sure that we had the right concepts so that we could express a lot of the deployment patterns that we wanted to express. And then, you know, what is the minimal set of capabilities that we needed to be able to include in the system to be able to do that.

And so a great example there is, you know, pragmatically, like, pods move around all over the place. Or not, pods don't move, but things get scheduled into pods and they get killed and restarted. Like all that stuff with the, uh, the replica set, which was called replication controller in the early days. Um, what that means is that IP addresses are gonna change a heck of a lot faster than they do anywhere else today.

And then you look at Java: at the time, it would resolve DNS and then never re-resolve it. It didn't respect TTLs by default.

Rich: Yeah. Yeah.

Joe: And so that's a nasty combination, right? Cuz now what you say is, like, you use Java and Kubernetes, something gets killed and restarted someplace else. Even if DNS updates, Java is never gonna notice that. Right. That led to the service and cluster IPs, or service IPs, right, where, it was like, hey, we need a stable IP address for a set of pods that are super dynamic. And so bridging that dynamic world to the static world in as smart a way as we could was a key piece of deciding what was important.
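Here's a sketch of the primitive that fell out of that reasoning, using the Go types from k8s.io/api (the app name and ports are made up). The Service's cluster IP and DNS name stay fixed while the pods matching the selector churn, which is exactly what insulates a DNS-caching client like an old JVM.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Clients talk to the Service's stable virtual IP (or its DNS name);
	// kube-proxy forwards to whichever pods currently match the selector.
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "web"}, // hypothetical name
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "web"},
			Ports: []corev1.ServicePort{{
				Port:       80,                   // stable port on the service IP
				TargetPort: intstr.FromInt(8080), // port the pods actually listen on
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```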

And then it was very much driven from, you know, providing that experience early on. There was a lot of shade thrown around at the early versions of Kubernetes back then. We didn't target scaling beyond, say, like a hundred nodes.

And that was a conscious choice cuz we knew we could solve that eventually. But the thinking was, let's make sure we get the experience right because nobody's gonna be using this past a hundred nodes early on anyway.

Um, but, you know, but that led to, you know, a lot of people throwing shade. And so we eventually decided to focus on that. And that led to the first SIG uh, because we, we sort of forked off a set of folks to start looking at that, and that ended up becoming SIG Scalability.

And that idea of being able to sort of carve off groups of people to focus on something led to sort of the way the project is organized now around SIGs.

Rich: Wow, I didn't know that. Um, one of the things that leapt out to me personally was, uh, like the liveness and the readiness probes. You know, I think, I think the service primitive was the thing that really kind of spoke to me, you know, again, because of that sort of role that I had and, and you know, I had been in this shop where we had figured out sort of how to do health probes with our Cisco switches, right?

And so we had, um, our apps would, you know, send requests to the Cisco switch and say, take me out of service now, you know, I'm shutting down. And we had taken all this time to get that implemented, but then somebody would show up and they'd wanna write something in another language, and they wouldn't use the libraries that we already had, you know, and it was always like reinventing the wheel.

And, and the thing that stuck out to me was that we no longer were in that situation where we have to like litigate, you know, how these things should work or, or remind people or any of that because it's just all part of the platform, right. And there's no room to disagree about how it works.

Joe: I think there's three lessons from that. I think, you know, first off is, um, anytime you have something that's language specific, you're gonna have a long tail of languages and it gets very difficult. I think there's lessons in things like OpenTelemetry, as people banding together to be able to say, like, you know, there's gonna be this slog to add telemetry capabilities to every language, let's all do it once instead of each vendor doing their own thing. Um, so that's definitely one lesson out of that.

Uh, I think the, you know, oh man, I do this. I have three things and then they escape me after I,

Rich: Um,

Joe: But I think that there's, there's definitely a, um, oh man, now it's escaping me. It'll come to me.

Rich: okay.

Joe: I usually take notes. I forgot to write stuff down as

Rich: No, no. Totally fine. I think it's good that people understand that you're not perfect too.

Joe: Oh, I know, I know what I wanted to say. So, so one other aspect there is that right now, when you write a program, what are the systems that that thing interacts with, right? And that's been, um, the underlying syscalls on the operating system. Um, one of the interesting things that you see with cloud, whether it be VMs or whether it be, uh, Kubernetes, is that we're starting to see that there's a whole set of syscalls that are off machine that actually help you to integrate with other systems, right?

So if you're writing something on EC2, you can easily hit the metadata server, get credentials, and then talk to S3, and it's all very smooth. And so in some ways, S3 is another syscall that you can actually go ahead and talk to. So the set of tools that programs have to play with, the primitives that the programs can depend upon, fundamentally changes.

And so the point of view when you look at liveness and readiness probes is it's this idea of like, hey, if you sort of configure this, this path for something to hit, or this liveness or readiness probe, that's another sort of syscall or sort of connection between a program and the system. But the system is no longer the kernel, it's now this larger environment that includes things like Kubernetes or cloud services or what have you.
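A concrete sketch of that contract, again with the k8s.io/api Go types (paths, ports, and image are made up; the embedded ProbeHandler field was named Handler in API packages before v0.22): the workload just serves two HTTP endpoints, and the surrounding system, the kubelet, calls them to decide on restarts and traffic.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "app",
		Image: "registry.example.com/app:1.0", // hypothetical image
		// Liveness failing means "restart me"; readiness failing means
		// "stop sending me traffic". Both are calls *into* the workload
		// from the surrounding system, not the other way around.
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
			},
			PeriodSeconds: 10,
		},
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/ready", Port: intstr.FromInt(8080)},
			},
			InitialDelaySeconds: 3,
		},
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}
```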

Um, and the ultimate extreme of this is something like Lambda, where, you know, you have all sorts of things; you can't do anything interesting with a Lambda unless you're actually calling out to these sort of cloud syscalls to some degree. And then the last thing is, like, I think that point around, um, conventions being super powerful. And I think this is one of the things that I enjoy when I program in Go, is "go fmt", right?

Nobody argues about it. You know, I'm a spaces person, but "go fmt" does tabs. I'm like, oh, well, you know, that's just the way it is. And so,

you know, oftentimes, and again, there's, like, you know, a dozen ways to solve a problem, and there may be subtle pros and cons to those things, but deciding on the one way to solve the problem actually brings so many efficiencies that it far outweighs the individual pros and cons.

And I think that speaks to Kubernetes in general, where it's like, Kubernetes does a lot of stuff. You may disagree with how it does it, but having something that has sort of those off the shelf patterns, so that you're not reinventing the wheel and mixing and matching your own solutions, I think really elevates everybody in terms of having that common language and that common way of doing things.
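A tiny, hypothetical before-and-after to make the "go fmt" point concrete: whichever spelling you type, gofmt rewrites it into the single canonical form, so there's nothing left to argue about in review.

```go
// Before formatting, this compiled but read poorly:
//
//	func add(a,b int)int{return a+b}
//
// After running gofmt -w main.go, the canonical form:
package main

import "fmt"

func add(a, b int) int { return a + b }

func main() {
	fmt.Println(add(2, 3)) // prints 5
}
```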

Rich: Yeah, I completely agree. I've actually mentioned "go fmt" before for the same reason. I'm somebody who did some time in the Ruby community, you know, and like, uh, Rubocop, um, as folks say, ACAB applies to Rubocop too. Um, it's just dumb, you know, to spend your time arguing about how things should be formatted, really.

Um, and people get so passionate about it too. Um, yeah, I, I think that one of the things that struck me too when I saw that talk of Kelsey's and started to look at Kubernetes was, um, that you all had really, I felt like you really nailed like a lot of the good operational practices, when you looked at the set of things that were in the 1.0 release.

I mean, these were things that a lot of people were doing already, you know?

Joe: Yeah, I think, um, Brendan had a saying when we were doing the early sort of stuff with Kubernetes, and I think he said this in the documentary: everybody at that time was waiting for the next project to come out and actually sort of nail this stuff. And so there was a real sense of urgency around getting stuff out there.

But we also felt like, you know, um, he liked to say that like everybody has the same puzzle pieces, but because of the experience that Google had, uh, on putting these things together, you know, we had the picture on the front of the box where other folks were trying to, to put the puzzle together without the picture on the front of the box.

And I think that's sort of like, you know, I think you see that a lot in Kubernetes because we'd, we'd sort of lived this so we were able to actually pick the right, the right things to, to pull together to, to hit a lot of those high points.

Rich: Yeah, I thought it was very clear to me, for sure. Um, you mentioned Mesos a little bit ago, and, um, that to me is an interesting topic, because I think that, like, at the time, you know, when Kubernetes showed up, um, certainly when the 1.0 came out, I think for a lot of people, if you were to have done a bake off between Mesos and Kubernetes then, you would probably have chosen Mesos, right? Because it was the stable thing, it was the mature thing. Um, it had a lot of scheduling capabilities. All of these things.

Joe: I think this comes down to where I started, where the API was the thing with Kubernetes, and I think that's the reason why; that was a philosophical differentiator versus Mesos. Um, when you were using it as sort of an application team working with Mesos, you didn't talk to Mesos, you talked to Marathon, or Aurora, or Chronos. And those things didn't have a lot of ways to actually, say, have something running on Marathon talk to something on Chronos, or vice versa, right?

These things were separate, so you created these separate systems. It was a toolkit for building something Kubernetes-like, but there wasn't that commonality that you see with Kubernetes. And this only intensified over time because, as you know, we made the conscious decision to focus on extensibility versus adding more features to Kubernetes.

That led to things like CRDs and the operator pattern. And that's really given Kubernetes sort of, you know, a second stage of the rocket, right? Um, and so now you look at Kubernetes and you're like, yeah, it does scheduling, but also it's a framework for solving all sorts of other sort of similar control plane problems, in a way that, you know, Mesos and Swarm and even things like Nomad were never set up to do.

Um, and so, you know, we kind of stumbled our way into it, but at the end of the day, the scheduler and the container scheduling, it's only a part of what Kubernetes is. And I think that's really given it, uh, you know, that second stage of excitement and success that, uh, we wouldn't otherwise have had if we hadn't focused on, you know, extensibility and building sort of that larger community and ecosystem.
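A sketch of what that second stage looks like in practice, following the conventions that controller-runtime style projects use (the Database kind and all of its fields are hypothetical): you define a new type, and the same API machinery, storage, and kubectl tooling that serve pods now serve it, with an operator's reconcile loop driving the world toward Spec.

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// DatabaseSpec is the desired state a user declares.
type DatabaseSpec struct {
	Engine   string `json:"engine"`
	Replicas int32  `json:"replicas"`
}

// DatabaseStatus is what the operator reports back.
type DatabaseStatus struct {
	ReadyReplicas int32 `json:"readyReplicas"`
}

// Database is a hypothetical custom resource. Once its CRD is registered,
// `kubectl get databases` works like `kubectl get pods`, and an operator's
// reconcile loop (the same shape as the kubelet's) drives real
// infrastructure toward Spec.
type Database struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              DatabaseSpec   `json:"spec,omitempty"`
	Status            DatabaseStatus `json:"status,omitempty"`
}

func main() {
	db := Database{
		TypeMeta:   metav1.TypeMeta{APIVersion: "example.com/v1", Kind: "Database"},
		ObjectMeta: metav1.ObjectMeta{Name: "orders-db"},
		Spec:       DatabaseSpec{Engine: "postgres", Replicas: 3},
	}
	out, _ := json.MarshalIndent(db, "", "  ")
	fmt.Println(string(out))
}
```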

Rich: Yeah, I mean, I keep, uh, referencing Kelsey, but you know, uh, I've seen him say so, so many smart things about Kubernetes and the space. And, um, I saw him give another talk here in Portland at a little meetup years ago, and he, he was saying even back then that like, this was probably like maybe 2018 or something.

And, and he was saying that to him, Kubernetes wasn't even the interesting thing. It was like the things that people were gonna build on top of Kubernetes.

Joe: Yeah. No, I've had VCs, like, hey, where should we invest in the Kubernetes community? I'm like, no, invest in the things that Kubernetes is gonna enable.

Not, not necessarily Kubernetes itself.

Rich: Yeah. So, so I think that, um, if I understand the timeline right, I think that the 0.1 release shows up at DockerCon in 2014, and then DockerCon 2015 was the 1.0 release.

Joe: Um, no. I think 1.0 was at OSCON in 2015.

Rich: That's right, it was.

Joe: Yeah, yeah, I wasn't able to be there. I was on, you know, a road trip with my family, so I missed out on some of these sort of seminal moments. But there was a party, and I have it around here somewhere, um, the drink tokens for the launch party for Kubernetes were these poker chips with the Kubernetes logo on 'em. And, uh, people were, like, not drinking so they could save one of those as a souvenir. So that was kinda cool.

Rich: I guess, I guess they were pretty forward thinking. They, um, uh, Tim did the logo, right? Tim Hockin?

Joe: Yeah. He has an arts degree. Um, and so, uh, yeah, I think we were just BSing and we did the seven sided ship's wheel logo there. You know, I think our marketing guy at the time said, it's open source, I don't give a crap, though he used more colorful language. And so there wasn't a lot of adult supervision in terms of doing the branding and the logo. And I think we, you know, we really lucked into a lot of that.

Rich: So in this period between that DockerCon and the OSCON, there's some talk in the documentary about, you know, people really kind of crunching during that period of time. Was that your experience? Were you, like, sleeping at your desk and stuff, or?

Joe: Um, I was already, I think I'd left Google at that point, so I was already slowing down a little bit. Um, and I was staying involved, but not to the, to the level of some folks were, but there was this, there was this excitement and this urgency and, I dunno, have you ever done, like I did one of these like ropes courses, team building exercise, I dunno if you've ever done this, where you get a bunch of people and there's like a light stick and you're like, okay, everybody, like balance it on your, on your fingers.

And everybody does that and there's this tendency for the stick just to rise, right? It's because somebody does it a little bit and then everybody else goes to catch up and sort of like, you know, even sort of unintentionally, everybody challenges everybody to, to go. And I think there was a little bit of that happening where, you know, everybody was so excited.

Somebody else would, you know, check something in, and then they wanted to as well. And so it really became, you know, a really hectic time. And so there were definitely some efforts to say, like, hey, you know, we're allowed to take the weekends off, we can slow things down. But there, there was a lot of excitement and a lot of people pushing each other. I don't think it was intentional, but that's just kind of how things panned out in the early days.

Rich: That's interesting. I think it is, it is, uh, interesting to, to think about, and this is one of the things that the documentary really did for me, the fact that this wasn't inevitable, right? That, you know, when people were working on this thing, there was a lot of competition and people were, like you said, looking over their shoulders and not knowing, you know, which of these, like many bets that were out there, was gonna be the one that was gonna win.

Joe: Yeah, for sure.

Rich: Can you talk about, you know, I've, I've referenced Kelsey so many times already, but, but I wondered if you would, uh, maybe talk about specifically his involvement, like in those early days and what he brought to the community?

Joe: Um, so Kelsey, at the time when we launched Kubernetes, was working at CoreOS. So he did a couple of things. You know, like he does, he started, you know, getting involved with the project, and, you know, he's never one to hesitate to roll up his sleeves and play with stuff. And so he started talking, you know, about Kubernetes early on, even though it wasn't an officially supported or strategic thing for CoreOS at the time. Um, and in doing so, you know, we definitely saw the amount of excitement and involvement, and I believe, you know, outside of Red Hat and Google, Kelsey was one of the first folks that we, you know, gave permissions to start doing stuff in the Kubernetes repo.

I don't think we had sort of an official maintainer title, but we're like, hey, you know. And I think, you know, number one, it speaks to Kelsey just, you know, being able to just get involved and do stuff, but it also speaks to, you know, our strategy for open source, cuz there's different flavors of open source, that we really wanted this to be a community thing from the start.

And so it was important to us to be able to let people in who didn't work for Google, who didn't work for close partners like Red Hat at the time. And it's, it's, it's kind of serendipitous that it, that it ended up being Kelsey. Um, he also did a lot to evangelize you know, Kubernetes inside of CoreOS and they, they definitely had a, a, a big, disproportionate early impact on the, on the project.

Rich: Yeah. It's funny. I think that when you look around at the community nowadays and you start looking at people's job histories, you know, there still are so many influential people who were at CoreOS or were at Heptio. You know, those, those people are everywhere.

Joe: Yeah, I mean, it's like, uh, one of the favorite sayings of the community is like, you know, um, different company, same team,

Rich: Yeah. Yeah. You and, um, Craig went off to start Heptio. Can you talk a little bit about the thinking behind that? Like, um, you know, you guys go and do that, Brendan goes to Microsoft, you know, Tim and Brian Grant and folks are still at Google.

Joe: I mean, I was a little burned out from Google, and so I took some time off. Um, also I think that there wasn't a great recognition at Google around what they had, even, even early on with Kubernetes. Um, and, uh, you know, and I definitely felt like the room for me to have impact inside of Google around Kubernetes was not the same as it would've been if I'd gone off and done something different.

Um, so, you know, I took some time, I explored a bunch of ideas and it was clear that, you know, Kubernetes was still growing, was, was, you know, becoming almost inevitable in some quarters. And so it seemed like the perfect time to take Kubernetes places that it wouldn't naturally go if it was just being driven by, by uh, say companies like Google.

And our theory with Heptio, you know, was: Kubernetes is the start, what does it enable after that? Now, you know, the hardest thing with startups, more than anything else, is just timing. And so I think, you know, there's places where we assumed that getting Kubernetes installed and running would become a solved problem with the various clouds.

It's kind of getting there now, but there's still a lot of places where it's difficult. Um, and so we were focused on how do you build experiences and tooling on top of that? So early on we were looking at things like multi cluster and multicloud with, with Heptio. Um, and some of the products that we were building that we hadn't launched yet, made their way into some of the thinking for, for what we did at VMware.

Um, and uh, and so yeah, I think our thinking there was we can bring Kubernetes to the enterprise, but also we can start building things on top of Kubernetes that enable, you know, developers and, and application teams to be more successful.

And so it's not just about Kubernetes for Kubernetes sake.

Rich: Yeah. I remember some of the work that Bryan Liles was doing, you know, where he was building things to make it easier to manage the clusters and stuff like that. And, um, I don't know, I was just, uh, super impressed with you all when you started up. And, um, you did a thing that I talk to people about a lot, this thing called TGIK, um, that went through the time that you were at VMware. It just stopped, apparently, a few months ago.

Joe: Uh, Naadir, um, you know, uh, supposedly can pick it up, but he's been busy. But I think he wants to do some stuff, so we'll see if we're gonna see some more. But yeah, you know, the TGIK stuff is definitely owned by VMware at this point, so it's up to somebody there to pick some of it up.

Rich: That's really interesting. Naadir's actually a friend of mine, he's great. Um,

Joe: Yeah, so encourage him to do it.

Rich: I'll, I will do that. I think he might listen to this, so Naadir, do it. Um, but it's actually something that I talk to people about a lot and I bring it up a lot. Um, the phrase that I use a lot of times is that, to me, it's like one of the best examples of evangelism that I've seen.

And, and, um, you know, especially in those early days when you were at Heptio, right? Because, you know, you're there, you're drinking a beer, you're, uh, kind of on the stream, you're playing around with something. And like the pattern a lot of times was you pick up some new thing that you've never heard about or maybe you've heard about but not actually used. And as an audience, we get to see, you kind of figure out how to use it, right? And so you're reading the docs and you're doing all of those things that somebody would do when they're first, uh, approaching a project.

Joe: Yeah, I think, you know, honestly, you know, as a senior engineer, whether I was, you know, CTO title at Heptio or Principal Engineer at, at VMware, you don't get as much time coding as you'd like. And um, and so it's kind of one of the trade offs that you make is that your impact is usually through people. And so you write documentation or if you do code, you're acutely aware that you can't put yourself on the critical path because other stuff will come up.

Rich: Yeah.

Joe: And so, you know, for me it was like, Hey, I wanna just spend Friday afternoon doing something technical and I might as well just broadcast it to the world. So it was really, you know, I was doing it for me

Rich: Yeah,

Joe: To sort of play with stuff and to sort of get my hands dirty, you know? Uh, and you know, and it did turn into that, that, you know, that evangelism or advocacy stuff.

Um, but I think at the same time I was, you know, with Heptio, you're a small company. The role of developer advocate or evangelism, it's still, even today, it's, it's an under defined role and there's definitely places where it's successful and not successful, and it's pretty fraught. And so I was relatively reticent to start bringing on somebody to play that developer evangelism role until I knew exactly what it is and what we wanted out of it.

And, uh, and I, and I think that's still an ongoing journey. And so that was something that I think TGIK was like, Hey, you know, I can do this job, I can learn about it. And then we can define what success looks like here in terms of bringing on other people to do stuff like this. And so, you know, that was, that was definitely part of the thinking there.

Um, and we didn't build a huge evangelism team at Heptio. I think it was like me and Nova were, were really, you know, it. And uh, yeah. And I still struggle with defining, you know, what developer evangelism is.

Rich: I'm, I'm a developer advocate and, um, it means something different depending on what company you're working at, you know, that's just the way it is right now.

Joe: And I didn't wanna get into a situation where we overhired and then we had to be able to sort of realign people. Cuz I knew that would be painful for everybody. And you don't wanna, and anything you do with developer advocates you're doing in public. And so, you know, definitely was acutely aware of that also.

Rich: Yeah. I, I think that that vibe that you're talking about though, where you were just doing it for yourself, was one of the things that made it so fun. And, um, I feel like, um, when I say I thought it was a great bit of advocacy or evangelism, I don't really even mean for Heptio, I mean for Kubernetes, right?

Joe: That was my thinking. You know, our boat was tied to Kubernetes, and so if we made Kubernetes successful, if we made the community successful, then, you know, there would be dividends for everybody, for sure.

Rich: Yeah. So when the VMware deal got announced, I was one of the people, and I kind of have a feeling that I wasn't the only one, who was really kind of shocked and surprised, right? Because, I don't know, I was under the impression, you know, here you are, you've got two of the three people who founded the project.

There's a lot of excitement about Kubernetes. It's, I, I just saw you all heading to an IPO. That was kind of the exit that I expected. And, and then when it was VMware, which is a company that I thought of at that time at least, as not like super cutting edge. Right. Um, I, I just wonder if you could maybe talk about the thinking behind that a little bit.

Joe: I mean, that was one of the hardest decisions that we've ever made. I mean, you know, first of all, when you're a founder in a company you feel a deep sense of responsibility to the folks that are working with you and that are going on that journey with you. And going through an acquisition, making decisions that will impact people's lives in some dramatic ways, and you can't really consult them on it. And so,

Rich: Of course. Yeah.

Joe: And in some cases you have to tell some white lies, um, just because, you know, you're going through some diligence and you need somebody to, like, talk to somebody to review some code, and you're like, oh, we're doing this for our C round, when no, really, it's for the acquisition.

Um, so that was, it was a really, really difficult decision on a, on a lot of fronts. Um, the reality is, is that, you know, we had been in business for two years, things were going well, but you know, things don't go well always forever, right? We knew that there were gonna be some bumps, there were gonna be some ups and downs. And um, and then from a financial point of view, for both me and Craig and a lot of the employees, it's very easy at the point that we were at where you take on more money, but you take on more dilution, the valuations go up, but the, the real money terms don't necessarily change as fast as it would be suggested based on these, these big valuations.

And we also knew, and you're seeing a lot of this now, that money on paper is not the same as money in the bank. Um, and so even if, you know, we had sort of, you know, gone towards further funding rounds and an IPO, you know, there was no guarantee that that would turn into real money for folks.

And it's part of that responsibility to the employees that we felt around that. Um, and then there's this question of, like, you know, being a founder of a public company is not all roses. Um, and the reality is that, you know, I was at a VC event last night talking to folks, and everybody's like, are you gonna do another startup? And right now my answer is no. And a big part of that is because it's a journey that, you know, our journey with Heptio through VMware was like five and a half, six years.

But like that is as short as it gets, right? You do a startup, if you don't go outta business, right? I guess that's how it can be shorter, but if you don't go outta business, you're in there for eight to 10 years. And, and that is a huge commitment and there's no practical way for you to sort of, you know, decide you wanna do something else through that.

'Cause if you leave, you mess over your investors, you mess over your employees, you really screw things up for people. Um, and so part of the calculus for me at least was, hey, if we go to IPO, you're just extending that almost indefinitely. And you can see founders start to back away from those leadership roles.

I look at Mitchell at, at HashiCorp, or eventually Larry and Sergey were able to extract themselves from Google. But it's a, it's a difficult thing to be able to do that. Whereas one of the nice things about the, the acquisition is that, you know, I, I went to VMware, I spent my time there. I tried to hand off everything.

I gave it everything I had, right, tried to hand off responsibility. But what it meant, though, is that ultimately there was an exit path for me to go off and do something else without leaving everybody in the lurch. So that was definitely, I think, some of the thinking there for me at least.

Rich: Yeah. That's really interesting. I know you mentioned that when you did leave VMware, that was already in the works before the acquisition happened.

Joe: What's that though? The, the Broadcom stuff? Yeah, yeah, yeah. No, that, that definitely, uh, um, was interesting in terms of timing, but I was already sort of figuring my stuff out. Um, and then ultimately, and I just wanna say, like, the mission of Heptio to sort of bring Kubernetes to enterprises was very much aligned with what we had heard from the leadership at VMware at the time, working with Pat, working with, you know, the other leaders there.

And so we definitely felt like we could continue to do what we wanted to do with Heptio at VMware. And so it really felt like a, like a win-win for us.

Rich: Yeah. How do you, how do you feel looking back at it, like, you know, Tanzu and all of that?

Joe: I mean, uh, things get complicated. I think that, you know, there's definitely lessons learned in terms of what it takes to be successful at a big company. Pat leaving, the Pivotal acquisition, and then, looking forward, the Broadcom acquisition. Those things all had, you know, pretty big impacts in terms of how the team was structured, what the goals looked like, what success looked like. Um, so, you know, I think that there's still a lot of work to do, and I think there's still a lot of good folks there to do it.

Um, but the environment that sets people up for success and what success is has been a little bit of a moving target, I think as we've seen these different things happen. And I think that's definitely been a challenge. Whereas if you stay independent, you get to define what success looks like for you. Right?

And so there's not, you know, as much of a, at least this was the term we used at Microsoft, strategy tax, right? You could focus on your own stuff versus having to, you know, like, I don't know.

Would we have supported vSphere in the same way that we did under Tanzu if we weren't part of VMware? I mean, we probably would've supported it in some way, shape, or form, because it's a critical thing. But it wouldn't have been sort of a centerpiece of some of the stuff that we were doing in the way that it was when we were at VMware.

And how would that have changed things? Who knows, right?

Rich: Yeah. Well I could talk to you all day, but, um, uh, we got a bunch of listener questions, so I wanna, I wanna kind of work through some of these, um, several of them. Oh, there was one other thing I wanted to ask you about before we get into this.

Joe: Yeah. I got some time.

Rich: Um, yeah. Can you talk a little bit about your journey, from like being an engineer to ending up in those leadership positions and like maybe what you learned along the way there?

Joe: Um, it's interesting. I always had this sense of not viewing myself as just an engineer. And I think as I became more senior, that sense definitely developed. I think ultimately I coach people that they should view themselves as business people with a strong engineering background. And so in doing so, what you find is, is you have to, you know, to be successful within an organization, you have to make that organization successful.

And that's not just a technology thing. That's a people thing, that's a business thing. And so, you know, uh, I've always viewed, you know, this is one of those things that's easier to do at a startup than it is at a big company, but I've, I've always viewed the title or the role that I'm playing as a suggestion versus a, you know, a lane to stay in.

So I'm not afraid to weigh in on product management stuff, design stuff, business stuff, um, marketing stuff. Right. And that's why I think I did, you know, well with the developer advocacy and I drove a lot of the early branding decisions around Heptio and stuff like that. And so, um, but yeah, I think being able to sort of make your own role and find the places where you have skills that go outside of engineering, I think is a big part of it.

And again, at a startup, I think this is one of the superpowers that startups have, is that you can look at people and you can understand that they're not one dimensional. They're not just engineers. They may have other skills that you can bring to bear. Um, and it's much harder to, to sort of break that mold when you're at a big company.

It's possible. You're, you're probably gonna piss some people off, but it's, it's definitely you're gonna be bucking the trend if you have that sort of willingness to get into all sorts of different things. And so I think that's fundamentally, like not viewing it just as an engineering job, I think is, is, is at least the path that I took.

Rich: Ah, that's really interesting. Um, yeah, I've worked at a lot of small companies and, and I think that that's one of the things I appreciate, right, is, and I feel like, you know, we were talking about developer advocacy earlier. I feel like as a developer advocate that part of my job is to sort of be a proxy for like the users and the community and stuff.

You know, that if I see something going on inside the company that I think isn't gonna serve the community well or is maybe gonna backfire even, you know, it's my job to kind of, you know, make sure people understand that.

Joe: I like to view advocacy as a two way street, advocating to customers and advocating for customers. Um, but it's also like, you know, does developer advocacy show up under marketing? Does it show up under engineering? Does it show up under product? There's no right answer, right? And there's pros and cons to each of those things, and I think that's part of the, the reason why it's a little bit, um, underdefined.

Rich: That's, that's absolutely one of the biggest parts of it is where you sit on the org chart and what you're being measured on, right. And, and people are, are naturally going to do the things that, you know, get them the best metrics and make them look the best. Right.

So listener questions. So, uh, we had a, a number of them that were variations of what would you do differently with Kubernetes now, and I wanna ask you a couple of the more specific ones and then maybe we could talk about it in a broader sense.

Um, but Bryan Liles, who I know you know, um, asked: if you had a chance to redo the Kubernetes resource model, what changes would you make?

Joe: So for those not familiar, the Kubernetes resource model is essentially the, the, the sort of general schema patterns that Kubernetes objects follow. Um, and, you know, we, we break it down to essentially metadata, like, you know, naming, when it was created, labels, uh, and then, um, spec and status, right?

So what do you wanna have happen? What's really happening? I think, um, I would probably create stronger, um, patterns around who edits these different sections, and when. Um, because I think what we found is we didn't have a good sense of the diversity of actors on these objects when we were first doing Kubernetes.

And I think a great example would be the horizontal autoscaler. And so you go through and you push to Kubernetes something like, say, a replica set. And in that you have, like, the number of replicas. And then you have this other component coming in, which is the autoscaler. And it goes in and it starts mucking with the number of replicas.

And now you wanna do an upgrade or you wanna change some parameter of that. You can then go through and step on the stuff that the, that the autoscaler did. And so you end up with different actors kind of fighting with each other.
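
For readers who want to see the shape Joe is describing, here is a minimal sketch in Go, the language Kubernetes itself is written in. The field names mirror the real types in k8s.io/api and k8s.io/apimachinery, but this is illustrative only, not the actual API:

```go
package main

import "fmt"

// ObjectMeta: identifying metadata. Naming, labels, creation time, etc.
type ObjectMeta struct {
	Name   string
	Labels map[string]string
}

// Spec: what you want to have happen.
type ReplicaSetSpec struct {
	Replicas int32 // the contended field: the user and the autoscaler both write it
}

// Status: what is really happening, written back by controllers.
type ReplicaSetStatus struct {
	ReadyReplicas int32
}

type ReplicaSet struct {
	Metadata ObjectMeta
	Spec     ReplicaSetSpec
	Status   ReplicaSetStatus
}

func main() {
	rs := ReplicaSet{
		Metadata: ObjectMeta{Name: "web", Labels: map[string]string{"app": "web"}},
		Spec:     ReplicaSetSpec{Replicas: 3}, // the user asked for 3...
	}
	rs.Spec.Replicas = 10 // ...then the autoscaler bumps it to 10.
	// Re-applying the user's original manifest would now step on the
	// autoscaler's change: the fight between actors Joe is describing.
	fmt.Printf("%+v\n", rs)
}
```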

Rich: Oh, sure.

Joe: The solution to this has been, you know, "kubectl apply," um, which has now moved server side. And if you look at apply, whether it's client side or server side, it's really complex, because it essentially has to do a three way merge. There's the stuff that's in the system and the

Rich: I've read these docs. Yeah.

Joe: And, um, and I think, you know, I think we could have done a better job of creating structures to be able to support that, um, in a way that had first class support in the API server. The type of thing like maybe, you know, the autoscaler has its own sort of overlay that gets composited on top of the spec.

And so that way you can change your spec, but then the autoscaler spec will always take precedence over it. I feel like there could have been better first class support. Now, server side apply starts to get at some of this stuff, but there's a lot of backward compatibility concerns that make it a little bit more obtuse than I think anybody would really like.
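
As a rough sketch of how server side apply gets at this, here is what a controller applying only the field it owns might look like using client-go's typed apply support. The in-cluster config, the Deployment named "web", and the "example-autoscaler" field manager are all assumptions made up for illustration:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	appsv1ac "k8s.io/client-go/applyconfigurations/apps/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes we're running inside the cluster; for a sketch, any config works.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The "autoscaler" applies only the one field it cares about, under its
	// own field manager. It never has to read-modify-write the whole object.
	scale := appsv1ac.Deployment("web", "default").
		WithSpec(appsv1ac.DeploymentSpec().WithReplicas(10))

	_, err = client.AppsV1().Deployments("default").Apply(
		context.Background(), scale,
		metav1.ApplyOptions{FieldManager: "example-autoscaler"},
	)
	if err != nil {
		// If a different manager already owns .spec.replicas, the server
		// reports a conflict rather than silently stepping on it.
		panic(err)
	}
}
```

The point of the field manager is exactly the "who edits which section, and when" pattern Joe wishes had been first class from the start: the server, not the client, tracks ownership per field.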

Rich: Um, and then another variation here was from, uh, Thomas Güttler, I'm assuming that's how it's pronounced. Um, if you could start from scratch, would you use gRPC instead of OpenAPI?

Joe: Um, I don't know. I think from an efficiency point of view, there's a lot to like about gRPC. I think as a project, gRPC is still finding its legs. It's, it's heavily dominated by Googlers, and the quality of the bindings and the code generation for different languages can vary quite widely.

Um, so I think this goes back to sort of, you know, grinding out libraries for every language is, is a hard thing. Um, whereas, you know, Swagger, RESTful APIs, those things are just, you know, you can do that stuff from Bash with curl, right? And so, you know, I cut my teeth early on at Microsoft working on the browser, and so I have an appreciation for that pattern of view source, muck with some stuff, you know, upload it, right?

Like, that level of obviousness and human readability is something that I, I don't want us to lose. And I think that as you move to more of these, you know, more specialized binary protocols, it gets that much harder to be able to interact with it. So I think, you know, it's a hard call, but the, the plain text nature of REST, um, would for me still win out over the potential efficiency gains with gRPC.
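
To make the view source point concrete: assuming a local `kubectl proxy` is running (it listens on 127.0.0.1:8001 by default and handles authentication for you), listing pods is just an ordinary HTTP GET that returns human-readable JSON. A minimal sketch:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// kubectl proxy handles auth and exposes the API server locally.
	resp, err := http.Get("http://127.0.0.1:8001/api/v1/namespaces/default/pods")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// Plain JSON you can read, diff, tweak, and POST back by hand.
	fmt.Println(string(body))
}
```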

Rich: What about, um, kind of on a more meta level, like, um, if you were starting Kubernetes over today, are there, are there big choices you would make that would be different?

Joe: I mean, I mentioned earlier sort of like our focus on extensibility, and things like CRDs were really sort of, you know, the second stage of the rocket that really took Kubernetes to the next level. Um, we stumbled upon that. That wasn't super obvious to us at the beginning. I think if we were gonna go back, and, you know, hindsight is 20/20, or if there was ever gonna be a Kubernetes 2.0.

I think taking, you know, having no built in resources, having everything be a CRD, you know, really taking the sort of distributed nature of the controllers, you know, being separate from the scheduler, being separate from the API server. I think we would've taken that even further, and I think we probably would've made that be the core system to start with, and then built everything as extensions on top of that.
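
For context, this is roughly what the extension mechanism Joe is describing looks like today: a CRD is itself just another API object you declare against the API server. A sketch using the apiextensions types, with a made-up "Widget" resource:

```go
package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Declaring this object teaches the API server a whole new resource type.
	crd := apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	fmt.Printf("%s/%s\n", crd.Spec.Group, crd.Spec.Names.Plural)
}
```

In the hypothetical Kubernetes 2.0 Joe sketches, even Pods and ReplicaSets would be registered this way rather than compiled into the API server.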

Um, I also think that there's work that we could have done around getting things like identity and security, you know, plumbed in, in some really deep ways early on. Um, one of the projects that I started between leaving Google and starting Heptio was SPIFFE.

Um,

Which I saw as sort of a missing piece here, a building block that ultimately I would like to see Kubernetes start to use in a deep way. Um, maybe that'll be my fun project, I don't know. But like, I still think we should have probably taken that stuff more seriously earlier than, than we ended up doing.

Rich: Yeah. RBAC wasn't even in the 1.0, right?

Joe: Yeah. And I think the original version of Kubernetes had no auth, but our, our standard setup scripts, which, you know, were a pile of Bash, would essentially set up, I think, Nginx as a proxy in front of the API server with some password stuff going on there. So at least it wasn't wide open to, to whatever network, but, you know, that was definitely below the minimum bar of where I think we probably should have focused early on.

Rich: I mean I, I feel like this is sort of the history of, of these systems though, right? Because

if

Joe: Gotta get something out there.

Rich: Yeah. And I mean, if you look back at early Unix, you know, and even Linux, that stuff was like wide open, right? And it's the, it's the cleverness of the attackers, you know, and, and the advancements that are made on that and, uh, that end up driving the, the better security as things mature.

Joe: Yeah.

Rich: Yeah. Um, alright, a couple other questions real quick. Um, Ross, um, who I know, you know, um, um, you worked with him at Heptio,

Joe: Yeah. Is it

Rich: Yeah. What are you most proud of regarding Heptio, Tanzu? What would you have done differently if you had the chance?

Joe: Well, I answered on Twitter, I think, like, definitely the people. So much of the fun of, of doing a startup is, you know, you bring on new folks and it feels like you have new cylinders in the engine and you can go faster. And, you know, I try and live my life through the lens of, like, finding positive sum situations. Where, like, you bring somebody new on, you want everybody to be like, oh, I'm so glad you're here, like, you can make us so much better, and, you know, I'm so happy to see you. Which is, in some ways, a unique thing for small companies. Whereas a lot of times at big companies, you know, somebody new comes in and you're like, well, are they gonna take my thing? Right? Are they gonna, you know, I have my charter, are they gonna impinge on my charter?

And it's, it ends up becoming a zero sum type of thing. And so, you know, being able to build that team, bring the people in, and have that, that sort of, you know, hey, we're all pulling in the same direction type of feeling is, is something that I really valued and I think we did, did really well. Um, I know the second part of his question is, what do I think we would've done differently?

Um,

Rich: You maybe already covered that some.

Joe: Yeah. No, but I mean, for Heptio specifically, what I would say is that we did a lot of open source projects. And I think one of the hard lessons that I learned is that you can have a smart person with a good idea, but if you don't surround them with a team, it's easy for them to go off the rails.

So you need to create, you know, the unit is not an engineer with an idea. The unit really is a team, um, because they play off of each other, they keep themselves on the straight and narrow. And you need to have that sort of interchange of ideas and opinions, you know, to keep any sort of project healthy. And so, um, there were places where we'd have an open source project and we'd have one person on it, or two people on it, and it just wasn't enough. It just wasn't enough to keep that thing on the right path.

Rich: Then we had one last question from, um, @cloudnativeboy on Twitter, um, he's a friend of the podcast. Uh, why is RBAC so hard to learn? Is there an easier way to learn it?

Joe: So, um, so, when we were doing some of the IAM stuff for Google Cloud, uh, the system that they wanted to build on was actually built for Google Plus and Docs, right? It was internally called Zanzibar. Um, and there's a company building a startup around some of these ideas also. And the idea there was, like, you have a resource and then you have a bunch of ACLs on it, in terms of, like, who can do what with that resource. And when you're talking about a doc where you wanna say, like, hey, this doc is shared with these 10 people, that makes sense. Same thing with posts and stuff like that. Um, but one of the things that makes this hard in cloud infrastructure is that you often wanna write policy where you're not talking about a single resource.

You don't wanna go up to every resource and actually set who's allowed to do what. And so ultimately it's a problem of set theory. You're like, I want to define this set, working across these things. And then, how do these things stack together? And it becomes a, you know, a very powerful system, because you need that power to deal with things in bulk.

Um, but in doing so, it can be something that can be very difficult to reason about. And I don't think anybody has nailed, you know, IAM or RBAC or whatever for these types of systems in a great way, 'cause you don't have strict hierarchy, there's always overlapping concerns. And so whether you look at, you know, IAM in the cloud, something like AWS IAM, or whether you look at RBAC, all these things are really powerful and take real time to wrap your head around to be able to use. That's not saying that there aren't things that could be improved, but I think it's fundamentally a very, very difficult problem, because the, the, you know, the problem demands it.
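
To ground that a little: a Kubernetes RBAC role is essentially a list of rules, each one a cross-product of sets (verbs, resources, API groups), which a binding then attaches to a set of subjects. A minimal sketch with illustrative names:

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Each rule is a cross-product of sets: these verbs, on these resources,
	// in these API groups. The power (and the confusion) both come from here.
	role := rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-reader", Namespace: "default"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""}, // "" is the core API group
			Resources: []string{"pods"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}

	// A binding attaches that rule set to a set of subjects: more set algebra.
	binding := rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "read-pods", Namespace: "default"},
		Subjects: []rbacv1.Subject{{
			Kind:     rbacv1.UserKind,
			Name:     "rich", // hypothetical user
			APIGroup: rbacv1.GroupName,
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "Role",
			Name:     role.Name,
		},
	}
	fmt.Println(role.Name, "bound via", binding.Name)
}
```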

Rich: I am gonna throw out one resource, I'll link to it in the show notes. But, um, Leigh Capili, um, from VMware actually gave a really good talk about this, about RBAC, at, um, the KubeCon that just happened, and pointed out, yeah, pointed out some ways where, like, things maybe work a little differently than you would intuit them to work.

It had some pro tips. So I'll, I'll link to that in the show notes. Um, I wanna thank everyone for the listener questions. It was, uh, really, really great to have so much participation. And of course I wanna thank you so much for coming on to chat with me, Joe. This was really great to kind of relive the, the mid 2010s or

Joe: The Container Wars. (laughs)

Rich: Yeah.

Joe: I'm actually, I'm kinda happy that we're past some of that. Right. It's a lot easier to, to, to feel like we're all on the same team, you know, in a lot of ways now.

So,

Rich: Yeah, absolutely. Um, I will link to your Twitter. Um,

Do you, do you have that Mastodon account yet?

Joe: Yeah. It's on, um, um, jbeda@hachyderm.io. This is, this is,

Rich: That's, uh, Nova's, yeah,

Joe: server that she's starting. So we're all playing with that, and that's an interesting experience, to try and sort of look at that alternative.

Rich: Yeah. It's, I, I really wonder, like, what impact this all might have on the community, right? Because Twitter really was a place where a lot of folks did gather and exchange ideas, and it feels to me like if it blows up, people are gonna go off in all kinds of different directions.

Joe: Yeah, it definitely feels like a lot of uncertainty right now, but

Rich: Yeah.

Joe: Thank you so much for having me on. It was a great conversation.

Rich: Okay. Thanks Joe. Kube Cuddle is created and hosted by me, Rich Burroughs. If you enjoyed the podcast, please consider telling a friend. It helps a lot. Big thanks to Emily Griffin who designed the logo. You can find her at daybrighten.com. And thanks to Monplaisir for our music. You can find more of his work at loyaltyfreakmusic.com. Thanks a lot for listening.