Oxide and Friends

Bryan, Adam, and the Oxide Friends revisit a 7 year old blog post from Bryan regarding unikernels.

Show Notes

Oxide and Friends: January 23rd, 2023

Revisiting Unikernels
We've been hosting a live show weekly on Mondays at 5p for about an hour, and recording them all; here is the recording from January 23rd, 2023.

In addition to Bryan Cantrill and Adam Leventhal, speakers on January 23rd included Steve Klabnik, Dan Cross, and others.

Some of the topics we hit on, in the order that we hit them:
If we got something wrong or missed something, please file a PR! Our next show will likely be on Monday at 5p Pacific Time on our Discord server; stay tuned to our Mastodon feeds for details, or subscribe to this calendar. We'd love to have you join us, as we always love to hear from new speakers!

Give feedback


Creators & Guests

Host
Adam Leventhal
Host
Bryan Cantrill

What is Oxide and Friends?

Oxide hosts a weekly Discord show where we discuss a wide range of topics: computer history, startups, Oxide hardware bringup, and other topics du jour. These are the recordings in podcast form.
Join us live (usually Mondays at 5pm PT) https://discord.gg/gcQxNHAKCB
Subscribe to our calendar: https://sesh.fyi/api/calendar/v2/iMdFbuFRupMwuTiwvXswNU.ics

Speaker 1:

Okay. Now I'm a 100% present. I feel relieved that the audio issues are resolved, and we can now talk unikernels. So, Adam, I wrote this piece 7 years ago, and you read it at the time.

Speaker 1:

You read it when

Speaker 2:

it came out. I did I did read it at the time. I'm, I'm a big fan of yours. Many time caller, big time fan. Yeah.

Speaker 2:

No, I read it at the time. And as I was saying, I wasn't kind of living in the same world, so I don't know that I sort of knew all of the vitriol and excitement around unikernels.

Speaker 1:

Inside baseball of unikernels. That's right.

Speaker 2:

That's right.

Speaker 1:

So this piece is a very specific reaction, actually. The previous day, Docker had announced their acquisition of Unikernel Systems. And unikernels went from something that was, I think, an interesting and important experiment to all of a sudden being anointed by Docker as the future, at a time that I think is pretty close to peak Docker. Peak Docker Inc. Like, when is peak Docker Inc?

Speaker 1:

When does Docker have its maximal influence? I think it's within this time period. This is January of 2016, and I think that this is close to it. Because, I mean, Kubernetes begins to really take a lot of the momentum. Yeah. Starting in the next 2 years.

Speaker 2:

No. I I think that's right. I think that that is sort of the 2015, 2016 peak of the hype cycle. I remember coming to you as I tried to understand this environment saying, why does x plus y make any sense? Like, I see all this marketing around it.

Speaker 2:

You said, don't worry. It doesn't make any sense. We're we're at peak containers. Nobody knows what they're talking about.

Speaker 1:

Nobody knows what they're talking about. Everyone knows that the room is very enthusiastic and no one knows why. And so everyone's just like, I'm just gonna make it up myself then. As long as everyone's just making it up, I'm just gonna make up some stuff too. Why not?

Speaker 1:

Right. Leading to the chaos. Yeah. And I and I feel god. Maybe maybe this was peak Docker Inc.

Speaker 1:

Maybe Docker Inc peaked at the moment they bought Unikernel Systems, and this is, like, the monkey's paw. Maybe this is what brought them down. So had you heard of unikernels before? I mean, so you are

Speaker 2:

So rolling back, I'd sort of heard about it, but, you know, at the time, I mean, I'm embarrassed to say, like, I was dealing with versions of Oracle that were EOLed before a bunch of the folks on the engineering team I was running were born. So I was dealing with a very backward-looking kind of technology space. So none of our customers were pestering us about unikernels at the time.

Speaker 1:

Oh, unikernels. I think it's fair to say that no customers were pestering anybody, and indeed, this is, like, part of the problem. So you're learning about unikernels. You kind of heard of it, but, like, seeing it now you're like, wait a minute. This is actually a thing.

Speaker 1:

I would like to point out that when they announced this acquisition, I actually asked the question on Twitter: do I need to actually write a blog post about this, or can I let them be their own punishment? Which, you know, admittedly, in asking it, I'm putting my thumb on the scale. I'm asking, you know, the tribe.

Speaker 1:

But I think that there was a big exuberance around it, especially with the acquisition. And I do feel you had a lot of people who were in ops or software engineering who were like, this does not feel right to me, but I haven't really thought a lot about it, and I would like to have some talking points. I felt like people were asking for air support for maybe internal conversations they were having. I'm not sure; maybe I'm reading too much into it.

Speaker 2:

Yeah, that seems as credible as anything, just because, on its face, the arguments at the time, like a lot of these things in the Docker space, to me didn't really add up. I didn't really understand the problem for which these were ostensible solutions.

Speaker 1:

Part of the challenge is that there's a kind of definitional challenge. And then there is a further challenge that I think this really attracts a false dichotomy a bit. And I'm realizing, as I was kinda thinking about this this morning while we were putting this together, that there are certain words, certain prefixes, that kind of lend themselves to overgeneralization, and therefore are likely to create a debate that might have more heat than light. So, e.g., I'm gonna put mono, micro, and now uni all in this category.

Speaker 1:

So I feel like

Speaker 2:

Monorail. One plus rail makes sense.

Speaker 1:

Monorepo, microservice. And I actually think that you could create an emotional reaction just by mixing these up Mad Lib style and just be like, hey, actually, we're going all micro-repos around here. Like, oh my god. Wait a minute. What?

Speaker 1:

Or we're going to the unirepo. Like, it's a monorepo, but, and just tell me if I'm stoned here, part of the reason that there is a lot of fuss about this is because it is revisiting an abstraction, and it's saying, hey, the existing abstraction is the wrong abstraction, and the abstraction should be radically, radically different. And the way I kind of embody that is with what is ultimately, like, a tagline, and it is a good tagline.

Speaker 1:

The unikernel tagline is a good tagline. But what we are ultimately doing is radically questioning the abstraction. And I actually think there's some questioning in there that's good, and some questioning in there that is not good, and I think a lot of this does not have a great foundation. I feel like there's been some really good stuff done on unikernels, but then a lot of unikernel advocates (not a huge number of people, but there are unikernel advocates out there) end up undermining themselves, because they don't tend to think about this with much nuance or rigor, or kind of take these different things apart.

Speaker 2:

Yeah, that makes sense. And I think your idea about mono and micro and uni as being these shibboleths of reconsidering abstractions, and appealing especially to this notion that this thing, the kernel, say, has gotten out of hand. It's too big in particular. It's doing too much.

Speaker 2:

It's too complicated. And Right. And let's get to something simpler, more comprehensible, more manageable, more secure. At least

Speaker 1:

let's throw the whole thing out. Yeah. Which I'm kinda, I am sympathetic to. I mean, obviously, as a company that is kind of throwing the whole thing out at some level, I'm sympathetic, but if you don't understand why the thing exists in the first place, throwing it all out is perilous. You really need to understand why these abstractions exist before you throw them all into the street.

Speaker 1:

I would also add that the suffix "less" is in this category, as in serverless. I think you could be repoless. If I'm like, we're all repoless now. It's all I've heard. It's all about repoless.

Speaker 2:

I just wanna be clear. That is a made-up term. Right? Because otherwise, take my money.

Speaker 1:

Exactly. Exactly. And, you know, another one of these, and this actually gets to kind of the ancestry of unikernels, is the exokernel. Right? So the exokernel paper, did you ever read this paper, the exokernel paper?

Speaker 3:

No. I

Speaker 1:

haven't read that. Oh, god. It's a polemic. Actually, what I later found out is it was, like, designed to be a polemic. The author was, like, I don't know.

Speaker 1:

Do those tend to be productive? Well, it was very provocative. This wasn't an actual proposal for a system, but, basically, the idea of the exokernel was that applications want to handle some of these lowest-level details. Applications wanna have their own TLB miss handling, that's kind of the canonical example.

Speaker 1:

And you may have, you know, a multimedia server that wants to have its own TLB load behavior. And I remember as an undergraduate reading this and thinking, that's insane. That is turning every application into an operating system with all of its concomitant problems. And I'm trying to figure out, why does this get under my fingernails? I think this gets under my fingernails in part because the abstraction that it throws out is the one that you and I have spent our careers in, more or less.

Speaker 1:

And it's saying that, like, there shouldn't be an abstraction here at all. That operating systems as an abstraction should not exist is, to me, what this is saying at its most polemic.

Speaker 2:

Yeah, I think that's right. And I think the reason why it is so viscerally tough to comprehend is because some of these abstractions I think of as so beautiful and so elegant and so miraculous. I mean, you talk about TLB miss handlers, you know, when discovering the magic of virtual memory, there's just something so right about it that it continues to feel so right. So to say this was the wrong abstraction and we should blow it up, especially when you're not really articulating a good understanding of what it is or why it needs

Speaker 1:

to be blown up, it's a

Speaker 2:

little tough to swallow.

Speaker 4:

It's It's

Speaker 1:

a little tough to swallow. And I think especially because of the pathologies that you're gonna generate by doing that. The bugs that you're gonna have at that level of the system, like, I want to change the fundamental abstraction of memory. It's like, well, okay. I definitely admire the courage. Great, bold.

Speaker 1:

When you get it wrong, the kinds of defects that you have are really mind-warping, because you have fundamentally changed the abstractions upon which we depend. And now, you know, I do a load, and I get something that I never stored to that memory location, because I'm getting someone else's memory. And how do you debug that? That kind of nonfatal corruption pathology is really, really, really difficult to debug. So you've gotta develop these systems really carefully and with a lot of rigor.

Speaker 1:

And if it's something that you wanna leverage, it's not something that you want every application to do on its own without really, really, really good reason.

Speaker 2:

Yeah. So if I can pause for a moment, what made you think of this blog post again, you know? Yeah.

Speaker 1:

Right. Why why why are we here?

Speaker 5:

Hold on. Before we get into this, I need to demonstrate that I can criticize, you know, even if I get fired from Oxide tomorrow. I have been a longtime supporter of exokernels and such. So I have a little bit of the counterpoint to all of these things. I felt like I should raise my hand, because we're a little on the, like, these things totally suck, why would anyone... Yeah.

Speaker 1:

Totally. No. No. No. I've got here.

Speaker 5:

And as someone who literally worked on a serverless platform, the, well, serverless, there are still servers thing also kind of gets my goat slightly. So I gotta, like, push back a little bit here. So for context, me and my friends in college, largely my friends, worked on an exokernel in D, and that was some of my intro to systems, the sort of work in an adult way, or at an adult age, I should say rather. And I'm a big fan of that. I think part of it is there's a couple of different things.

Speaker 5:

I think the rethinking-the-systems-abstraction part, like, makes total sense. I agree that is, like, some of the motivation. I think another bit of the motivation for it is, like, okay, imagine trying to make a Linux today as a hobbyist. Right?

Speaker 5:

Your surface area is, like, freaking huge. So when you're like, hey, there's this style of operating system that basically does nothing, I think it's really, really attractive for greenfield development, because it is conceptually feasible for an individual to write a container runtime, write an exokernel or microkernel. And, like, I'm not saying that a monolithic kernel is impossible, obviously, but when you start putting more and more things into the kernel, it makes it a much, much larger task and one that's kinda hard to move forward on. So I think you see a lot of the experimenting happening in this space because it's just a lot easier to do one of these things than not.

Speaker 5:

But I I

Speaker 1:

think I totally agree. Yeah. And I mean, there's this idea that there are so many layers of abstraction, and I want to be able to fit this entire thing into my head, and I can't fit this entire thing into my head with a general-purpose operating system. And there's a great appeal to that. And I think that's part of what, I think, the disservice that I did to unikernels, such as it is, is that I probably should have spoken a little bit more to that appeal, that cognitive appeal to fit the entire thing in your head, to revisit some of the things that have been done previously, abstractions that have been created.

Speaker 1:

Do we still need those today? Can we revisit these? And can we shed some of the ones that we no longer need? Now, Steve, I gotta ask you.

Speaker 5:

There's, like, 2 sides of this. Real briefly before we get to that, because there's, like, a little... So I saw somebody recently be like, why does WASI use epoll instead of, like, a kqueue or, like, what's...

Speaker 1:

io_uring? The...

Speaker 5:

Yeah, io_uring, the io_uring stuff. Like, why wouldn't we just start with that? We have this opportunity. And, like, you know, I don't know specifically about that particular decision, but this kinda happens a lot of the time. Right? Where, you know, WASI is sort of, like, how do we get POSIX into WebAssembly?

Speaker 5:

Let's, like, you know, start from that place and, like, move forward or whatever. And so, yeah, I don't know. I think the downside, as I was sort of mentioning in chat, is, like, I sort of describe myself as a former champion of these things, because I conceptually love this idea. But I think that in practice it never really fully played out, and it's very difficult to, like, even get started with or try.

Speaker 5:

And so that's where I saw it kinda falter and die, even among somebody like me who is very philosophically predisposed to enjoy this kind of thing. It was, like, near impossible to actually try out and use with anything more than a hello world. Like, I got hyper running on a rump unikernel, for funsies, but I never did more than the hello world, because even just getting that to go was pretty tough. And so I think it was never really gonna reach the level of mindshare that it had to, because, I mean, you know, say what you will about stuff like Docker and it being hard, but at least it's easy enough to use that people use it and use it a lot, regardless of, you know, objections to any sort of aspect of the system design. Like, at least it was developer and user focused to whatever degree.

Speaker 5:

You know, you may argue about what succeeded or failed, but, like, they had that experience. Whereas unikernels conceptually sort of never really got there exactly. So, yeah.

Speaker 1:

I totally agree. They never hooked the developer. So I gotta ask you, Steve, though. When you say you worked on an exokernel, do you mean an exokernel or a unikernel? I hate to do this.

Speaker 5:

It I mean, we didn't get far enough for it to really matter, I think. Like, I think one of the last things that happened was, like, implementing fork or whatever. Like, we're, like, very, very, very not super far along, but, like, the intention was exokernel, specifically. So

Speaker 1:

Okay. Because, I mean, at this point, the terms do become sadly kind of somewhat important. I guess the question that I would have is, in the kernel that you were operating in, were they all in the same memory protection domain, or did you have disjoint memory protection domains?

Speaker 3:

I don't.

Speaker 5:

So, like, I think we had not gotten far enough along in implementation for that to super actually matter, but conceptually, the idea was the kernel only handles permissions, like, who is allowed to talk to what memory, and that is it. And so I would assume that if we'd gotten farther along, I would have hoped that we would do something similar to what we do in Hubris, where it's like, you know, yeah, we do believe that this is fine because of static checks, but let's also just do the actual dynamic checking, because that matters. You know, we do use the MPU even though, in theory, Rust is good at that kind of thing. You're just, like, let's just defense-in-depth it. So I would hope that we would have also added memory protection, but you're right that there is a certain level of extreme here where it's almost like Midori style, like, you know, no.

Speaker 5:

We're not even gonna use any sort of, like, hardware protection about different memory domains.

Speaker 1:

And Okay. So the thing

Speaker 5:

I think Spectre kinda killed that pretty much straight up for, like, everybody. Right? So

Speaker 1:

So this is a good door to open, because, I mean, you talk about Hubris. I would definitely wanna get to our experience in Hubris. And you said, like, well, you know, we wanna actually be extra sure even though we were using a safe language. And that is true, except even in our safe language, our delightful safe language, and we definitely are trying to avoid unsafe constructs...

Speaker 1:

One of the things that I've definitely appreciated early on working on Hubris is that there is in fact unsafe memory access all the time: the stack. For us, I would say, you know, 95% of Hubris task faults are from stack overflows. And it is from going deep in your stack. And you remember, Steve, when we started, we had switched where the stack and data were. And when you... Yeah.

Speaker 1:

Would overflow your stack, you would run into your own data and then run off again. And we're like, why are we having memory

Speaker 6:

corruption problems in

Speaker 1:

this, like, safe

Speaker 3:

we're using

Speaker 1:

a safe language. And I think, I mean, it'd be interesting to know if people have kind of considered this rigorously or theoretically, but I think if you have a stack in a language, it's very hard to make that stack access entirely safe, because that involves reasoning about what your program is gonna do. I think you have to solve the halting problem to know how much stack space you're gonna consume. Adam's gonna...

Speaker 5:

Rust's standard library, like, includes stack probes specifically to see if you overflow the stack. Right? Like, that happens even on bigger devices or whatever as a defense mechanism. Sorry, Dan. You were about to say something.
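For readers following along, here is a minimal hosted-Rust illustration of the guard Steve is describing: a thread with a deliberately tiny stack recurses until it hits the guard, and the runtime turns that into a loud abort instead of silently corrupting neighboring memory. The numbers and the recursion are invented for the demo; this is not Hubris or embedded code.

```rust
use std::thread;

// Deliberately unbounded recursion; each frame burns ~512 bytes of stack.
#[allow(unconditional_recursion)]
fn descend(depth: u64) -> u64 {
    let frame = [depth; 64];
    // Using `frame` after the recursive call keeps every frame live, so the
    // compiler can't quietly turn this into a loop.
    descend(depth + 1) + std::hint::black_box(frame[0])
}

fn main() {
    // A deliberately small stack so the overflow happens almost immediately.
    let child = thread::Builder::new()
        .stack_size(64 * 1024)
        .spawn(|| descend(0))
        .expect("spawn failed");

    // On a hosted target, the guard page / stack probes turn the overflow
    // into an explicit "has overflowed its stack" abort of the process,
    // rather than letting the thread scribble over unrelated memory.
    let _ = child.join();
}
```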

Speaker 7:

I mean, here's the thing. I don't think you have to solve the halting problem, because your operating system kernel is necessarily a restricted execution domain. Right? You're not running arbitrary code in the kernel, generally speaking. And even if you are with something like, what is it... like, there are restrictions on that.

Speaker 7:

Right? Like, loops have to terminate and so forth. So I think that you're solving any number of halting problems, but you are not solving the halting problem in general.

Speaker 1:

And I think there's some

Speaker 7:

prior art here. Like, if you look at the Biscuit kernel that came out of MIT, which they wrote in Go, they were very concerned about, like, hey, you know, what happens if we execute a system call and we don't have enough memory to satisfy it? Like, what do we do? Are we gonna take a GC in that hot path? And they did a bunch of static analysis of every system call path and said, actually, the maximum depth of the stack could be x, and we're gonna make sure that there's sufficient memory available to satisfy that on entry into the kernel.

Speaker 7:

I think there's nothing that precludes you from doing that.
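The Biscuit-style analysis Dan describes can be sketched in a few lines: if the call graph along a path is acyclic and you know each function's frame size, the worst-case stack use is just the deepest weighted path. The function names and frame sizes below are made up for illustration; this is not the actual MIT tooling.

```rust
use std::collections::HashMap;

/// Worst-case stack usage starting at `func`, assuming the call graph is
/// acyclic (no recursion) and each function's frame size is known.
fn worst_case(
    func: &str,
    frames: &HashMap<&str, usize>,
    calls: &HashMap<&str, Vec<&str>>,
    memo: &mut HashMap<String, usize>,
) -> usize {
    if let Some(&cached) = memo.get(func) {
        return cached;
    }
    // Deepest stack among everything this function can call.
    let deepest_callee = calls
        .get(func)
        .into_iter()
        .flatten()
        .map(|&callee| worst_case(callee, frames, calls, memo))
        .max()
        .unwrap_or(0);
    let total = frames.get(func).copied().unwrap_or(0) + deepest_callee;
    memo.insert(func.to_string(), total);
    total
}

fn main() {
    // Hypothetical syscall path: sys_read -> vfs_read -> {fs_read, copy_out}.
    let frames: HashMap<&str, usize> = HashMap::from([
        ("sys_read", 256),
        ("vfs_read", 512),
        ("fs_read", 1024),
        ("copy_out", 128),
    ]);
    let calls: HashMap<&str, Vec<&str>> = HashMap::from([
        ("sys_read", vec!["vfs_read"]),
        ("vfs_read", vec!["fs_read", "copy_out"]),
    ]);
    let mut memo = HashMap::new();
    // 256 + 512 + 1024 = 1792 bytes: reserve at least that much on entry.
    println!("{} bytes worst case", worst_case("sys_read", &frames, &calls, &mut memo));
}
```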

Speaker 1:

Well, keep in mind, Dan, what we're talking about. That's true, but that's assuming that the kernel is in an orthogonal protection domain, which is not the case for a unikernel. In a unikernel, we are all in the same protection domain.

Speaker 7:

Define unikernel. I almost disagree with that too. Because if your constellation of unikernels is running on a system, running under a trusted hypervisor, then they are kind of in the same protection domain. Right?

Speaker 1:

Well, in a unikernel, there is only one protection domain.

Speaker 7:

Okay. But but what what does it mean to be a unikernel, and what is that executing on?

Speaker 1:

Well, I mean, the way I think that they would define it, or the way that it is defined, a unikernel is defined really in terms of the abstraction, namely that there is not a system call boundary in the unikernel. So all execution is privileged. Now, that privileged execution may be living in a virtual guest, but you do not have different levels of privilege within the system.

Speaker 1:

Oh, okay. But, like,

Speaker 7:

and to some extent, this is a distinction without a difference, though. Right? It's like, if I am running that under the aegis of some hypervisor that enforces boundaries on what the unikernel can actually access, then, like, is that a big deal? Does that matter?

Speaker 1:

Sure. Because it means that when I've got my rump kernel with MySQL, and MySQL overflows the stack, the whole system dies.

Speaker 7:

Okay.

Speaker 1:

Or, worse, corrupts itself. My MySQL overflows the stack, and because there's no memory protection, it actually overflows into my TCP state.

Speaker 3:

Yeah.

Speaker 1:

Yeah. Into my TCP They're not.

Speaker 3:

My TCP state.

Speaker 1:

Yeah. So now, instead of having an application that dies on a stack overflow, I've got a system that is now behaving corruptly, because it's had data corruption. So, yeah, there is a difference.
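A toy model of the failure mode Bryan is describing, with everything simulated inside one flat buffer standing in for a single protection domain. The layout, the sizes, and the "TCP state" are invented; the point is only that with no boundary to hit, the overflow never faults, it just silently rewrites a neighbor.

```rust
// One flat "address space": no MMU, no guard regions, everything is a neighbor.
const MEM_SIZE: usize = 4096;
const STACK_TOP: usize = 1024; // the stack grows downward from here...
const TCP_STATE: usize = 960;  // ...straight toward this neighbor.

fn main() {
    let mut memory = vec![0u8; MEM_SIZE];

    // Pretend this range holds the network stack's connection state.
    memory[TCP_STATE..TCP_STATE + 16].copy_from_slice(b"tcp: established");

    // Pretend the application keeps pushing frames. Nothing stops the stack
    // pointer from descending past TCP_STATE, because there is no boundary.
    let mut sp = STACK_TOP;
    for frame in 0..10u8 {
        let frame_size = 16;
        sp -= frame_size;
        memory[sp..sp + frame_size].fill(frame);
    }

    // No fault, no panic: the overflow silently rewrote someone else's state.
    println!("tcp state is now: {:?}", &memory[TCP_STATE..TCP_STATE + 16]);
}
```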

Speaker 7:

Okay. Alright. Fair.

Speaker 1:

And, I mean, the thought definitely occurred to me, especially as we were playing around with Tock. Adam, I don't know if the thought occurred to you, like, God, I know the gods love to amuse themselves with their irony about my life, and if the gods had sent me to go work on a unikernel as a punishment for my blog entry, like, I would admire them. Because when we were considering Tock, like, Tock does not... we're all basically in the same...

Speaker 3:

...with Tock.

Speaker 2:

So you think of Tock as a unikernel?

Speaker 1:

I should stop short of that. But I think, like, what Tock has done with its modules, I mean, it's a modular kernel. I actually don't wanna get too caught up on Tock itself. But I think, emphatically, Hubris is not a unikernel. But Hubris... You

Speaker 7:

just yelled at me for saying that the stack space thing in Hubris is addressable. And you said, no, wait, well, what if I'm running MySQL on my unikernel? Like, what?

Speaker 1:

No. No. No. No. So it's sorry.

Speaker 1:

Maybe we're talking past one another. What I'm saying is that even in a safe system, you have effectively unsafe memory accesses, and that's why it was very important in Hubris to make sure that when the stack overflows, it hits a protection boundary, and the task dies.

Speaker 2:

I think so. You can the task go ahead, Dan.

Speaker 7:

I I think what I'm trying to say is that you can construct a safe system where you don't have unsafe stack accesses.

Speaker 1:

Maybe, but that's not the system we're in.

Speaker 2:

And, Dan, you think that's because you can do static analysis to understand the maximum bounds of stack consumption?

Speaker 7:

Yes.

Speaker 2:

You know, I guess I don't have any experience with those kinds of systems, but my intuition about some complex systems that I've worked with is that the maximum stack depth is far greater than the practical stack depth, and so you may end up dramatically overprovisioning the system. So I'm not disagreeing with what you're saying, but I think in practical terms, you know, you may end up wasting more resources, and instead you kinda put a finger in the air and decide that your stack is only gonna grow so much, even if you can't prove it.

Speaker 7:

Yeah. I mean, I think something that we're not really talking about is that a lot of times, as kernel developers, we're used to living in this world of very small stacks. You know, in some cases, 4K small. And if you go back to, like, BSD Unix, they had to deal with this. And 4.4BSD introduced red zones underneath the u area, or in between the u area and the kernel stack, so that you would take a page fault if the stack descended and was about to overwrite the u area.

Speaker 7:

Like, I I guess what I'm saying is, like, I think if you're writing kernel code, you become accustomed to writing code very carefully so that you're not putting too much stuff on the stack.

Speaker 1:

Sure. I mean, our own experiences with Hubris are that even when you're writing code carefully, and you are, because when you're in a resource-confined system, you actually want the stack space that you use to be trimmed as closely as possible to that which you're actually gonna use. And these things are in tension. Because it is hard to reason about the ultimate depth to which a thread may descend with respect to stack. And to me, it feels like it's close to the halting problem; maybe it's not.

Speaker 1:

But it's really hard to actually reason about. And when you don't have protection boundaries, and when everything can kinda grow into everything else, that kind of dynamic behavior makes it really, really, really hard to reason about the ultimate system. Does that make sense, Dan?

Speaker 7:

Yeah. Oh, yeah. Sure. I mean, it makes perfect sense. I think there's some definitional issues here, because when we talk about Hubris, it's not a uni... like, you're not running a general-purpose workload on that system.

Speaker 7:

Like, every bit of code that's going to run under Hubris is compiled into the kernel image itself.

Speaker 1:

Well and so this is

Speaker 5:

That's how the unikernels would be used as well: you compile your program into an OS image. I'm not saying that Hubris is a unikernel, but I'm saying that that part is not why it's not.

Speaker 7:

But when you

Speaker 3:

Wait a minute.

Speaker 1:

But this is a really important point, because I think that Hubris splits this kind of false dichotomy. And I think that Hubris shows me that one of the things that people like about unikernels is the delivery vehicle. And it's like, hey, you can have the delivery vehicle without actually giving up your protection boundaries. There are different ways to turn the dial, and I think part of, you know, my visceral reaction to unikernels is they're trying to turn every dial in its most extreme direction.

Speaker 1:

And it's like, actually, you don't need to do that. You can take, like, a Hubris approach, where, very importantly, there is no fork and exec: we cannot execute arbitrary processes in a Hubris kernel. We can only execute those processes which were created at link time. And that's, I mean, hugely, hugely powerful. And so to me, it's important to kinda split some of these things out, and find some of these different ways of combining a system that don't give up what we need to actually deliver a robust system.
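As a rough sketch of the shape Bryan is describing, not Hubris's actual data structures: the full set of tasks is a static table fixed when the image is built, and there is deliberately no code path that could add one at runtime.

```rust
// A sketch of "the process table is fixed at build time": the descriptors
// live in a static table baked into the image, and there is no fork/exec
// style API that could append to it at runtime.
struct TaskDesc {
    name: &'static str,
    entry: fn(),
    stack_words: usize, // each task gets its own bounded, protected stack
}

fn supervisor_task() { /* ... */ }
fn net_task() { /* ... */ }
fn sensor_task() { /* ... */ }

// The whole set of tasks, known at link time.
static TASKS: &[TaskDesc] = &[
    TaskDesc { name: "supervisor", entry: supervisor_task, stack_words: 256 },
    TaskDesc { name: "net",        entry: net_task,        stack_words: 512 },
    TaskDesc { name: "sensor",     entry: sensor_task,     stack_words: 128 },
];

fn main() {
    // A real kernel would set up per-task memory protection and then schedule
    // among these entries; this just shows that the set is closed.
    for task in TASKS {
        println!("task {} gets {} words of stack", task.name, task.stack_words);
        (task.entry)();
    }
}
```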

Speaker 4:

Can I interject at this point? Please.

Speaker 1:

Sure.

Speaker 4:

The one hedge I have for this unikernel idea is, systems understood as a small model of processes, not actual processes, become easier to write, as different kinds of code become easier to write with better tooling, all that kind of stuff. At some point, the elders made some wise decisions about the ABI, or about the kernel, or about how many layers of indirection we need in case we have this kind of miss. But at some point, we might want to be able to tweak those decisions. And the question is, in what kind of environment can one run such experiments, if one only has time to write one operating system, not the 15 variations one wants to try?

Speaker 4:

And, basically, the thing you did might be sort of hiding in Hubris: the complete kernel got linked away into something which makes user processes, wrapped in safe memory regions. And unikernels are an experiment vehicle which we sorta have to create, because we have no idea where to start with regard to how we make our operating systems better. But

Speaker 1:

I think that's that's the pitch

Speaker 4:

I can give for it. That's the best pitch I have for it.

Speaker 1:

Yeah. Well, I think that is the reason that radical ideas are always, like, intriguing and important at some level: they force us so far out of where we are that it does force us to kind of question everything, which I think is good. I think that is what makes it interesting. And I actually think it's really valuable for that kind of experimentation.

Speaker 1:

I think the thing that does get frustrating is, and, Steve, maybe it'd be worth getting back to your kernel to understand how you dealt with this, and certainly we could talk about what we did with Hubris, the thrust, I think, of my argument against unikernels is that they were dismissing debuggability entirely. And I do think that when you are in a system that can't execute de novo processes, debugging becomes really tricky, and you really need to spend some hard time thinking about how you're gonna debug the system, because we actually do have unikernels that are in the wild. It's called firmware. Right?

Speaker 1:

I mean, basically, every firmware payload out there is effectively operating as a unikernel right now. Well, not every firmware payload, but if you go to your disk drive, if you go to many different components in your machine, there is a little operating system running on there, often without protection boundaries, being delivered as a unit. And anyone that develops embedded software would tell you that debugging that thing is tricky. And, Adam, I know this is very near and dear to your heart as well, so I'd be interested to know how much those debuggability arguments resonate.

Speaker 2:

No, I'm with you. As I was thinking about some of your arguments against unikernels, some of it comes down to values and the philosophy there. Not so much denying experimentation, but rather saying, you know, what are we throwing out and what are we valuing? And so I particularly like your discussion of the zeitgeist around it.

Speaker 2:

If something is broken, we just restart it. We don't care about what happened. We just wanna restart it. We talk about peak Docker; I think that really was, you know, deep in the zeitgeist at that time.

Speaker 2:

And I think that's evolved a little bit. I think we've retreated as an industry a bit, maybe this is too optimistic, from the just-restart-it mentality to one of understanding, or knowing that we need to understand these failures in order to build more durable and robust systems. That's something that seems to be absent from the unikernel philosophy.

Speaker 5:

I got half a comment, half a hot take on this topic. I would say the comment is that the ideology at the time, as I remember it, is, like, you think of it as a cross-compilation target. So when you're trying to debug it, you're debugging it on the host, and you're running it on your actual computer. And if this is, like, a deployment strategy, obviously those things are never one-to-one, but once you're willing to do that, it's sort of the same idea as using, freaking... oh, God. Why am I totally drawing a blank on literally the most famous database ever that runs in a file?

Speaker 2:

I swear

Speaker 5:

to god.

Speaker 1:

SQLite?

Speaker 5:

Yeah. SQLite. Wow. Like, it's like SQLite in development, but Postgres in production. Right?

Speaker 5:

Like, obviously, you're gonna have some problems whenever those things have different behavior, but a lot of people test in one way and then deploy in a different way. And so I think that the argument for debugging was not that, like, we don't care about it; it's that you're asking it to be done in a different place. I would say the ideology was closer to, like, debug on the host and not worry about it in production. But I think a slightly deeper, more accurate take would be, like, unikernels are delivering some sort of software that's written on some other platform.

Speaker 5:

And most of those other platforms, I don't think, have a debuggability value, to sort of put it in the terms Adam used, because I really like that. You all are, like, further along on wanting runtime debuggability than any other people I've ever met in my life. And I say that with love, and it's a good thing, not a bad thing. Just, like, if you're

Speaker 1:

shit. I know. Is it like

Speaker 3:

I see that

Speaker 5:

as, like, okay, so you're deploying a Rails app as a unikernel. Oh, it's like, I can't debug a Rails app. Well, that's more a fault of Rails than of the unikernel, I would argue, if you're gonna, like, try to assign fault as to why.

Speaker 5:

But I will say, to also piggyback on Adam's maybe-this-is-the-optimistic-path point: some of this attitude, at least in the web dev space, comes out of the idea that if I'm SSHed into a machine, that means I'm treating it like a pet, not like cattle, and therefore that's not appropriate, and we need to get away from that as much as possible. But then people realized that it's really hard to figure out what the f is going on. So that's why you see the rise of, like, tracing and all those sorts of necessary tools; they're kind of the natural swing back towards, like, yeah, I'm not trying to SSH into this machine, but also I do need to know what happens, to figure out what's causing this problem.

Speaker 5:

And so I think we're just kinda seeing the natural sort of tick and tock on these sorts of things. So I would say, like, that's my take anyways. I don't disagree with you, but I think the blame sort of lies elsewhere, and that general attitude is also a broader one than, like, specific to the unikernel stuff. That's all.

Speaker 7:

I mean

Speaker 1:

Yeah. Steve, there are a lot of elements there that I wanna go tease apart. One is this idea of, I wanna be able to interact with the system dynamically by creating processes, not to mutate its state, but to understand what it's doing. And I think you're exactly right. When people think, like, if I can SSH into a system, then it is a pet.

Speaker 1:

It's not cattle. I don't have immutable infrastructure, and there's a lot of value to having immutable infrastructure. And I think there's, again, a bit of a false dichotomy there. But I do think, and, Francois, I hope you, Francois is currently dropping hot takes into the chat, I hope he's gonna jump up on the stage here, because I actually do think that firmware is hard to debug by nature.

Speaker 1:

I think embedded systems are hard to debug by nature, and I think that requires more attention, not less. I think the tooling is historically not good, not because people don't care about the problem, but because you have to care about the problem even more than anyone else, because it is so hard in an embedded system to be able to debug it. But that's a hot take to get Francois up here, because that's what Francois's company currently does.

Speaker 4:

I will I

Speaker 6:

I'll take the bait very briefly. Hi, I'm Francois. I make debugging tools for firmware, so I have opinions. I think, you know, having worked on firmware for a while, but also nowadays building web applications, I think fundamentally, debuggability is a function of your runtime.

Speaker 6:

Echoing the point that I think it was Dan made: if your Ruby on Rails application unikernel isn't debuggable, blame Rails, not the unikernel. And when it comes to embedded systems, you know, what's your runtime? I would argue it's 3 things. Number 1, it's the, you know, application programming interface of your chip.

Speaker 6:

So whatever facilities Arm provides you, number 1. Number 2 is your libc, which, you know, provides some facilities. And number 3 is your operating system. And ultimately, I think all three of those have exposed ways for us to do our job and debug our systems, and we've done a terrible job; we actually haven't cared. Right?

Speaker 6:

Like, for example, yeah, CoreSight has been a part of all Arm chips for a really long time. It is extraordinarily powerful. It has lots of warts. It's not a perfect system.

Speaker 6:

But nowadays, with the newest revision of CoreSight, you can basically stream, you know, you can trace sampled program counters to a buffer in RAM, and then extract that and do, you know, as good a job profiling your application as you would on, I think, a modern Linux system, more or less. Nobody I know uses it. Nobody at all. And so
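As a sketch of what doing something with those sampled program counters might look like on the host side: bucket the samples by symbol to get a crude flat profile. The symbol table and the samples below are invented; on a real target they would come from the ELF and from draining the trace buffer over your debug transport.

```rust
use std::collections::HashMap;

// Map a sampled program counter to a symbol, given (start, end, name) ranges.
fn symbolize(pc: u32, symbols: &[(u32, u32, &'static str)]) -> &'static str {
    symbols
        .iter()
        .find(|(start, end, _)| (*start..*end).contains(&pc))
        .map(|&(_, _, name)| name)
        .unwrap_or("<unknown>")
}

fn main() {
    // Invented symbol table; a real one would be extracted from the ELF.
    let symbols: &[(u32, u32, &'static str)] = &[
        (0x0800_0000, 0x0800_0400, "idle_loop"),
        (0x0800_0400, 0x0800_0c00, "spi_transfer"),
        (0x0800_0c00, 0x0800_1000, "update_display"),
    ];

    // Invented samples; a real set would be drained from the trace buffer.
    let samples: &[u32] = &[
        0x0800_0010, 0x0800_0500, 0x0800_0520, 0x0800_0c40,
        0x0800_0504, 0x0800_0d00, 0x0800_0508, 0x0800_0024,
    ];

    // Bucket samples by symbol to get a crude flat profile.
    let mut profile: HashMap<&str, usize> = HashMap::new();
    for &pc in samples {
        *profile.entry(symbolize(pc, symbols)).or_insert(0) += 1;
    }

    let mut sorted: Vec<_> = profile.into_iter().collect();
    sorted.sort_by(|a, b| b.1.cmp(&a.1));
    for (name, hits) in sorted {
        println!("{hits:>5} samples  {name}");
    }
}
```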

Speaker 1:

Almost everybody.

Speaker 6:

Well, so actually nobody.

Speaker 1:

Yeah. I mean, it depends. Are you talking about ETM? Are you talking...

Speaker 6:

I'm talking about ETB, which is the embedded trace buffer.

Speaker 1:

So ETM, I'm convinced I was, like, the first human to actually try to make real use of ETM. And ETM is the Embedded Trace Macrocell that's present in the Cortex series. And it's extraordinary. They don't have it in the M7. It's gone.

Speaker 1:

It's like Because

Speaker 6:

nobody uses it.

Speaker 1:

Because nobody uses it.

Speaker 6:

And most of the semiconductor vendors don't implement it. But the point I'm making is, you know, when tools are built, they're not being used. And if you look at your OS layer, and I know we all love Hubris here, but still, FreeRTOS is maybe the 900-pound gorilla. Yeah. FreeRTOS has awful debuggability, not because it's not possible to do, but because they just haven't prioritized it.

Speaker 6:

And of course, if you peek under the hood and you look at the data structures, it's piles of macros on top of each other, mostly in order to deal with some MISRA compliance misery. And the bottom line is, the point I'm making is, I would argue that if your system has poor debuggability, it's just that nobody has cared to build good debuggability. And I have yet to see someone build something amazing to make their, you know, database debuggable that you couldn't build on an embedded system or in a unikernel environment, not being an expert, with a little bit of work. That's, you know, what separates amazing from bad: just work. And I'll stop there.

Speaker 3:

I mean,

Speaker 1:

I do think in an embedded system, though, you don't have, for example, SWD, which is single wire debug, which is what allows you to control a Cortex-series part. You actually don't have that in your embedded system in the field. Right? Which does make it more challenging to debug those things in the field, don't you think?

Speaker 6:

What don't you what what do you mean by you don't have that? You mean you don't have it when you're remote?

Speaker 1:

Yeah. I mean, well, I'll just tell you. Our experience is that we've built a lot of tooling around the SWD functionality. And when we actually get these sleds loaded into the rack, like, we're not attached to SWD. We are actually debugging this thing over the network.

Speaker 1:

And I don't know if you know what we've done with our root of trust. This is just delicious innovation, due to Rick and Laura and Cliff and Matt and a bunch of folks: the root of trust actually has the SWD lines to the service processor, so the root of trust can actually control the service processor. And we actually use that to take a dump of the system. So we can take a system in the rack, and we can have the root of trust stop the service processor and take an actual dump of memory.

Speaker 1:

It is a little bit surprising. Maybe, Francois, you can shed some light on it.

Speaker 6:

But it sounds but it sounds like you solved your problem. Right?

Speaker 3:

Like, you know?

Speaker 1:

Oh, for sure. Yeah. For sure. I didn't mean to imply it was impossible.

Speaker 1:

It was just like it was just tricky.

Speaker 6:

But it's tricky because nobody's cared to build it. If this were the Linux ecosystem, or if this were the JavaScript ecosystem, you would have just plugged a library into your, you know, root of trust. You would have imported, you know, cargo-added to your project, a library that just does this and not thought about it. There are JavaScript debugging libraries that do crazily complex things. And we don't think about them much because we just npm-add them and then move on with our lives.

Speaker 6:

It's a community problem. It's an open source problem. And it's a dedication-to-the-craft-of-building-tools problem, not an essential complexity problem, I think. And in fact, you know, to your thing about the SWD lines, yes.

Speaker 6:

Still, the hardware world is very hardware focused, making debugging hardware-first. But you can also use, you know, the debug and watchpoint trace units to basically build your own software debugger for your firmware. Again, nobody has built a great off-the-shelf open source library for it, but you can.

Speaker 1:

Yeah. That's a really good point. I don't know if you've looked at the debug facilities that exist in the Cortex series. There are actually some really good hardware debug facilities that, Francois, I have the same reaction to: people don't seem to be using them, which is kinda tragic, because you've got things there that I would love to have at the host CPU level that we just don't have. I mean, incredible tools.

Speaker 1:

But I do think, and, again, we can just agree to disagree, that in these embedded systems it is harder: the limited interactions with the outside world, the limited level at which they can be dynamic, make it harder to reason about when they are misbehaving. You're gonna have to use a different set of tools, and in part you also have to use a very different set of tools in development than you're gonna use in production for these systems, which maybe is true for lots of systems. But I

Speaker 6:

think where I'm happy to leave it is, it's hard, but 80% of the reason it's hard is that there's just nothing out there. No community. Nobody to help you. No blog posts to read. And 20% of the reason might be essential complexity.

Speaker 6:

And so I would say, if we try to bring back the metaphor, or, you know, look at what we can learn from there and apply it to the topic at hand, which was unikernels, we might say, okay, maybe we think they're hard not because they're essentially impossible to do right, but rather because no community ever sprouted and the UX sucked from the beginning, so nobody wanted to be part of that community. And, you know, the human story is oftentimes the more interesting one.

Speaker 1:

Yeah, I think that's fair. I mean, because you said, like, well, if it's Linux, you just use a library. But, I mean, it's GDB. I mean, you cannot be a fan of GDB; we can do better.

Speaker 1:

GDB is, I mean, I don't know. Maybe you love GDB. I find...

Speaker 6:

We must do better.

Speaker 1:

Right. Exactly. And it should be said, I mean, this is the genesis of Memfault. Right? It's you getting frustrated that teams are having to reinvent things every time they do a new system.

Speaker 1:

Am I saying that correctly?

Speaker 6:

Yeah. You basically kind of, you know, set me up here. But bottom line is, you know, I was looking at my friends who were building cloud applications and were able to log in to a dashboard and find out whether their system was working or not. Forget about why it's not working. Even figuring out whether your product is working or not, you could somewhat figure it out from some dashboard, and you might get an email or a page alerting you when it doesn't work.

Speaker 6:

And I was toiling away at, you know, devices like the Pebble watch or the Oculus virtual reality headsets. And the only way I found out that they were broken is people called me. You know? I received a thousand emails because we shipped a bug in a firmware and customers noticed. And I thought, there's no essential reason why this experience as a software developer is different.

Speaker 6:

The only reason is investments in tooling and, you know, habits of building tools as well as building products. Something Oxide has been doing super well. Right? Like, building tools alongside products. And so I thought, you know, I'm gonna start building tools for my products, and it, you know, spiraled a little bit out of control, and now I'm running a tools company.

Speaker 6:

But I think the essential thesis is that there is no essential reason why firmware cannot be built like software, with debuggability, with good tools. And I continue to believe that; you know, I've been doing this for 4 years now. So it's

Speaker 1:

And so turning to unikernels, then, if you use unikernels as kind of an approximation for firmware, because they share a lot of things in common, your belief would be, hey, there's nothing in unikernels that makes this impossible. It's just that the time and energy has not been spent.

Speaker 6:

Exactly. We just haven't had a Solomon, you know, the founder of Docker and an incredible engineer, who said, I'm gonna build an ergonomic system to use this technology, and good tools alongside it. And that's often the difference between vibrant ecosystems and really crappy ones.

Speaker 2:

First of all, I think you're spot on. And, Bryan, I was just thinking, you know, if the unikernel manifesto had also included: if there is a problem with the unikernel, we don't just restart it; first we capture its entire memory state, bundle it up, put it somewhere for later analysis. I think that might have changed at least how I thought about them, or how I viewed them with regard to the need for building robust systems. What do you think about that?
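A hosted-Rust sketch of Adam's "capture before you restart" idea: on failure, snapshot a few labeled regions into one artifact with a trivial header, so there is something to analyze later. The format, the region names, and writing to a local file are all invented for illustration.

```rust
use std::fs::File;
use std::io::{self, Write};

// Copy a few labeled regions into a single artifact with a trivial header.
// The format and the local file are stand-ins; a real system would reserve
// the buffer ahead of time and ship it off-box for analysis.
fn write_dump(path: &str, reason: &str, regions: &[(&str, &[u8])]) -> io::Result<()> {
    let mut out = File::create(path)?;
    writeln!(out, "dump v0 reason={reason} regions={}", regions.len())?;
    for &(name, bytes) in regions {
        writeln!(out, "region name={name} len={}", bytes.len())?;
        out.write_all(bytes)?;
        writeln!(out)?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // Stand-ins for interesting state: task stacks, heap, device queues...
    let task_stack = vec![0xAAu8; 256];
    let net_state = vec![0x55u8; 128];

    // Instead of just restarting on failure, preserve the evidence first.
    write_dump(
        "unikernel-core.dump",
        "stack-overflow",
        &[("task_stack", task_stack.as_slice()), ("net_state", net_state.as_slice())],
    )?;
    println!("wrote unikernel-core.dump; now it is safe to restart");
    Ok(())
}
```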

Speaker 1:

Yeah. I think it'd be interesting. I think that would also likely be a consequence of someone trying to actually use it to build a broader system. I

Speaker 5:

mean, it's

Speaker 7:

it's It's right.

Speaker 1:

I mean, kind of my other frustration is that it's like, hey, go build something with unikernels. Great, if this is the right answer, go build with it, and show us all.

Speaker 1:

And we saw that with Docker, where the tooling delighted developers, Steve, to your point, and, Francois, to your point, the tooling was superlative, and it showed, and people could develop things faster. And I think with unikernels, that ended up being more rhetorical than demonstrative. And I think it was... So

Speaker 3:

there are a small number of systems in the world that do actually build things in that way. I worked briefly on radar signal processing, and they were actually using the speed of the cycles on the CPU to do time-of-flight radar. So they wanted it to be pretty well real time. And that thing had no operating system. You compiled directly for the chip, and it knew what the size of the stack needed to be, because you were not allowed to have cycles in your call graph.

Speaker 3:

And it said, okay, here is the maximum depth of the call graph. And if you're gonna blow the stack, that is a compile time error. But it was a real pain to work in that environment because, you know, sometimes it's nice to have cycles in your call graph.

Speaker 1:

Yeah. Interesting. So this is going back to kind of our earlier discussion about, can we reason about stack depth; is stack depth knowable? So you're in an environment... what's the language in this environment, by the way? Assembly?

Speaker 3:

What is this? It was a language that generated C code that then got compiled for the chip. So it was sort of a C-ish thing that was specifically for targeting this one chip that was used in this one radar use case. And to the best of my knowledge, the language was never used elsewhere.

Speaker 1:

Interesting. Are you putting "language" in air quotes when you say it, or should I be putting it in air quotes? It feels like I should. But interesting. So, yeah, that sounds like a tantalizing application for sure.

Speaker 1:

And so, just to Francois's point about this tooling coming from the system, the system itself would impose these constraints, that you could not have a cycle in your call graph, for example.

Speaker 3:

Well, it was the compiler that was enforcing it. There was very little security on the system itself. The philosophy was that at the place where we put the software on the chip, there's a guy with a gun who won't let

Speaker 1:

you put on anything that's not approved.

Speaker 6:

And, basically, any MISRA-certified, you know, firmware has pretty much that constraint. Right? If you're building firmware for a car, it most likely needs to be ISO 26262 compliant, and it needs to go through a MISRA linter that will yell at you if there are any cycles in your code, because it actually wants to solve for how much stack depth you have. In other words, you can build a runtime that will make it so that you don't have to solve the halting problem in order to reason about your stack depth, but you might not like it.

Speaker 1:

Right. Right. Well, this was a big eye-opener for me when I was an undergraduate. I was working at an operating system company, QNX, a real-time operating system, selling into all these hard real-time customers, and learning about all these things like breakpoint analysis and all this kind of academic real-time systems work, and then learning that, like, oh yeah, no, we don't. Sorry.

Speaker 1:

No one does any of that. You actually just beat on the system until you have confidence that it's gonna meet its deadlines. It's like, oh, god. Which, you know, was eye opening. It was revealing about engineering versus academia, for sure.

Speaker 7:

Different than any other operating system project you've ever worked on, like, at all?

Speaker 1:

In terms of, like... well, I do actually feel that part of what I love about Rust is that you do shift a bunch of that cognitive load, and the compiler helps you out a lot; the compiler can tell you a lot about when you've got something that would be potentially misbehaving. It doesn't let you do it. I mean, one question I have on MISRA: are you seeing any... I mean, because Rust is actually, in the limit, a great use case for these hard real-time, deeply embedded systems. But I'm not sure if MISRA has quite caught up to Rust in that regard.

Speaker 6:

No, not as far as I know. The only effort that's getting us slowly closer to that is the folks at Ferrous Systems, who have spun out a group that's trying to build, I think they used to call it Sealed Rust, I don't know if that's still what they call it, but trying to build a basically documented-enough and verified-enough Rust compiler that it could then be used in safety-critical applications and go through all of these different requirements, whether that's MISRA compliance or, you know, the FDA also has a concept called SOUP, software of unknown provenance, which specifies all these things you have to do to include a third-party library, or use a compiler, or something like that to build a medical device. And by the way, really well-thought-out stuff, huge emphasis on the actual risk. So if you build a device that is, like, a wrist-mounted tracker that measures your blood oxygen level for, you know, day-to-day use, the requirements are much, much looser.

Speaker 6:

And if you're building a pacemaker, where, you know, failure of the device can cause grievous harm, the requirements are a lot stricter. And I was neck deep in this world last week, so I'll leave it at that. But long story short, it's gonna take a decade before with good reason, by the way.

Speaker 1:

Good reason. I was gonna

Speaker 3:

say It's

Speaker 6:

gonna take a decade before you see Rust in a pacemaker, maybe longer. And while I'm talking, I'll pose one question back to you, which is: do you think the best technology always wins? Because I think that's at the root of the question about this, like, Docker versus unikernel versus whatever. Ultimately, we're arguing technical merits, but in my experience, it's not the best technology that wins. There are many other human factors

Speaker 5:

I

Speaker 6:

that that have a much, you know, much stronger influence.

Speaker 5:

I, I have a hot take on this, but I also just want to say it's called ferrocene now. Sealed rust was the old name. You're correct about that, but ferrocene is the current name. Just,

Speaker 1:

yeah. But I and then Steve then you gotta drop it. You can't just be helpful. You gotta drop in your hot take, please.

Speaker 5:

Okay. The hot take is if it doesn't actually help people, like, marketing factors are real factors, and that means it's a worse technology. It's not better in some, like, abstract sense. You could be, like, the quote, unquote best thing in a vacuum, but if nobody can use it, that's not just a human factor that actually means it's a bad technology. So I I kind of think that, like, best technology wins is, like, technically correct, basically.

Speaker 6:

If you define best technology in an expansive way, which goes beyond the, like, you know, engineers arguing in their or scientists arguing in their papers. Yes.

Speaker 1:

I mean, I think that, whether a technology quote unquote wins, first of all, like, I do think that especially in a post open source world, we should not merely think in terms of winners and losers. Because I think that technologies can survive in perpetuity with a limited audience, and that's okay. That's good. I mean, like, seL4 is not that's not a dead technology. That's an important, vibrant technology, but it's one that's potentially small in terms of the folks that are using it.

Speaker 1:

So I think that we don't wanna necessarily think but I also think that the kind of the ubiquity of a technology is only related in certain regards to its purely technical qualities. It is definitely related to how it actually meets the user. And I do think I mean, Steve, Rust is kind of an interesting one because, I mean, Rust is a language that is enjoying more and more popularity and is a really good artifact. It's really technically rigorous. And if anything, I think Rust should give us some solace and confidence that ubiquity of a technology does not necessarily mean that it's mediocre.

Speaker 1:

I think there's been kind of this idea that, like, well, the best technology never wins. Therefore, any technology that is ubiquitous is crap. And it's like, well, not necessarily, maybe. Maybe some, technology can, can have ubiquity and still be pretty great. I don't know.

Speaker 5:

Sounds good to me. I don't know. I guess I wouldn't I wouldn't argue that, like, good technologies are actually crap, so it's hard for me to say anything other than, like, yep. I do think it's nice when something gets good. You know, popular is also good.

Speaker 3:

So What

Speaker 1:

what is the turning point for Rust, do you think, by the way? And I know it's hard for someone who's been, you know, close to it for a very long time. Because one thing I do when you look back at this kind of unikernel fever in 2016, and some of the problems that it identified around surface area, and around burdensome complexity, and so on, I do feel that some of those problems were actually addressed by Rust at some level, and I do wonder: does the rise of Rust kind of show that the world has found different ways to solve some of those problems?

Speaker 5:

Yes. I think I thought I was gonna answer the start of that differently than I am at the end of it, So that's my that's my funny, silly answer why I'm starting that, like, a little different. I think

Speaker 1:

Can I choose your own adventure? Can you give us both answers and then we can kinda choose what path to go down?

Speaker 5:

I don't know. I just think, like, it's hard. Like, I think Rust is successful because of taking certain things technically seriously, but also taking things, like, socially seriously. And I think that there's a lot of good sort of, like, low-level programming languages that could have been more successful, but ultimately weren't, because they missed out on a lot of those kinds of sort of the rest of the zeitgeist, if you will. But that's sort of a little, I don't know.

Speaker 5:

I feel like I'm like going super, super way off topic or something. So No.

Speaker 1:

I don't I don't think this is off topic because I I think this is this is all around kind of, like, how we think of abstractions. And I think then how we also affect change in those abstractions. Because I think one of the Yeah. The way like, why did Okay. So why and how is Rust succeeded in kind of changing our abstractions?

Speaker 1:

And, you know, a bunch of these unikernels kinda didn't, despite some initial enthusiasm.

Speaker 5:

So personally, I have some theories about why Rust is successful, and I think, in a, you know, sort of self-serving way, it's kinda like, this is the reason why I decided to work on Rust and therefore it was successful. So, of course, I think those things are sort of the reason why. But, like, part of what drew me to Rust in 2012, and what still does today, is that, like, Rust sort of like I don't know. It kinda like okay. So I was like, this language seems good, like, on its own.

Speaker 5:

Like, I like ML sort of things, and I thought it was sort of, like, a well-constructed language. But then I saw that it had the backing of Mozilla, and that meant that there was, like, money to pay people, because it turns out that, like, building a production-ready language these days is a herculean effort and requires lots of people, and therefore either time or, like, money in order to get more people to work on the thing. And so I was kinda like, okay, both of those things make this seem feasible. Like, because there's a name behind it and because there's a budget behind it already, that's, like, a super, super huge indicator of success.

Speaker 5:

And there's a lot of folks who sort of have that question about programming languages in general. Like, can you really build like, to what degree can the, like, what happened in the nineties, where all those languages were like, one person was like, oh, make a programming language, and now billions of dollars of the world economy relies on what was originally somebody's hobby project. Like, can you still do that now is a question that sort of programming language people debate, but at the time I definitely think that it was a positive. So I think that really significantly helped Rust be a thing.

Speaker 5:

So I think that's a large part of this kind of, like, why people use these technologies, and, like, non purely tech factors.

Speaker 1:

It's kind of a self-fulfilling prophecy, in that it attracted enough people like you, even in those kind of early days, like, hey, if Mozilla's backing this, my efforts are not gonna be for naught. And then those efforts, of course, I was not attracted to Rust at all because of Mozilla. But I definitely was attracted to it because of a bunch of the maturity that was now in it and how practical it was for solving real problems. And I think that pragmatism is something that I always look for in a technology, because it shows that to me, something that's very important is that a technology is being used to solve actual problems, and it's not being done for its own sake.

Speaker 1:

And maybe, Francois, this gets to some of your questions about, like, this idea of technologies that are being done for their own sake then not being used. It's like, well, you've got to actually you've got to develop it as you use it. That's what leads to the best possible artifacts, I think.

Speaker 6:

Building products and tools together, I think, leads to the best outcomes. But beyond building, thinking about trust I think, you know, Steve was talking about Mozilla. I would call that, like, trust. Right? Distribution. Price.

Speaker 6:

I think there are many, many technologies or software packages that won out because they were free versus others that weren't. And you might call these, like, part of what makes something the best technology, but I think they're distinct from the purely technical. And I think when we argue about the relative merits of a technology, we should think beyond the purely technical aspects: is there, you know, trust? Is there distribution? Is the price right?

Speaker 6:

Is, you know, is the code of conduct one that you want to attach yourself to? There's many, many more things that go into making a project successful. And I think that the way Steve talked about Rust resonates a lot with me because, ultimately, none of the things he mentioned were technical. They were really, like, human factors. For me, what attracted me to Rust was that it explored a problem space that, like, felt new, that wasn't accessible to me before Rust.

Speaker 6:

Because I work in the systems space, right, I write bare metal code. I don't have a lot of choices when it comes to the tools that I use, and a programming language that opened up, like, new options and explored new ideas even if maybe, you know, I'm not an ML or a programming language researcher, so they might have been old ideas to some, but to me, they were new ideas, and that was really exciting.

Speaker 1:

Yeah. I feel the same way. And I feel that, like, I mean, just bringing sum types, which I had not used, living in the ghetto of C, totally changed things. I mean, it was not the reason I first started dabbling with Rust, but it was kind of the first impression that I had.
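
As a rough illustration of the sum types being referred to (a hypothetical Rust sketch, not anything from the episode): a sum type says a value is exactly one of a fixed set of shapes, and the compiler makes you handle every one of them, which a plain C struct-plus-flag never did.

    // A sum type: each value is exactly one variant, and `match` must
    // account for all of them or the program won't compile.
    enum SensorReading {
        Ok(u32),
        OutOfRange { raw: u32 },
        Disconnected,
    }

    fn describe(r: &SensorReading) -> String {
        match r {
            SensorReading::Ok(v) => format!("reading: {v}"),
            SensorReading::OutOfRange { raw } => format!("out of range: {raw}"),
            SensorReading::Disconnected => "no sensor".to_string(),
        }
    }

    fn main() {
        println!("{}", describe(&SensorReading::Ok(42)));
    }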

Speaker 1:

And again, you immediately see that pragmatism, which I do feel is I mean, that to me is lacking from some of these unikernels. Now someone had asked in the chat about Mirage OS. I do think Mirage OS is by far the most interesting well, Mirage OS is definitely an interesting one. I think Rump kernels are interesting. And it sounds like, Steve, that's kind of the origin of the system you were dealing with, a Rump descendant.

Speaker 1:

And I mean, I think part of what makes this a challenge in terms of defining unikernels is that they actually are pretty different from one another. I mean, Mirage everything in Mirage is in OCaml. It's very much bound to the language, versus in Rump, you could do things that are a lot more arbitrary, but then you kinda have to port them into Rump. So I

Speaker 5:

I think

Speaker 1:

this is this

Speaker 5:

is a great way to tie the 2 things together, though, because, like, I was thinking about this a while back. Like, part of the issue, I think, is that the canonical best example of unikernels is tied to a programming language that most people sadly don't use or know. And so, like, it's just inherently because it's, like, it can only ever really attract part of the audience of OCaml folks, and not, like, everyone who's building web apps. And so, like, that's, like, kind of missing that human factor or whatever.

Speaker 1:

Totally. And I think you then need some value that's just gonna be, like, off the charts to really draw you in on that. And I think that I mean, Mirage, to its credit, is, like, still around, and people are using it. And I actually loved, you know, just reading a recounting of a Mirage meetup, and they're just talking about all the things they're actually building with it. It's like a total admiration for it.

Speaker 1:

It's open source. It can

Speaker 5:

Sometimes people are like, I wish there was a Rust but with a garbage collector, and it's like, OCaml's, like, crying in the corner. Like, I'm right over here, everyone. Like, I've been here a really long time just, like, waiting. So, like, if you've ever wondered, like, dang, what would Rust with a GC be like? Like, seriously, go try OCaml.

Speaker 5:

Is it exactly Rust with the GC? No. But, like, it's the closest thing that exists.

Speaker 1:

And Steve, are are you I mean, there was someone in the chat saying that OCaml is getting a bit of a resurgence now. Yeah. Do you think you

Speaker 5:

Recently, OCaml achieved something they've been talking about for, like, 15 years and then finally actually did, which is making multicore work well. Yeah. So totally. I mean, definitely, it's got a lot more attention. There's also just in the last, I would say, 5-ish years, it's also gotten a really big boost, like, with Facebook writing a lot of, like, tooling that, you know, folks in the JavaScript ecosystem would use, like, in OCaml.

Speaker 5:

I think they've sort of moved away from a lot of that by now, but, like, that definitely was it's, like, wasn't the Reason compiler written in OCaml? A lot of the programming language related stuff at Facebook is in OCaml. So, yeah.

Speaker 1:

Which is great. I mean, because I do think that I mean, something I don't wanna lose in this. I mean, you don't wanna become I mean, god. I mean, you know, we were talking about it earlier today, but Bjarne Stroustrup's piece on safety that was picked up today is just Adam, did you read this thing? I don't know if you saw this.

Speaker 2:

No. I haven't seen

Speaker 1:

it. Oh. Oh, brother.

Speaker 5:

Oh my gosh.

Speaker 1:

It's just like you just don't want it to sound like this. You you you just don't wanna be I mean, it just comes across as I mean, Steve, what's the the the final line of it? Do you do you happen to have Oh, there's

Speaker 5:

2 versions. There's 2 versions of the paper actually, and the first one is a small one just by Bjarne. It's 2 pages long, and the final sentence is: and anyway, what is, quote, the overarching software community, unquote. To the best of my knowledge, no experts from the ISO C plus plus standards committee were consulted. Which is this is like about the government saying, like, memory safety is important, and everyone saying we should move away from memory unsafety.

Speaker 5:

And, that's the like response, but there was also a longer 10 page version of the paper, which does not have that language, but, is a lengthy description of the way they think C plus plus should orient itself towards this topic in the future. Yeah.

Speaker 7:

And they keep misspelling Rust, which is awesome. Seemingly deliberately.

Speaker 1:

Oh, it's spelling rust in

Speaker 5:

All caps.

Speaker 7:

Yeah. All caps. Just like Oh. Kind of because. Right?

Speaker 7:

Which seems Why

Speaker 1:

are we screaming RUST? Anyway. I mean, I do think, especially as, you know, one is deep in one's own technology, it can be kinda tempting to dismiss all others. And I think one wants to and I think you could argue fairly that maybe I fell into that trap a little bit with this unikernel polemic from 7 years ago. Because I do think that there are I mean and, Adam, do you remember my idea of having a conference called In Retrospect, where we revisit past talks?

Speaker 2:

Yeah. Yeah. I think, kind of the pair to Systems We Love. Yeah.

Speaker 1:

Yeah. Right. Wouldn't it be good? Where people, like, have a I wanna, like, I gave this talk 7 years ago, and now I kind of wanna talk about how my thinking has changed a

Speaker 3:

little bit.

Speaker 2:

Yeah. That'd be great.

Speaker 1:

I think if I were to give that talk on unikernels, I feel that the bit that I would expand is that there are some interesting bits here around the unit of delivery that I think are actually really important, and finding ways to achieve some of these goals. And I think Steve, to your point about just eliminating complexity. I mean, actually, I gave a and Adam, you may remember this talk because I quoted Leventhal's conundrum. Leventhal's con may I attempt to explain it to you?

Speaker 2:

The coiner of Leventhal's conundrum. Yes.

Speaker 1:

I think yeah. Did I coin Leventhal's conundrum? You

Speaker 2:

described it to you described it to me. I appreciate it. Yes. No. You you are the one who gave it the name, and I greatly appreciate it.

Speaker 1:

Which is when you are looking at a a a pathologically performing system, you are given the hurricane and you must find the butterfly. Right? Is that

Speaker 2:

I'm Yes. Yes. Better articulated than I think I did in the moment when I was beating my head against, you know, the hurricane.

Speaker 1:

And which, you know, is something we do a lot, and we're kind of dealing with the complexity of the system. And one of the questions that I actually got that talk was recorded, but the questions and the answers were not. And I guess I'm both glad and I regret it, because there's a question that I would love to take back. I was asked, do you think we're going to have less complexity in our systems? You talk about, you know, the emergent behavior in the systems and how complexity has grown.

Speaker 1:

Is there gonna be anything that actually reduces the cognitive load? They didn't phrase it exactly this way, but it's kind of what I heard. And my answer was basically like, nope. It's just gonna get worse. I mean, it was just like a and there have been so many and that that kind of talk is, like, right before I really got into Rust.

Speaker 1:

And I wish I had been looking around a little bit more, to be like, well, actually, there are things out there, I mean, including OCaml for that matter, where folks are trying to actually rethink parts of these abstractions: how can we slim them down without losing why these abstractions were created in the first place? And, Steve, one of the reasons I think Rust is so successful is because Rust managed to dial this in really, really well, where there is a reverence for past systems while still wanting to improve the state of the art. And I don't

Speaker 5:

That is, like, all Graydon like, maybe all is the wrong way to put it, but, like, that's such a Graydon thing. I think it is great that Rust, like, inherited that. Sorry. I feel like I cut you off in the middle there. No.

Speaker 1:

You didn't, no. No, I think it's great. Because I think that's really important in terms of, like and if that reflects, like, Graydon's disposition, I mean, that is Graydon

Speaker 5:

Graydon is the person that knows the most about programming languages of anyone I know. Like, you can mention the most hipster, obscure, offhand programming language to Graydon, and, like, not only has he, like, heard about it, he, like, knows the person who wrote the paper or, like, implemented it for fun. Like, he just knows so much about the history of programming languages. And I think that's a large part of, like, why Rust fits together really well.

Speaker 1:

Well, because it feels like, for any given thing, Rust, like, looked across all languages and took the one that was the best. And there are I mean, there were just so many examples across the board where you kinda came to it like, wow, this is great. It reminds me of, you know, RISC-V, the same thing with instruction sets. I love the mechanics of the RISC-V instruction set; they're great because they know instruction sets really well, and they looked around, like, what does x86 do well?

Speaker 1:

What does ARM do well? What did SPARC do well? What did MIPS do well? What did Power do well? And what's gonna take the best of all of that?

Speaker 5:

You were asking a while ago about, like, where does Rust turn the corner, or something like, where does this go, or, like, whatever. Also, it's kinda where you're just getting into someone in the chat linked to a paper that also, I think, today is just, like, a wild thing to me personally, which is Consumer Reports has now put out this report that's like, we need to get companies to care about memory safety. And they, like, directly state that. They have identified Microsoft and Apple as companies that they hope will get on board with the idea of voluntarily providing a memory safety roadmap and explaining how they plan to eliminate memory unsafe code in their products over time.

Speaker 5:

Steve, do you

Speaker 1:

have a deep role operating at Consumer Reports? You do not need to confirm or deny this. But if you have a

Speaker 5:

I'll blink once for yes or blink twice for no. Yeah.

Speaker 7:

Is NEC.

Speaker 1:

This is like, oh my god. The Rust Evangelism Strike Force has this is amazing. This consumer thing is

Speaker 5:

It is amazing. I think that I jokingly wrote, I can't wait to hear the conspiracy theories about this one, to someone on Twitter about this earlier. But I think if you look at the names on it, it looks like maybe it's the ISRG getting in there. I would expect that's how this happened. But I would be interested to know, like, the story about why Consumer Reports is caring about memory safety.

Speaker 5:

It sounds great, but I'm just like, what? This is just so interesting to me.

Speaker 1:

This thing is outstanding. It's well written too. Oh my god. I love yeah, the fact that they are calling us back not just to Unsafe at Any Speed and Ralph Nader, but then going back to The Jungle.

Speaker 1:

Bio.

Speaker 5:

Yes. I know. I also it's like there are 2 works that, like, people associate with, like, unsafe manufacturing. I was like, Unsafe at Any Speed. Okay.

Speaker 5:

And then, like, The Jungle. And I was like, wow, are you saying C plus plus is like, I found, like, crap in my food? Like, is that the analogy that's being made here? Like

Speaker 1:

Exactly. C plus plus is like the Triangle Shirtwaist Company. I mean, this is just it's remarkable actually, and great. I mean, good on them for that level of awareness. Leonard Cannon, I'm not sure where you found this, but this is terrific.

Speaker 1:

This thing is absolutely golden. Oh my gosh.

Speaker 5:

Yeah. So, so yeah. So now that, like, as normie of an organization as Consumer Reports, like, cares about programming languages, like, this is Rust turning the corner, basically, like, IMHO.

Speaker 1:

Yeah. I think that's kind of interesting, because I do feel that, you know like, was it an NSA report that Stroustrup was reacting to? I don't know what the report was that he was initially reacting to.

Speaker 5:

I forget if it was NSA, but I think it was the NSA. Yeah.

Speaker 1:

Yeah. But

Speaker 3:

the the the

Speaker 1:

fact that there's kind of this

Speaker 5:

NSA, you're on the call already. Please answer it. Was it you or was it someone else? Right.

Speaker 1:

Hey, NSA. Could you please give me audio problems right now if we are correct? You know? But the, you know, the fact that you're getting kind of these strident reactions to it, I think, shows that, like, okay. Yeah.

Speaker 1:

This is, you know, we are getting a much broader awareness, which is terrific. I mean, it is great for software, honestly. Because I think it's so easy to kinda fall into the trap of, like, oh, everything gets worse, and it's bad. But it's like, you know, actually, some things can get better, and this is awareness of this issue. And, boy, simply focusing it and sharpening it with this Consumer Reports thing is pretty great.

Speaker 1:

And then I feel that, like, it just to kind of, you know, bring it home for a second, then we'll kinda wrap it up to let Adam get get back to his to his family. But the I I I think that, like, that's the kind of problem statement that is really missing from unikernels. It's like and I think that, you know, it would be, I think, wise to when we are going to change the abstraction. And if we wanna get rid of something like memory protection, which again, I feel pretty strongly that we should have memory protection in the system, and I think it's a mistake to get rid of it. And I think if you're gonna get rid of it, you need to get rid of it for really, really, really crisp reasons.

Speaker 1:

But if you're gonna get rid of memory protection, then you need to have not just a great reason for doing that, but go build a system that way. And learn while you're doing this, and then you can really show us instead of just telling us, show us the systems that have actually been built. And then it is, I think, less of a kind of an emotional appeal, like, look, we can get rid of all this crap and have nothing, and much more of a, hey, these are the actual benefits of doing it this way.

Speaker 5:

Yeah. I was trying to look, and obviously, this is during the call or whatever, but I was, like, trying to Google for, like, examples of using Mirage OS, and it's, like, here's your hello world, and I was, like, cool, what about anything more significant than that? It's like, nothing. Like, I can't find it.

Speaker 5:

And I'm sure that exists or whatever, but the fact that there isn't, like, a clear, immediate answer when I Google, like, what's the biggest thing that uses Mirage OS? It should be, like, this thing, like, super clearly. Right? Like, if you Google, like, what is Rust used for, it will, like, give you specific things.

Speaker 1:

Yes.

Speaker 5:

So, yeah, I would agree that would definitely be and that's kinda why, while I was, like, super into them in theory, in practice it has not happened. It's just because, like, I'm not building that system, and I don't even know off the top of my head what that system would be for this. So it kinda yeah.

Speaker 1:

And I think we've been able to kind of get to some of the other aspects that make that appealing in a vehicle that I think is very robust, in terms of Hubris, which is definitely exciting. So, Johan, you got some nice closing questions. You might wanna ask those? I think I need hold on. Steve, can you make him a speaker? Because I think Adam has stepped away.

Speaker 1:

Because I think Adam has stepped away.

Speaker 5:

Yes. Adam did step away. I think I can do that.

Speaker 1:

I can do that. Maybe Here we go.

Speaker 5:

Speak. I clicked Hey.

Speaker 1:

Alright. Johan, you wanna close it out for us? You're here.

Speaker 5:

Yeah. If you're if you can hear us, Johan, your, the green circle is not appearing, so that means Discord is not picking up your audio, if you're saying anything. Still no circle.

Speaker 1:

Yeah. This report is delicious. I wanna do an out loud reading of this thing. Steve, I think you should do an audio book of this. I think you should do I think Audible.

Speaker 1:

I think you should do the report on the future of memory safety. Steve Klabnik reads.

Speaker 5:

Actually, it's fair to say.

Speaker 1:

On memory safety. They could be terrific. Alright. Johan, you there? A dramatic reading.

Speaker 1:

Oh, god.

Speaker 5:

Sounds like sounds like Discord problems, sadly.

Speaker 1:

Oh, well, you know, I guess of course, the grass did seem so green over here. But, actually, broadly, it's been so much better than Twitter Spaces. I do actually love having the chat, especially now that I can actually be in the chat. So, Steve, are you oh, no. You're using you use your actual computer, your mic

Speaker 5:

I'm on my desktop for both.

Speaker 3:

Yeah. Yeah. If you're able to

Speaker 1:

that that's that's how you're able

Speaker 5:

I I do sometimes. Like, I can be on my phone and also, like, comment at the same time. Sometimes I will do that if it's, like, easier for some reason. Usually not and for the purposes of, like, this, but, like, I don't know, occasionally. Like, at least it works.

Speaker 3:

So, so I think, about unikernels, my professors at Georgia Tech covered it pretty well. There were 2 classes that were co-requisites that you had to take at the same time, compilers and operating systems. And the first lecture in the compilers class was: you don't really need an operating system. Operating systems are only there because our compilers aren't good enough, and you should compile everything down to what you actually want to happen so you don't waste machine resources interpreting things.

Speaker 3:

And then you get to the operating systems course, and they say: the only reason we need compilers is because our operating systems aren't good enough. You should just give them your code, and they build the code into something that's perfectly suited for that machine and gets the thing to run. And, yeah, you take these two courses simultaneously, work them through to the end, and you realize that actually neither of those works at all. You're always going to have something in the middle, because no matter how much you think you're in a runtime world, an OS, a web OS, all written in JavaScript, someone's going to ahead-of-time compile things so you get faster startups. And no matter how much you think you're in a compile-time, static world, someone's gonna attach DTrace to your process and start rewriting it live.

Speaker 3:

You're always going to be in a world where there's some kind of runtime that's messing with your stuff and some kind of compiler. So going all the way to unikernels doesn't make sense, and going all the way to a purely interpreted operating system doesn't make sense. We have to live in this world in between. You can say, okay.

Speaker 3:

For this particular application, how much compilation do I need and how much runtime do I need?

Speaker 1:

So I I'm actually desperate to ask you if this was pedagogically deliberate or this is a symptom of a civil war within the department.

Speaker 6:

These these 2

Speaker 1:

these 2 professors did they actually, like, you know what we should do? We should give diametrically opposed perspectives and force them to take both courses.

Speaker 3:

It came about that the 2 tenured professors decided to write their lectures together, each one saying that the other one was completely and utterly wrong about all things.

Speaker 1:

So it is coordinated at some level. It's like, I think we should coordinate on the fact that we hate each other's guts. I don't think we should, like let's it'll be great for the students. But, I mean, it sounds like you definitely came away with the right lesson. I do think that this is, like, an important maybe a good place to close, a good thing to end on.

Speaker 1:

I I I just think it's important that, you know, we when we have these radical ideas, we wanna kinda create these extremes. But the and the extremism can be thought provoking, but ultimately, there's gonna be a pragmatic middle that is there's always gonna be a pragmatic middle. And the, that pragmatic middle is, is gonna be the thing that we actually wanna pay attention to. And so, yeah, because Johan's kinda closing question was, are we confident that unikernels can never be debuggable? I think it gets to Francois's earlier point.

Speaker 1:

I guess I think that if you do not have that debuggability I feel that sacrificing that debuggability makes it more difficult for the technology to succeed. I think that for a technology to succeed, it's gonna have to solve a real problem for someone. And I think that there's just too much, ultimately, that was not really solved, and too many problems that were created. And then Dan is saying the pragmatic middle ground is unikernels with virtual memory and a debugging thread, which is there we go. Exactly.

Speaker 1:

A general purpose operating system is what we're getting at. No, I don't think a general purpose operating sure. Alright. Well, I know Adam had to step away, and, boy, Steve, thank you very much, and, I mean, everyone.

Speaker 1:

Thanks for offering your perspectives on this. It's a a a kind of an evergreen debate. I imagine we'll come back here in 7 years, and, and we'll pick it up then. But I, in the meantime, Steve, it'll be, I think there was a suggestion in the chat. This, report on the future of memory safety, we've gotta get the authors in here if they'd be willing to talk with us and and get Yeah.

Speaker 5:

That'd be cool.

Speaker 1:

That'd be really neat because this is a really, interesting report and, really taking on, a very important issue. Alright. Thanks, everyone. We'll talk to you next time.

Speaker 5:

Bye.