Oxide and Friends

Oxide and Friends Twitter Space: May 3, 2021

Mr. Leventhal, Come here I want to see you
We’ve been holding a Twitter Space on Mondays at 5p for about an hour. Even though it’s not (yet?) a feature of Twitter Spaces, we have been recording them all; here is the recording for our Twitter Space for May 3, 2021.
In addition to Bryan Cantrill and Adam Leventhal, speakers on May 3rd included Laura Abbott, Nate, Antranig Vartanian, François Baldassari, Tom Killalea, Land Belenky, and Sid?. (Did we miss your name and/or get it wrong? Drop a PR!)
Before the recording started, we discussed:
  • 2011 Solaris Family Reunion video ~20mins
  • Katie Moussouris’s blog entry on the Clubhouse vulnerabilities
  • Laura’s blog entry on the LPC55 vulnerability
  • Land pointing us to the Atmega 328p MCU in a BK Medical endorectal probe
  • François on the STM32F103 found in Pebble
  • Intel Management Engine
Some of the topics we hit on, in the order that we hit them:
  • ASPEED BMC chip
  • [@1:24](https://youtu.be/h-WSU3kiXVg?t=84) So formal correctness is something that I think we are all very sympathetic with. > It’s very laudable, it’s also very hard.
    • From L3 to seL4: What Have We Learnt in 20 Years of L4 Microkernels? (paper)
    • Who guards the guards? Formal validation of the Arm v8-m architecture specification (paper) > Hardware architecture is an area where formal verification is more tenable, > a level you can readily reason about.
  • Our challenge is how can we satisfy our need for formalism without getting too pedantic about it. You don’t want to lose the forest for the trees.
     A system we never deliver doesn’t actually improve anyone’s lives, that’s the challenge.
  • [@5:20](https://youtu.be/h-WSU3kiXVg?t=320) Journal club experiences
    • Bootstrapping Trust in Modern Computers (book) > [@9:45](https://youtu.be/h-WSU3kiXVg?t=585) > We’ve tried to build a culture of looking to other work that’s been done. > Not because everything’s been done before, but because you don’t want to have to > relearn something that someone has already learned and talked about.
      > If you can leverage someone’s wisdom, that’s energy well spent.
  • [@11:46](https://youtu.be/h-WSU3kiXVg?t=706) When systems repeat mistakes, engineers feel deprived of agency: “I suffered for nothing.” > Engineering is this complicated balance between seeing the world as it could be, > and accepting the world as it is. > As you get older as an engineer, it’s too easy to no longer see what could be, > and you get mired in the ways the world is broken. You can become pessimistic.
  • Caitie McCaffrey on Distributed Sagas: A Protocol for Coordinating Microservices (video ~45min)
  • [@14:17](https://youtu.be/h-WSU3kiXVg?t=857) It’s dangerous to live only in the future, detached from present reality. Optative voice
  • [@16:45](https://youtu.be/h-WSU3kiXVg?t=1005) At Oxide, we ask applicants “when have you been happiest and why? Unhappiest?” Interesting to see that unhappy is all the same story: we were trying to do the right thing and management prevented it. > When I was younger and maybe more idealistic and willing to charge at the windmills, > I stayed too long with a company. > All the developers that interviewed me were gone by the time I got there. > I should have walked out the door, but I was too young and didn’t know better.
  • [@18:43](https://youtu.be/h-WSU3kiXVg?t=1123) “How do you and your cofounder resolve conflicts?” > I don’t want to hear about how you don’t have conflicts, tell me about how you resolve them.
  • Folks aren’t able to walk away, they’ve got this commitment both to the work and to their colleagues.
     I’ve been a dead-ender a couple of times, I’ll go down with the ship.
  • [@20:28](https://youtu.be/h-WSU3kiXVg?t=1228) In “Soul of a New Machine” (wiki) Tom West says he wants to trust his engineers, but that trust is risk. > I just love that line: that trust is risk. > That’s part of the reason some of these companies > have a hard time trusting their technologists, > they just don’t want to take the risk.
  • People are so not versed in how to deal with conflict, and there’s nothing scarier than salary negotiation.
  • They need you, that’s why you’re here, you’ve made it all the way through the interview to this point, you’ve got leverage, now’s the time to use it.
  • [@23:04](https://youtu.be/h-WSU3kiXVg?t=1384) Oxide: Compensation as a Reflection of Values > It takes the need for negotiation out, > because it replaces it with total transparency.
  • Sometimes it’s not about what you’re getting paid, it’s about what the other person is getting paid. Not wanting to get taken advantage of.
  • It’s a social experiment for sure.
  • [@28:07](https://youtu.be/h-WSU3kiXVg?t=1687) Steve Jobs famously tried this at NeXT: pay was transparent but not equal.
    • History of compensation at NeXT (wiki) (quora post) > I think that’s the worst of both worlds, a recipe for disaster.
If we got something wrong or missed something, please file a PR! Our next Twitter space will likely be on Monday at 5p Pacific Time; stay tuned to our Twitter feeds for details. We’d love to have you join us, as we always love to hear from new speakers!

Creators & Guests

Host
Adam Leventhal
Host
Bryan Cantrill

What is Oxide and Friends?

Oxide hosts a weekly Discord show where we discuss a wide range of topics: computer history, startups, Oxide hardware bringup, and other topics du jour. These are the recordings in podcast form.
Join us live (usually Mondays at 5pm PT) https://discord.gg/gcQxNHAKCB
Subscribe to our calendar: https://sesh.fyi/api/calendar/v2/iMdFbuFRupMwuTiwvXswNU.ics

Bryan:

If you've heard of ASPEED, you're probably having a stress reaction right now, because it's one of these companies that you've only heard of for bad reasons. But ASPEED is the company that makes the BMC that is in most servers, or all servers. So we're getting rid of that one too.

Speaker 3:

I had a quick question around the second part of your answer, where you said there seems to be traction in rewriting these super low-level, fundamental systems. And I know there's a push for, like, formally correct Rust, or logically correct Rust; I think it's a subset of the standard library or a subset of Rust. And I've heard of seL4, which I think is one of the provably correct kernels, where, you know, they literally employ mathematicians to prove that the system is secure in certain ways. And I wondered if Oxide and other companies are seeing either open s... I mean, you know, employing both mathematicians and programmers on the same team seems like a tough thing to do.

Speaker 3:

And I wondered if any of that is open source, or if that's really relegated to, kind of, industries that serve the army or, you know, governments or stuff like

Bryan:

that? Yeah. So, formal correctness is something I think we are all very sympathetic with, but we have not endeavored to formally prove our own software. And Laura, I don't know if

Bryan:

you want to talk about it.

Bryan:

We did a great journal club that got into some of this, on formal correctness of systems, because it's actually very laudable. It's also

Speaker 5:

very hard. It turns out a lot of us at Oxide tend to be formality nerds; we enjoy reading the papers. But I think more than anything, what we try to take away from reading these papers is things we can go and actually apply. We did a retrospective, looking back on, I think it was, the 20 years of seL4 paper, and figured out what takeaways we could take from that. And I think more than anything, what we can appreciate is that we'll probably never, at least right now, be doing the formal correctness ourselves. What we can at least take away is: what are the other methods they had that might be able to help make the code more secure? And, in particular, are there development models or other ways of approaching the problem to be able to make things safer?

Speaker 2:

Yeah. Actually, we've done two papers on the subject. So Laura's referencing one of them, and I'll link both of these out. I gotta figure out a way to have, like, show notes for Twitter Spaces. One of my many requests for enhancement for Twitter Spaces, when it stops kicking me off.

Speaker 2:

But Laura's referring to From L3 to seL4: What Have We Learnt in 20 Years of L4 Microkernels? And then the other one I was thinking about, actually, Laura, was the one that you shepherded: Who Guards the Guards? Formal Validation of the Arm v8-M Architecture Specification, which is also really interesting. And that, to me, is an area where formal validation, formal verification, is actually more tenable, when you're at the kind of level that you can more readily reason about. But I thought it was also a really interesting paper.

Speaker 2:

As you say, Laura, we're definitely formality nerds.

Speaker 5:

I'm laughing for a minute; I completely forgot that I ran the journal club about that until you actually mentioned it. That was a good paper. I think that one actually was perhaps closer to an actual formal specification.

Speaker 5:

Because it wasn't quite as abstract as some things; it was closer to trying to figure out how you can verify that a machine model is actually doing exactly what we say. What we're trying to do is, you know, make sure that this hardware is actually doing something useful. So, okay.

Speaker 2:

I'm not sure if that answers the question or not.

Speaker 3:

That was amazing, and I really appreciate it. Please definitely link that out. I guess what I parsed from your answer was that certain features, or certain functional aspects of that work, are actually implementable, and, like, a super small subset can make certain pieces of the system safer. Is that accurate? Yeah.

Speaker 2:

I think so. Look, it's small but important aspects, and there's a lot to be gleaned from all of the work that has happened on those systems. And so I think our challenge is, like, how can we satisfy our need for formalism without getting too pedantic about it? I mean, you were mentioning safe Rust versus, quote unquote, unsafe Rust. And I think sometimes you can end up... you don't wanna lose the forest for the trees.

Speaker 2:

Right? You wanna be sure that... a system that we never deliver doesn't actually improve anyone's lives. So that's kind of the challenge. You know, we mentioned our journal club. I don't know.
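The safe-versus-unsafe Rust split mentioned above can be illustrated with a minimal sketch (not from the episode; the function and values here are hypothetical examples): unsafe operations are confined to a small, commented block whose invariant is enforced by the surrounding safe code, so the compiler-checked surface area stays as large as possible.

```rust
/// Returns the first element of a slice without a second bounds check.
/// The safe wrapper enforces the invariant the unsafe block relies on.
fn first_unchecked(xs: &[u32]) -> Option<u32> {
    if xs.is_empty() {
        return None; // the invariant check lives in safe code
    }
    // SAFETY: we just verified that xs has at least one element.
    Some(unsafe { *xs.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_unchecked(&[7, 8, 9]), Some(7));
    assert_eq!(first_unchecked(&[]), None);
}
```

The point of this pattern is that a reviewer (or a formal tool) only has to scrutinize the small `unsafe` block and its stated invariant, not the whole program.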

Speaker 2:

Adam, do you maybe wanna talk about what we've done there? Because you and I both experimented with journal clubs in previous lives, and I feel like we've got something that seems to be working decently well at Oxide that may be replicable for other companies.

Speaker 4:

Yeah. I mean, it's a nice, I think, light-ish weight process: when people find a paper of interest, they blast it out to folks, and it encourages people to read it. Once a critical mass, usually a quorum of three, I think, says they've read the paper and are interested in doing it, then we set a deadline. So a bunch of folks scurry around over the weekend to go read the paper and do their homework.

Speaker 4:

We record it because, you know, Bryan loves recording all of the quirky things, of course. And it's actually been great for new folks joining; there are some great topics in those artifacts. And one of the things is that, you know, when I've tried to institute this at other organizations, it just never really got momentum.

Speaker 4:

It never became something that we did as a matter of course. And some of that was because it was too heavyweight. People were reticent to propose papers, because they thought they had to present on the topic. And people didn't do the homework, because they thought that they'd be able to observe it passively. And so, to be explicit, we asked

Speaker 2:

yes. I mean, I feel like the two things we've done that have been valuable, that I have not done, and I think you and I have both separately not done in previous attempts at this, are: everyone's gotta read the paper in advance, and there's no fixed cadence with respect to time. It is purely lazy evaluation, when someone has a paper that they would like others to read with them. And together, so far, I feel like the nice thing about the system is that it can never be failing. There could just be an indefinitely long period.

Speaker 2:

There could be an arbitrarily long period of time. Right? No. It's actually I can't. No.

Speaker 2:

I feel like that can't fail.

Speaker 5:

You can't no. But this is the

Speaker 2:

problem I've had. It's like, you have this fixed monthly cadence, and the first couple are really good. And then you get this problem that you're describing, Adam, where you have these really good presentations of the paper, and people are like, why would anyone wanna read the paper? I'm not gonna read the paper; I'm gonna go listen to Adam present the paper. Why would I read the paper when Adam's gonna give me a deck that walks through it?

Speaker 2:

That's, like, a lot less work. Well, and then I think, you know, we

Speaker 4:

we are certainly in a privileged position, but I think other folks may find that authors of papers are often quite amenable to chiming in or joining in. And certainly no one gets offended if you send them an email and say, hey, my group at work is reading a paper of yours because it's relevant to our work, would you be interested in joining? Especially in, like, pandemic times, when we're all looking for excuses to get away from our toddlers for half an hour, which is what I'm doing right now. You know, I think a lot of the authors of these papers are really excited to see their work making it into potentially more pragmatic settings than they originally intended it for.

Speaker 4:

Well, yeah.

Speaker 2:

I mean, Laura, you had... Bootstrapping Trust in Modern Computers was great. Jonathan, the author, joined us, and I thought that was a great discussion. I really enjoyed that.

Speaker 5:

Yeah. That was another one where I sat on the proposal a bit because, in some respects, if I have to suffer through this, you all should have to suffer through it with me in terms of the time of

Speaker 2:

the next, which is actually possible. Yeah. I don't know. So, Laura, have you

Speaker 5:

term of public good. But, I mean, I think I agree with everything that's kind of been said; this has been really effective. Although I think what ends up being the hard part is that it's actually quite difficult to schedule something, and that really takes effort. So I think it's just a matter of everybody getting enough scheduling down to be able to make it happen.

Speaker 2:

Yeah. But I think it is correct. I mean, something we've tried to build, and something that's been important to me in every engineering organization that I've been a part of, is having that culture of looking to other work that's been done. Not because everything's been done before, but because you don't wanna have to relearn something that someone has already learned and talked about. And if you can leverage someone's wisdom, that's energy well spent.

Speaker 2:

You know, but to that

Speaker 4:

point, you were talking about open source earlier, and the question of: are we ever gonna make progress if we keep on rewriting things from scratch? But, you know, it's only "from scratch" if a de novo creation isn't learning from existing systems. And I know, Bryan, you and I have talked about this, like, the frustration of systems that failed to learn from the systems before them.

Speaker 4:

It's just really challenging. But you can start from learning what we did and learning the mistakes that we made, and then just make it better, rather than building without learning. And having that be part of the culture in your company or organization is really important.

Speaker 2:

I mean, like, how do you pull in those different DNA strands and different perspectives, and make sure that you are understanding something? Often, like, you know, a different domain did this. One of the things I actually really like about Rust is, like, you go to these RFCs, and so often they are looking at how every single language did something. You know?

Speaker 2:

And there's something to be said for that, really trying to learn as much as possible.

Speaker 4:

Our colleague, Steve Klabnik, you know, about half the time when I ask him a Rust question, he'll come back with an article on C++ or on Haskell or some other language. Where, you know, good artists copy, great artists steal: they've stolen from the good work done in those other locations.

Speaker 2:

Yeah. And I just think, you know, you are learning from the mistakes of others. Right? So, like, please don't... because I feel like when you see systems that repeat mistakes, engineers feel deprived of agency.

Speaker 2:

They feel like, my life is deprived of meaning. Like, I suffered for nothing. I mean, I do feel like, you know, engineering is this very complicated balance between seeing the world as it could be and accepting the world as it is. Right? You gotta hit that exact fulcrum.

Speaker 2:

And I feel that, like, as an engineer, especially as you get older as an engineer, it's too easy to no longer see the world as it could be, and you just get mired in the ways that the world is broken, and you become very pessimistic. And I think that's, like Well, the

Speaker 4:

to the germ of, you know, we don't exclusively look at academic papers. We've looked at videos, we've looked at books. I can imagine sort of any artifact that requires some offline time to review being fair game for that, because it's bringing in those different aspects of people's experience and background and culture and context. Mhmm. One of my favorites was getting the Caitie McCaffrey talk on sagas, which is work that I had been, like, you know, sniffing around for, like,

Speaker 2:

Yeah. That was great. And it was so great because we had Caitie join us, and it was so much fun to ask her all the follow-up questions. Nate, I saw you trying to get in there a second ago.

Bryan:

Yeah. What you're talking about, learning from previous systems, relates to all of these discussions, and is kind of part of what I was getting at asking about open source hardware. Are we learning the design-level lessons, not just implementation bugs? So it's encouraging to hear that. And I think that, like, as you said, the trick to keeping that spirit of, you know, accepting how things are and pushing forward to make them better, is something you have to choose consciously.

Bryan:

Because if you become unconscious of it, then you will fall to one side or the other, and the default is entropy.

Speaker 2:

I think you're right. No, I think you're right. And I think that, like, you do get that failure mode, mainly in upper management, where you just no longer see the world as it is, only as it could be, and you just start, like, living in the future. And we don't wanna be those people either.

Bryan:

Right. That's kind of a curious thing to me, because I'm not that by default. The people who are just always sunshine, I'm like, yeah, but... well, it's impressive to me, and we need those people to balance us out. But the upper management trap is the cynical road of: I feel like I personally now have too much at risk to, you know, imagine the world being that much better.

Speaker 2:

Yes. And that's awful, as long as you keep track of the way the world is. Because when those folks... I've occasionally worked with these, and I've had an executive who just could not tell the truth. Because, you know, if he said something, then we had already built that system.

Speaker 2:

And then he could build these castles in his imagination, like, so you need to tell the customer all these things we did. And I'd be like, you know, all that stuff is... those are all, like, lies. Like, we haven't actually done any of that. And he said, you know, Bryan, I studied Greek. And in Greek, there is what's called the optative voice, where you refer to something that is actually in the future, but you refer to it in the present tense.

Speaker 2:

And I'm like, did you just... is this a euphemism for... this is just lying. We don't have the optative voice. Right? We're just In English. In English.

Speaker 2:

Alright. But my recollection of that customer meeting is that it was not in ancient Greek; that was English. Maybe that was the wrong meeting.

Bryan:

What you need to ask is: so, how do you think our customer heard it? That's exactly right. Well, of course... Communication isn't what you meant to say. It's what got heard.

Speaker 2:

Yeah. That executive was fired.

Bryan:

He was later fired. Yeah. That line will always get back to

Speaker 2:

you eventually. Yeah. This is the CEO of mine that was fired on the day my daughter was born. And now, like, those two events are linked. She's the third kid, so it's like, you know Cosmically linked?

Bryan:

Yeah. Cosmically linked. And, fortunately, you

Speaker 2:

know, she's got a very thick skin, so I'm not really worried about her. But I am worried that when she turns 50, I'll be like, oh, man, it was 50 years ago today.

Speaker 2:

That's fine. But, you know, so at Oxide, we ask people when they apply: when have you been happiest and why? When have you been unhappiest and why?

Speaker 2:

And it is so interesting to read all those answers, but, man, all of the unhappiest-and-why answers are basically the same. Basically, like, we were trying to do the right thing and management prevented it. So it's all

Bryan:

the same. Oh, man. My mind immediately went to times like that, when I was younger and maybe more idealistic and willing to charge at the windmills. And I was like, no, I'm gonna stay and fix this, you know, from within, and stayed probably three years too long, and actually got some stuff done. But then, like, me and the director who I was constantly at odds with left in the same week, which was kind of anticlimactic. Yeah.

Bryan:

That is anticlimactic.

Speaker 2:

It's like, no. No. You're supposed to stay here and suffer. I leave. I escape.

Speaker 2:

You said Yeah. Right? No. I won.

Bryan:

It was one of those where, like, we never ever had a one-on-one or a meeting that was scheduled for 15 or 30 minutes that didn't go, and I mean literally, five or six hours. Like, calling the wives, like, you know, don't wait up, we're gonna hash this out. And then, like, you know, I'm telling him why what he's saying is wrong. And then a week later, I'm hearing my words coming out of his mouth at the staff meeting.

Bryan:

And I'm like, okay, it's cool. I don't want credit. I just want it. Right.

Speaker 2:

I guess I was right about that, then. I guess that protracted argument that we had for six hours, where I had to clear my evening calendar... I guess I was right. Okay. Yeah.

Bryan:

Yeah. No. So it was productive in that way, but it was stressful. It was one of those where you took the job, and two weeks in, you realized you'd been lied to. And, like, all the developers that interviewed me were gone by the time I got there.

Bryan:

And... oh, I just should've walked out the door, and I was too young, and I didn't know that I could do that. I was just like, oh, I can't just leave the job less than six months in. Right?

Speaker 4:

What's so interesting is interviewing people who've become really savvy about sniffing out some of those conditions. And, like, as a startup founder, one of the best questions that I got asked was: how do you and your cofounder resolve conflicts? And I found that... Bryan, I don't know if folks have asked you that question, but I found it to be very incisive, and reflective, almost always, of folks who had discovered that that's an important attribute of executives and founders.

Speaker 2:

Yeah. And, like, I don't wanna hear that you don't have conflicts. I wanna hear, like, tell me how you resolve them. And, yeah, it's a very savvy question, because it is very, very core.

Speaker 2:

And it's frustrating, because again, we are asking this question of so many folks and seeing all these answers, and it's frustrating how much they all have in common. And to me, it's like a lack of trust that people

Speaker 4:

aspect of trust, in that folks aren't able to walk away. Right? They've got this commitment both to the work, and they've got this commitment to their colleagues and the artifacts and success of the company. And even in these powerful, kind of unresolvable conflicts, it's really hard to peel themselves away.

Speaker 2:

Which I admire at some level. I mean, I've definitely been a dead-ender a couple of times. I'll go down with the ship. Oh, yeah. I mean, it's like Yeah.

Speaker 2:

I mean, you people. Yeah. Exactly. I said, oof. That's actually weird to hear.

Speaker 4:

I just wanna make sure

Speaker 6:

you're gonna have to share

Speaker 2:

the ship with me. I'm like, yeah. No. I I mean, I Yeah. No.

Speaker 2:

It's true. You know, I talk about The Soul of a New Machine a lot. My favorite line in The Soul of a New Machine is Tom West, who is, like, a mixed bag as an engineer and as a manager, saying that he really wanted to trust his engineers, but that trust is risk. I just love that line, that trust is risk, because that's part of the reason that some of these companies have a hard time trusting their technologists.

Speaker 2:

They just don't wanna take the risk. And I mean, I'd be interested to know if that factored into your situation, where, ultimately, like, you're having five hours of combat because you're not being trusted at some level. Right? That sounds brutal.

Bryan:

Well, it does. But, I mean, this is kinda touching on one of the fundamental things here, that we as technologists often don't have the vocabulary for, and so it goes unseen: the psychology and, like, humanity of the people you're talking to. Like, a lot of times we project that everybody else internally is like we are internally, and it's not true. And I think there's a huge unserved market for, like, being a psychological therapist for developers for these things.

Bryan:

Like, one of the last talks that I gave before quarantine was on salary negotiation. And it was at a one-day conference, like an unconference, where you just threw your abstract up there and they assigned the rooms by size based on who signs up for what. And they opened up a double room for me, and I was terrified, because I had written the talk that morning at 2 AM. And I was just, like, holy shit, people really wanna see this.

Bryan:

And it was packed. And I stood there for half an hour afterwards answering questions, because people are so not versed in how to deal with conflict. And there's nothing scarier than salary negotiation with people. People don't realize the power that they have in that situation, but it goes back to the psychology of who you're dealing with. People don't even know how to put themselves in the shoes of the person who's hiring them enough to realize that, like, they need you.

Bryan:

That's why you're here. You've made it all the way through the interview to this point.

Speaker 6:

You've got

Bryan:

leverage. Now is the time to use it, and people are afraid to. It's like, you're here because you're valued, and people are constantly afraid to lean on that or test it.

Speaker 2:

Yeah. That's interesting. I gotta ask, have you seen our compensation approach at Oxide? You know, I have not. Okay.

Speaker 2:

Oh, here, I'll drop the link to the blog entry. So we pay everyone the same at Oxide, and we've got a long blog entry that explains why. But I think it's been great. I mean, I'll let the other folks at Oxide speak for themselves. It's something we were not gonna actually talk about, but we found that people who were applying didn't necessarily know how serious we were about our principles and values, and knowing about the compensation was a big part of that. So, anyway, I'll let you find it. That's interesting.

Speaker 2:

I will

Bryan:

I will definitely be reading it. Yeah. It's worth reading. I mean, I think it's like it

Speaker 2:

it definitely takes the need for negotiation out, because it replaces it with total transparency. I had some folks who've been like, no, no,

Bryan:

I I know that's I

Speaker 6:

read the blog entry.

Speaker 2:

I get it, they're all the same. But, you know, I mean, we can talk, right? We can negotiate. I'm like, no.

Speaker 2:

No. No. No. It was the same. What do you think?

Speaker 2:

It's been interesting. Honestly, I think it's been more productive, but I'm always surprised.

Bryan:

That would be an interesting thing to update my talk with, because I incorporated, like, a survey of the Netflix model and the big five model and, you know, all those things. So Yeah. Well, don't worry. And, I mean, in

Speaker 2:

your defense, we only talked about this, like, what, a month ago or something like that. So it's been pretty recent. There are a lot

Bryan:

of, you know, smaller shops that are open book and, you know, very transparent, and that's got its upsides and downsides too. So I'll be interested to see how

Speaker 5:

it goes

Bryan:

and and to read back.

Speaker 5:

I actually saw the,

Speaker 2:

blog post in my RSS feed, and,

Speaker 5:

then it took me two weeks to understand it, by

Speaker 4:

the way. Like, it

Speaker 2:

was not really easy to understand at first. I was like, wait, what am I reading right now?

Speaker 5:

So and and then I sent it to my cofounder, and

Speaker 2:

it also took him two weeks to understand. And the next day, he came to me and said, we have to do this. Like, for a long time, we always argued about how we wanna

Speaker 5:

pay people. And then after we read this,

Speaker 2:

we said, okay, this is actually a thing that makes a lot of sense to do

Speaker 5:

when everyone is just really equal. Then you understand that, there is less negotiation and more work, basically.

Speaker 2:

Yeah. I mean, certainly, I feel that it has fostered our sense of teamwork. I've had people phrase it to me different ways. One person phrased it to me like, you know what?

Speaker 2:

I realized that there was a part of my brain that was always thinking about this, and that part of my brain can now just focus on the problem. So it's been, I mean, I think it's been, I don't know. I have to be right outside your computer. Chime again.

Speaker 4:

To that point, Bryan: I think sometimes it's not about what you're getting paid, it's what the other person's getting paid that... Right.

Speaker 2:

Sticks in people's head. I know. And, like, how shitty is that?

Speaker 4:

Not wanting to feel like a sucker. Right. Not wanting to feel taken advantage of, and having that transparency about it. I think in particular, if everyone gets the same salary... I mean, I think, you know, it's gonna be hard. I'm very interested to see how long we can hold on to that at Oxide, you know, through hiring people with radically different titles, hiring salespeople in particular. I don't know.

Speaker 4:

I mean, there's

Speaker 2:

Oh, it's a social experiment for sure. I mean... Right.

Speaker 4:

It's gonna be tricky, but I think certainly when there's so much sort of just existence risk on the table, it takes this whole category of bullshit out of everyone's head. It says, let's put our heads down and make this company worth something and make our time worthwhile. And let's set aside the, you know, who has seven and a half years' experience versus ten versus three or whatever. Just say everyone's gonna bring something, everyone's gonna contribute, and, like, it'll work itself out.

Bryan:

I just had that conversation with a startup founder the other day. He was asking, like, to what degree do I need to go, you know, making shades-of-gray levels for my developers. I was like, well, you're not Microsoft. You're not Amazon. Don't worry about that yet.

Bryan:

And I didn't have a solid answer for him. There's no one-size-fits-all. You know, a lot of it has to do with your mission and how people relate to that and how much of the reason that they're there is for that. You know? And he admitted there's always gonna be a mix: some people who are there, you know, that want steady employment, and they're gonna keep plugging and going and going.

Bryan:

And there's people who are, you know, passionate, and go on burnout cycles. And trying to fit all of this together is kind of a patchwork quilt. So myself, I still don't feel like I have the answer to all of that. That's a big question. So I'm really interested to see yours.

Speaker 2:

Yeah. So Nate, again, I kinda figured I wouldn't do, like, show notes for this or whatever, but it's Compensation as a Reflection of Values; that's the blog entry that describes our approach and why we did it that way. And Sid, I saw you were unmuting yourself there.

Speaker 6:

Yeah. I just wanna say, I don't know if everybody knew this, but Steve Jobs famously tried this at NeXT in his early days. And it was a failure, but it was slightly different: he wanted total transparency, but pay was not equal for everybody. So it turned out to be a disaster. Interested to know how your experiment works out. Yeah.

Speaker 6:

That's what

Speaker 2:

I did not know that about NeXT. Yeah.

Speaker 6:

I think I read that in his biography, the official one by Isaacson.

Speaker 2:

Yeah, that's interesting. I mean, certainly, his character is... well, that's interesting. I'd be curious to know how it would work out. Presumably, he saw some of the same things we saw, in terms of people wasting mental energy on this.

Speaker 2:

This is ultimately the way. But I do feel that, like, it's very hard to say, alright, people should stop wasting mental energy on this, but, by the way, there's gonna be this huge disparity. You're like, well, alright. You're just done.

Speaker 2:

Okay. Hold on. Wait a minute. What?

Speaker 6:

Right. Yeah. I think that's, like, the worst of both worlds, where you're kind of like Google, where you're saying explicitly that you cannot value each person, you know, the same, but, you know, we're this radical company where everybody knows what everybody else is making. That's just a recipe for disaster, frankly.

Speaker 6:

He believed in it for a very long time, and then, you know, he got kicked around a lot because of it and lost a lot of productivity, and sort of, you know, gave it up.

Speaker 2:

That's interesting. Well, certainly, I was talking to a fellow CTO, and she was telling me about the companies she had advised that transitioned from non-transparent salaries to transparent salaries. And she's like, yeah, that's, like, six months of guaranteed shit show when you go through it. So, like, that's what you really wanna avoid.

Speaker 2:

So we've had the advantage of starting out the

Bryan:

right way. The only rage quitting with... That's right.

Speaker 2:

That's right.

Speaker 3:

Well, hey. So we

Speaker 2:

I think we wanna keep this to about an hour. I would love to, and hopefully Twitter Spaces will improve to the point where it won't boot me off after 20 minutes. But thank you, everyone. Thank you, Laura, for offering your perspective on the LPC55 vulnerability. Rick, I saw that you joined; Laura was singing your high praises for your role in the LPC55 vulnerability.

Speaker 2:

Adam, thank you as always. And Nate, for joining us up here. And, yeah, let me know how you go.

Bryan:

How you like it, but

Speaker 2:

I think we're gonna do it kind of, like, the same time next week, Adam. Are you still in?

Speaker 4:

Yep. Same time next week. And if folks, if people are happy with the random walk, then we'll keep on randomly walking. But if people have topics, like, post them to the Oxide and Friends Twitter, and we'll see if we get to them.

Speaker 2:

Yeah. And if people would love to speak, let me know. The app is not great at showing me everybody, so maybe we'll try to do better this week. But it's definitely fun to talk to everyone and get everyone's perspective. So thanks,