[00:03:21] Ryan Kidd: Hi, my name's Ryan Kidd. I'm co-executive director at MATS, which is this awesome AI safety field-building program for researchers, engineers, and founders. Previously I was a physicist, very briefly, and along the way I've helped found, spin up, and advise a host of other AI safety startups, such as the London Initiative for Safe AI. [00:03:45] Jacob Haimes: Awesome. And so to kick us off, in one sentence: what's the low-hanging fruit of AI safety field building, and why aren't more people doing it? [00:03:56] Ryan Kidd: Yeah. I think the low-hanging fruit is basically targeting an archetype that I call amplifiers. We talked about this some 18 months ago in a blog post about the talent needs of technical AI safety teams, which we determined by surveying the 31 leaders or hiring managers at all the major AI safety labs at the time. [00:04:17] So keep in mind this is mostly technical AI safety needs. But I think this amplifier role actually transcends just technical things. Amplifiers are people managers, but they also have a technical bench. You could see them as analogous to technical program managers, or TPMs. [00:04:35] But they also take on a variety of roles across the ecosystem. [00:04:39] Jacob Haimes: So you mentioned you were a physicist — you have a PhD in physics — and then, from what I could tell, right after that you become co-executive director of MATS. How did that happen? How did that come about, and what went into the decision that got you there? [00:04:58] Ryan Kidd: Yeah. So I did a physics PhD in Australia, but I was also running a local EA group at my university for three, four years. Before that I was very involved in the Scouts, where there were various leadership opportunities. So I guess I wasn't a stranger to project or people management, [00:05:14] in a non-professional setting. But yeah, MATS was my first time doing people management in a professional setting, though we did start off small, like most startups. I was a participant in the first generation of MATS — SERI MATS — which ended up being five different individuals in the pilot program, all selected from the first EA Cambridge AI Safety Fundamentals course, what's now BlueDot Impact. [00:05:38] We did these two-month research projects after a six-week, part-time online reading group. It was pretty informal. During that period I came to Berkeley and met with the organizers, and I gave them a bunch of recommendations based on how I had run that. They invited me to help run the next one. [00:05:57] And they ended up leaving. Victor Warlock stayed on to help run one program. Oliver Zhang, the other co-founder, left to found CAIS with Dan Hendrycks, where he's still COO today, and Christian Smith, my co-director, joined shortly thereafter, and we've been running it since. And I guess, I mean, I think I'm fairly personable, [00:06:19] somewhat extroverted, and I definitely enjoy working with people. The kind of research I did in my PhD, and then a little bit after in terms of the AI safety stuff, was not as people-facing as I wanted.
And I do associate somewhat with this amplifier role. I can talk more later about what I think makes a great amplifier, if you're interested. [00:06:38] But I think it's just the standard startup story, right? It just so happened that I had more of a research background and my first startup worked — I think it actually works really well, which does happen. [00:06:52] Jacob Haimes: Yeah. I think the way that you introduced that also brings into focus that you've really been with this from the start of when it became what it is today. Yes, it started before that, but this started right at the beginning, or right before, ChatGPT launched and really changed things. [00:07:23] You were talking about BlueDot Impact and how you did that first, informal course — it wasn't even BlueDot Impact at the time, it was just a reading group. And it was really the beginning of changing what I would say is a very niche focus area into something that is a lot broader and more known to people. [00:07:51] So what did navigating that transition look like? You've been in a position of relative authority throughout that. How has that looked, and what are your takes on that [00:08:14] Ryan Kidd: Hmm. [00:08:14] Jacob Haimes: period of time? [00:08:16] Ryan Kidd: Yeah, so I'll say it didn't feel like I was in a position of relative authority at the time. Looking back, I probably was. But I remember, back in the summer of 2022 — the summer of '22, the Summer of Love, as they called it — everyone was getting their projects funded; there was capital everywhere. [00:08:33] This was back when — so MATS never took any FTX funding, but there was a lot of FTX money floating around from their Future Fund. And at that time everyone was getting their project funded, and I think MATS is one of the few projects from that period that survived [00:08:53] and is still flourishing today, because there were a lot of field-building projects launched around that time. If I'm to think about why we survived: I think with MATS — and this is particularly my strong will here, and Christian's too — we really want to do things [00:10:49] the right way, and we aren't super keen on violating a bunch of social mores. Like, we think having an HR department is a good thing. There are some field-building projects that go, "oh man, we don't want to put too many restrictions in the way," but no, we think having a community health policy is very important. [00:11:11] So MATS has avoided a bunch of potential problems that way, by having a great community health policy and making sure our fellows and mentors feel safe and supported — which is a challenge, but that's been a huge focus of our program. And I think the second, probably more important, thing from a survival standpoint is that from the very beginning I tried very hard to
[00:11:35] elicit — to understand the underlying mechanisms by which Open Philanthropy, now Coefficient Giving, was making funding decisions, and replicate them ourselves; in fact, to build a better prioritization apparatus for the goals they want. Sort of like imitative generalization, you know. And we did that by assembling a mentor selection committee, which we still use to this day, that comprises the top experts that Coefficient Giving did and would consult to make funding decisions. [00:12:02] So we just explicitly based our mentor portfolio on those top expert advisors. And then of course Coefficient Giving gave us more and more — they really liked our system, so they gave us more funding over time, and I think that ended up really well for us. [00:12:23] Jacob Haimes: Gotcha. Okay. So I guess from my perspective — and I don't know if you've listened to any of the podcast before, but I take, I guess, a more skeptical view in general — the red flag, or orange flag maybe, that I hear in that is: we base who we're choosing as mentors entirely on how we're getting funding. [00:12:56] And while that is — [00:12:58] Ryan Kidd: Not at all. [00:12:59] Jacob Haimes: — good for getting funding... Okay, so then can you give me a little bit of context there? Maybe how I misinterpreted that? [00:13:09] Ryan Kidd: Yeah. It's a fairly interesting point, though. So in the early days of MATS, I do think it's true to say that with Coefficient Giving — our grant managers there — there was a lot more discussion and back-and-forth about the kinds of research they wanted to support, because this was before the money really started flowing, before Open Phil and Coefficient Giving — before everyone, really — woke up to the nearness of AGI. [00:13:36] So in those early days it was more skeptical, and we were sort of an unknown quantity, though I think having a PhD myself helped a lot. And then I had strong connections with a bunch of the grantmakers and org leaders, which I built by moving to Berkeley, going to all the retreats, and really trying to establish those strong personal connections. But actually, the way that we, for at least the last two and a half years, have been making our mentor selection decisions has been almost as if we were a regranter, in a sense. Coefficient Giving trusts our process, which is not something we consult them on. [00:14:17] I actually think other funders beyond Coefficient Giving have been more selective about the kind of research they want us to support, [00:14:25] which is partly because they see that as somewhat their entire existence and purpose — things like Longview Philanthropy, perhaps the Navigation Fund, and a few others. They're trying to fill in the gaps that Coefficient Giving has left, and they find it really hard to find gaps. [00:14:44] It is not at all easy to find things worth funding that Coefficient Giving hasn't funded — that's a strong claim I'll make. [00:16:56] Jacob Haimes: I think that's interesting, though,
because I don't get to hear that perspective as often, since I'm not in [00:17:06] the Berkeley bubble, so to speak. I live in Colorado, and the organizations I work for are fully remote and global. And I do think there is something to be said for just being there, and the value that has. I don't like it personally, for a number of reasons, but I do think there is something there. [00:17:31] Ryan Kidd: Oh, there's huge value in being there. I mean, this is bigger than AI safety, right? If you want to work in acting, you go to LA; if you don't go to LA, you're out of luck. If you want to work in finance, you go to New York. These kinds of hubs pop up. [00:17:45] So I guess that's the Bay, that's London, that's DC, maybe to a lesser extent. There are some other hubs out there. [00:17:51] But yeah, it is interesting. I do think there are a lot of things that are not getting funded that should get funded. I think part of the problem here is not necessarily a lack of money; part of it is a discovery problem. It's the same as other VC-ecosystem stuff, right? [00:18:08] The various teams at Coefficient Giving have very different mandates for the things they want to fund, and it's the same with VCs. Sometimes it's hard to actually work out how to get into that advisor network. And I think an easy way for researchers is just: publish an amazing paper with great citations, get a great reference from a top researcher. [00:18:29] Easy. When it comes to founders of AI safety projects, it is not at all clear. [00:18:36] Jacob Haimes: Hmm. [00:18:37] Ryan Kidd: I mean, it's easy to work out ex ante how you might become a top researcher candidate — there's a standard path that academia has laid out. [00:18:46] Jacob Haimes: Yeah, there's — [00:18:47] Ryan Kidd: In the case of founders, I think it's not clear. [00:18:51] Jacob Haimes: No, I — [00:18:52] Ryan Kidd: There's no clear path, to be clear. [00:18:55] Jacob Haimes: Yes. What would you recommend, then? What do you see as a path for founders, or how could we make one? What is the missing piece there? Or what is a path that does exist that maybe isn't as obvious, but already has some of that scaffolding and infrastructure? [00:19:21] Ryan Kidd: That is a really good question. It is something I've been trying to solve — this problem of getting founders the visibility and the accreditation they need to get funding from nonprofit donors, and confidence and buy-in — with Catalyze Impact. I've been advising Catalyze Impact for years; [00:19:39] that's Alexandra Bos. That is an incubator for AI safety nonprofits. We now also have Seldon Lab; they've run their first cohort as well. Both of these have had fantastic projects. Constellation is spinning up an incubator. Y Combinator already exists — a couple of MATS alums founded startups like Theorem and Atla AI through that. [00:20:00] There's Entrepreneur First's defense acceleration program.
So basically we just need more startup-accelerator-type programs, now for nonprofits, right? Typically — if Catalyze Impact is a nonprofit, it's not going to take a cut of your funding, because that's not its business model. [00:20:17] Especially if you're founding a nonprofit like Apollo Research or Timaeus or something. What you really need is references. And that's why a lot of MATS alums, I think, have been successful — other than the fact that they're overdetermined to be great researchers and generally effective — they get the references from the top advisors. [00:20:34] And this is really a research kind of ecosystem, so the top researchers are also often the top advisors in a strategic sense. They get the credible outputs, and then maybe they get some initial seed funding. And the way a lot of these big nonprofit funders work is they'll give you, I don't know, three to six months of seed funding to get some MVP going. [00:20:56] Jacob Haimes: Gotcha. And you mentioned this kind of earlier — you know, if you're an actor, you go to LA. Do you think we're in a similar regime here, where if you're really serious about AI safety, you go to the Bay Area? And if that is the case, what do you think the consequences or repercussions of that are? [00:21:31] Ryan Kidd: So I do think we're in a similar situation. If you work out of Constellation, or LISA in London, or FAR Labs, also in Berkeley — and now there are some other things popping up, like Mox in San Francisco — if you work out of these office spaces, you have vastly greater exposure to a bunch of illegible knowledge that doesn't get published anywhere, because, I don't know, maybe there'd be PR considerations. [00:21:55] People can't say their honest beliefs about some of the AI safety things because that would look bad. Or people form personal relationships — we're humans, you know. This kind of conversation matters in a way that informal, impersonal writing on a blog doesn't. [00:22:11] So I think, yeah, if you're in the US and you want to get a job in technical AI safety, move to the Bay; it's just a no-brainer, I think. Obviously it's going to be very expensive, so not everyone can even afford to do that, and that's tough. But there are grant transition funding opportunities — there's career transition funding from Coefficient Giving, the LTFF, and so on. [00:22:31] Collision or Kader, yeah. So these things take a while to accrue. And there are always going to be specializations in certain cities, but I do think you can really get a leg up if you move there. But like I said, the Summer of Love, '22 — everyone moved to the Bay. [00:22:47] Everyone came to the Bay for the summer. Very few people got jobs. It was just, like, two years too soon or something. [00:22:57] Jacob Haimes: Gotcha. So from my perspective, as someone who, for various reasons, either can't or doesn't want to move to the Bay Area, but who is still dedicated and wants to be contributing in this space —
And so, for people who are like that, is it just sort of "tough luck"? [00:23:21] Is there a solution there? What do you think about that aspect? [00:23:29] Ryan Kidd: Well, I think for people who can get into programs like MATS or Astra — or, if you're going to London, Pivotal or LASR Labs — there are great opportunities. You can go there, stay for three months, and then go back home, and you've acquired a large part of the benefit of exposure to those ecosystems just from visiting and meeting all the right people. [00:23:49] Even your cohort is probably as valuable as the other things, right? You gain a lot from that experience, and then you can bring that home and start your own local chapter or hub or whatever. Separately, I think there are remote programs that are really good. [00:24:05] We have things like SPAR, the Supervised Program for Alignment Research, which has been really killing it, and of course BlueDot Impact, this fantastic decentralized AI safety course ecosystem. So these provide a large benefit. And I try, to the extent I can, to go into these communities and offer advice and so on. But it is tough to gain the same benefits, because it's not the same: if I try my best to attend SPAR career fairs or something, at best I can meet with [00:24:39] all the people in that, right? So that's 1 × 300. Let's say I have 300 meetings — I can't even do that, but let's say I can. That is nowhere near as many connections as if those 300 people came to the Bay and met a thousand other people — all those possible connections that could form. So you just can't really replicate that in the same way, unless you have, I guess, big online round tables between, say, 300 participants or something. [00:25:01] But then of course those are all new participants, and all the seasoned researchers would also have to take part to make it really worth their while. Which is why it's just hard to replicate: you have to get buy-in from the people whose time is super valuable. So you're always going to have gated-access communities — some physical, some reputation-based online. [00:25:19] Yeah. [00:25:21] Jacob Haimes: And related to the idea of getting buy-in — you've made a post about this before, you've talked about this, and you even mentioned it earlier: you've recruited mentors, and the mentor recruitment process you have is incredibly intentionally thought out. That's great. How did you get there? How would you start that, from a third-party perspective? How would you get that buy-in — mainly for mentors is what I'm talking about, but even for other things. How do you bootstrap that the way that you have? [00:26:06] Ryan Kidd: Yeah. So for the first MATS program — I was a participant in MATS 1 and in charge of mentor selection for MATS 2. I picked people like — so Evan Hubinger was the original MATS mentor, and I'd read tons of his work, and Paul Christiano's, in my first MATS tenure, [00:26:23] and that massively influenced me.
And mostly I selected people whose writing I'd read and who really seemed to impact me in that first cohort, or who were hanging around the Lightcone Offices at the time, which is where I was hanging out in early '22 — [00:26:40] Alex Gray as an example there. And I thought about this after the program. I was like, man, this seems like a very arbitrary way to determine the future of AI safety, particularly as we had 30 participants. I was like, wow, okay, this is scaling faster than I thought. [00:26:58] We got funding for 60, and I was like, wow, okay, we really have to think hard about how we're going to do this, because there could be some serious repercussions from steering the field in the wrong direction. There was a memo that went around called "against current mass movement building," and it said: one, low-quality talent is bad because it dilutes the community, and maybe we're too early, too young in AI safety to bring in a bunch of people who don't know what they're talking about, [00:27:23] because they'll publish bad ideas and the public will take that up, et cetera. I have disagreements with that. But the second thing was: [00:27:42] basically, it matters — there are massive founder effects in how you start off, in what research directions you emphasize. So I thought, okay, what's the best mechanism I know of to help steer this field? Well, clearly, wisdom of the crowds is something I generally believe in. What's even better than wisdom of the crowds? Maybe wisdom of the crowds that have thought about it a really long time. [00:28:01] And there are situations — you'll see this in prediction markets — where sometimes expert opinion beats public opinion, and sometimes the converse is true. So I guess what I was trying to imagine was: who are the AI safety equivalents of top expert forecasters? [00:28:24] So we made a list and we consulted those people. We said, hey, who are the top researchers we should work with? Then we reached out to all those researchers and more, and then asked those advisors: okay, what do you think about these people's applications — their research experience, the actual projects they're particularly interested in? [00:28:43] And as time has gone on — we've now got 275 mentor applications for summer. That's so many; we had to reject like 80%. How do you do that? We got 30 advisors to review them. We accepted the top 10% unconditionally and knocked off the bottom 40% unconditionally, give or take. [00:29:01] And then for everyone in the middle — which is where about half of the mentors we accepted came from — we made decisions according to our own data. For returning mentors: did they get great ratings or bad ratings from the people they supervised and the MATS staff who worked with them? So they were either in or out based on that. [00:29:18] And for new mentors: okay, we're trying to have a diverse enough portfolio to cover a bunch of different stuff, so we wrote down all the agendas we thought were topical and interesting and merited some kind of boost, [00:29:34] because a lot of MATS history has been about boosting marginal agendas on the margins. Even if that's not most of our portfolio, combined — maybe you want to have 20 different moonshot bets, and if you add those up, that's actually a lot of your portfolio. [00:29:49] So we found those bets and added them in, and then made the rest of the picks based on merit.
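The tiered triage Ryan describes — automatic accepts at the top, automatic rejects at the bottom, and a judgment-call middle band — can be sketched in a few lines of Python. Everything below (the data shape, the score aggregation, the thresholds written as code) is an illustrative assumption for readers, not a description of the actual MATS tooling:

```python
# Hypothetical sketch of a tiered mentor triage: advisor ratings in, automatic
# accept/reject bands out, with a middle band left for case-by-case review.
# Data shapes and thresholds-as-code are illustrative assumptions.
from statistics import mean

def triage_mentors(applications):
    """applications: list of dicts like {"name": str, "advisor_scores": [float, ...]}."""
    ranked = sorted(applications,
                    key=lambda a: mean(a["advisor_scores"]),
                    reverse=True)
    n = len(ranked)
    top_cut = max(1, round(0.10 * n))       # top ~10%: accept unconditionally
    bottom_cut = max(1, round(0.40 * n))    # bottom ~40%: reject unconditionally
    accepted = ranked[:top_cut]
    rejected = ranked[n - bottom_cut:]
    middle = ranked[top_cut:n - bottom_cut]  # decided on ratings, portfolio coverage, etc.
    return accepted, middle, rejected

# Toy data roughly matching the numbers mentioned in the conversation (275 applicants).
apps = [{"name": f"mentor_{i}", "advisor_scores": [7 - i * 0.02, 6.5 - i * 0.02]}
        for i in range(275)]
accepted, middle, rejected = triage_mentors(apps)
print(len(accepted), len(middle), len(rejected))  # 28 137 110
```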
[00:29:57] Jacob Haimes: Gotcha. Okay. I think that's helpful for me as well, just to orient myself. I mean, this is not a trivial effort — a lot of work went into mentor selection, back then at the beginning and now. And I think that's an important part to make clear, because it's not necessarily trivial to just say, "oh, I know some people who have good credentials on paper, [00:30:31] I'll start a fellowship and it'll work perfectly from the start." [00:30:36] So when thinking about how you choose these mentors, how you choose the projects, and even, on a more meta level, the strategy of MATS as a whole — how do you go about doing that? How do you incorporate people from your team, like you mentioned earlier, and also make sure it's in the direction that you want to go as an organization? [00:31:05] How do you define that? [00:31:08] Ryan Kidd: Yeah. Well, some of that is human resources, and other parts are information flows, right? So until recently, Daniel Filan, a senior research manager at MATS, was in charge of mentor selection. He and I would meet weekly and talk about it. And Daniel, if you don't know him, is the host of the AXRP podcast. [00:31:26] He likes to say he was the first person to start a PhD in AI safety, at the Center for Human-Compatible AI at UC Berkeley. So he's been in the space for a long time, he has a lot of knowledge, he's very broad, and he's also somewhat contrarian. We shared a lot of intuitions about how to design a good selection process. [00:31:46] So I built the first version; he took that over and made it much better. He ran it for the last year and a half, I believe. And now he's left — he has a new job, which is very sad, but it was wonderful working with him — and Juan Gil on our team is going to be in charge of this process. [00:32:03] He's our new program lead. And I guess the way I built the process is through this concept of a chain of trust, right? Maybe I can't directly assess something — maybe its impact is really hard to assess — but I believe that this group of experts is probably the best governing body to assess it. [00:32:26] Well, then I put my faith in them; they're the next link in the chain. Okay, maybe that group of experts, these mentor selection committee members, maybe they don't actually have the wherewithal to pick the scholars that enter MATS.
So they pick the top mentors, with some input from MATS. [00:32:42] And then the mentors pick the top scholars — or fellows, whatever — with some input from MATS. So at every stage we've deferred to the local expert. That's the main process behind our selection process. Now, of course, you want to have checks. [00:33:01] You want a feedback loop: you want to look at the impact of your actions on the world, maybe do some type of A/B testing. Luckily, we're a big enough program that we can see the A/Bs across different kinds of selection processes. We've never had a homogeneous selection process across MATS; it's always been mentor-based, right? Different mentors have different needs, and that's been the core of MATS. [00:33:19] But we can also assess the impact of the research in the world: the orgs founded, the placements, the uptake of that research into policy, or in shaping agendas or hiring practices or whatever. [00:33:38] So we've been studying that whole feedback loop for years, and I have some takes, but maybe not as strong as I'd like, because it's still hard to see the counterfactuals in many ways. And of course, internally at MATS, for strategic communication: we have open access to our project docs, [00:33:58] all team members can comment and review, and we have biweekly team meetings where we share key strategic updates. I'll give the strategic updates about MATS and the AI safety field, and the other team leads will share the main things they're contributing to. And we try to publish as many internal strategic memos as possible. [00:34:15] So I write tons of memos, and I give lightning talks every week during the program, just trying to communicate my ideas. Cunningham's Law, you know — which I think of as: if you're wrong, put it on the internet and someone will tell you. So I tend to try to do that too, with some LessWrong posts and tweets and the like. [00:34:34] But also much higher-bandwidth communication shared internally at MATS via memos and team updates. [00:34:41] Jacob Haimes: Gotcha. And then just one last thing on MATS itself: it sounds like MATS has a couple of targets — placements was one of them, uptake of the research. I guess, explicitly, what are those, and how did you get to them? Do you have a core one that you care about more? [00:35:06] Yeah, what are your thoughts on, I guess, targets of the organization? [00:35:14] Ryan Kidd: Yeah. I mean, I would say the actual North Star of what we're trying to do is reduce catastrophic risks from AI — make AI go well, as we like to say. This is super hard to evaluate on the day-to-day, so the actual way we do this is broken down via this chain of trust, with these kinds of local goals. [00:35:33] So, our mentor selection, which is the first link: that is leveraging the mentor selection committee and our own team members' knowledge and past ratings of mentors and so on, to pick the people who will have the best chance of reducing [00:35:49] catastrophic risk from AI, okay,
as researchers and mentors. And then the next step is the scholar or fellow selection, in which case we leverage the mentors' expertise and also a bunch of proxies we've built over the years — things like coding tests, like CodeSignal, for some mentors, and ideally specialized selection problems and processes for them. [00:36:13] As an extreme example, Neel Nanda has this ten-hour mech interp work-test thing that he gets people to do, which is a lot of effort but is extremely good at picking out very, very good people, I think. And we kind of chaperone that process and try to reduce the load for mentors as much as possible. [00:36:37] So that is the main thing we're actually optimizing for, in terms of what our decisions are based around. In terms of checks, there are useful proxies we can get. So, placements at organizations — okay, what are the best organizations? What matters? Well, ask our mentor selection committee; they have their takes about the best research agendas and so on. [00:37:00] Not perfect, but there's something there. We can also, of course, look at the size of organizations. You can say, okay, maybe Coefficient Giving has a somewhat independent, somewhat good credit assignment process, which they use to determine funding. [00:37:19] VCs, maybe not — it's hard to say that the market cap of AI companies is a good proxy for how well they're helping the world. I don't know; I'm not that capitalist. So there are some things you can do. The organizations founded — you can apply the same metrics there. In terms of research, [00:37:40] you can look at trivial stuff like citations, which is a decent measure of academic impact and is accepted externally. But you can also do something like — and we haven't actually built a proper process for this — you can get your committee of advisors to assign Elo ratings to papers via pairwise comparisons. [00:38:03] You can imagine a research Elo ranking system, or various ways people could talk about their favorite papers. We have gotten people to name their favorite papers in some cases, and we've built a selection of big wins out of that. But we haven't done this thoroughly yet. [00:38:19] Jacob Haimes: Nice. Oh, that's really interesting. I feel like that could be its own research paper, right? [00:38:26] Ryan Kidd: Yeah. [00:38:26] Jacob Haimes: Yeah. Could be. Cool.
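The pairwise-comparison idea Ryan gestures at maps naturally onto an Elo-style rating. The sketch below is a minimal illustration under assumed parameters (made-up paper names, a K-factor of 32, invented judgements); it is not a description of any system MATS actually runs:

```python
# Minimal Elo-from-pairwise-comparisons sketch. Advisors compare papers
# head-to-head; ratings update after each judgement. All inputs are hypothetical.
def expected(r_a, r_b):
    """Expected score of A against B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, a_won, k=32):
    """Return new (rating_a, rating_b) after one pairwise comparison."""
    e_a = expected(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

ratings = {"paper_A": 1500, "paper_B": 1500, "paper_C": 1500}
# Each tuple: (winner, loser) as judged by an advisor in a pairwise comparison.
judgements = [("paper_A", "paper_B"), ("paper_A", "paper_C"), ("paper_B", "paper_C")]
for winner, loser in judgements:
    ratings[winner], ratings[loser] = update(ratings[winner], ratings[loser], a_won=True)

print(sorted(ratings.items(), key=lambda kv: -kv[1]))  # papers ranked by rating
```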
Awesome. So I guess now I want to shift gears a little bit to talk more about the AI safety talent pipeline as a whole, as opposed to MATS specifically, [00:38:42] and your perspectives on that — and, when it becomes relevant, how MATS fits into it. A key thing that I think a lot of people have said, and I know you've acknowledged even as part of MATS strategy, is to cast a very wide net when it comes to trying to recruit people — [00:39:06] making that net as wide as possible so you capture as many people, and then you provide support to the strongest ones. And the concern I have is that this can neglect the, like, externality of the wide net, in that there are a lot of individuals — let's say someone like me, who isn't going to move to the Bay Area, at least in the immediate future, [00:39:34] and who then can't necessarily get the support, but who has been brought into this, who has been convinced that this is a problem I should be supporting and working on, [00:39:56] Ryan Kidd: Hmm. [00:39:56] Jacob Haimes: but I can't necessarily do so in the way that I feel like I can and should be able to. [00:40:05] So I guess I'm curious about your thoughts on that, and whether I'm missing something there, or how you think about that and how it plays into some of the strategy here. [00:40:23] Ryan Kidd: Mm. Yeah, that's a really good question. Okay, so first off, I'll say MATS has always allowed for remote participants. We've never tried to discriminate against remote participants; we just think that being in person is so much better for development as a researcher. Even if people can only come to the program for, like, a week, we'll pay for flights — depending on the week, because we want it to be a good week, like the first week or the last one, the symposium. But we've always had that. And we also, of course, have scholars based in Berkeley but also in London, our two hubs, and we've now added Washington, DC as a third location. [00:40:59] Mainly, we want to save on costs. We don't want to support people going to an office space just anywhere, because that would cost a lot of money and would probably not be that impactful relative to coming to the Bay; we want to incentivize that somewhat. Okay, secondly: [00:41:15] one of the key elements of MATS strategy that we've always pushed for is that we are picking the best people. We're not picking the people we can have the largest delta for; we're picking the people we can accelerate fastest into research lead and field-building roles that can increase the carrying capacity of the ecosystem. [00:41:37] What that means is, from the very beginning, I've been advising — I helped Agus and Nev when they were taking over and launching SPAR. I advise Pivotal Research, I advise Catalyze Impact, and BASE, this new Black and AI Safety Ethics fellowship — [00:41:58] I'm one of the advisors there. And we've always tried to publish our MATS materials, and we share the mentors that don't make the cut but are over our excellence bar — of which there are at least a hundred, maybe 150. I've shared those, for the last two or three years, with [00:42:15] as many programs as I can find that seem like they're doing the right thing. What we're trying to do with MATS is not be gatekeepers, necessarily. We're trying to be amplifiers of people that we think have the best chance of going out into the world and staffing these other programs.
Many MATS scholars or fellows have been AI Safety Camp mentors during the program or immediately after, in an extension. [00:42:38] The same with SPAR, which offers this kind of lower-commitment mentorship opportunity, right? So they're gaining experience as research leads, as team leads, and as advisors. MATS is trying to empower those individuals, because we still think mentorship is the main bottleneck in AI safety. [00:42:57] So from that perspective, I do see it as MATS's job to help the whole distribution as much as possible. Obviously there's some cutoff — there's some minimum level of ability that is generally sufficient for working in AI safety, and it's probably going to be kind of high. I think that's just the case with some of this knowledge work. [00:43:17] But that's not to say — maybe that's true of the technical AI safety stuff — there are tons of opportunities for people who have less technical ability but are really skilled as communicators or operators and so on. I see these all the time on the 80,000 Hours job board. [00:43:32] And I think there should be more opportunities for people with different skill distributions. Does that shed some light on the strategy? [00:44:50] Jacob Haimes: Yeah, no, I think that's helpful, and it also provides some context on what the end goal is for you guys and how that plays into it. And this sort of gets into the next section, which you mentioned earlier, about the amplifiers. From what I heard just now in that description, you want to get the people who — potentially they're the other archetypes as well, which I think are iterators or ideators or something like that. [00:45:32] Ryan Kidd: Connectors, iterators, amplifiers. [00:45:34] Jacob Haimes: Connectors, that's what it was. So the role that you see MATS holding, at least moving forward, is getting people to be in those amplifier roles, so that maybe the other people who didn't make the cut [00:45:56] to get into the program can still participate, because you're developing a broader ecosystem by creating those amplifiers. Does that match what you were saying, or am I misunderstanding? [00:46:11] Ryan Kidd: Yeah, I think the amplifier archetype, as I normally conceive of it, is a bit different from this research lead or mentorship type of role that I'm describing — which I think is the main benefit of this very tail-heavy selection process, right? Where we're prioritizing excellence and minimizing time-to-impact, or time-to-research-lead or something, versus maximizing the delta that a program can provide. [00:46:39] So that's different, because those people are going on to roles that are very technical. Amplifiers — that kind of role — relatively few of our fellows, our alumni, have taken on amplifier-shaped roles. Some have, right? On our team, Claire Short was a MATS alum [00:46:57] who joined us as a research manager; she's clearly in an amplifier role. And we had Kai Konik — he's another one; he helped co-found Catalyze Impact.
Now, I think he's working at a general-purpose AI lab. So there are people out there who've done this — like Claire as well; [00:47:13] worth noting she founded Athena, this mentorship program for women, which was great, though I don't believe it's happening at the moment. So I think the amplifier skill set is a little bit different. Certainly people with technical backgrounds, like myself and Claire, have gone into these amplifier roles, [00:47:33] but you also find plenty of people who never did a PhD or a master's degree in a technical subject who are just great amplifiers, because they have the networking skills and the people skills, and they have enough technical context — or perhaps even a technical co-founder — that they can successfully navigate the very technical, jargony application process for funding and for getting credible mentors, or whoever, to buy into their project. [00:48:01] So in terms of building amplifiers: there's not actually a clear mechanism for this in the field today. It's the same as with startup founders, in some sense — there are people who complete the gauntlet, and they get funding or they don't. [00:48:21] But we now have more programs dedicated to AI safety startup founders, like Catalyze and Seldon, like I said, than we have amplifier-shaped programs. Okay, what makes a great amplifier? I've danced around this. Actually, do you have takes? I'd be curious. [00:48:43] Jacob Haimes: Oh, no, I was just saying that's the next question I wanted to ask. I do have takes — I would also see myself as an amplifier — but I'd like to hear yours first. You're the guest. [00:49:01] Ryan Kidd: Okay. Oh, thank you. Yeah. So there are three words I like to use to describe amplifiers: they're plural, liminal, and relational. They're plural in that they do a bunch of different things, right? They're jacks of all trades. They do management, they have some technical understanding as well, [00:49:20] maybe they do communications or strategy or operations — they have a bunch of different things under their belt. They're liminal in that they're much more technical than the usual ops or communications or HR roles — they can actually read papers, for instance, and understand what's going on there — [00:49:38] but the role they do is less technical than research roles; they're not necessarily in the guts of language models. And they're relational in that their outcomes are typically grounded in how others are benefited. So maybe there's a whole spectrum of amplifier-shaped people, [00:49:57] but the kind I'm talking about is the kind that is in very high demand by these technical AI safety organizations that need managers and people operations staff and all these kinds of things to help scale their organization. [00:50:14] Jacob Haimes: Gotcha.
So would you say, then, that the untapped role for this flavor of person is more of an ops role? And that potentially the stigma around ops in this space has caused fewer people to apply for these roles than might otherwise? Or what's your reasoning there? [00:50:50] Or — not reasoning, but thought process — there? [00:50:53] Ryan Kidd: It's interesting you say that, because MATS recently hired something like two to three percent of our operations applicants for our roles. So we're getting a lot of operations applicants. And I think all our team members are fantastic — I'm not even exaggerating; I think we have an exceptional team, and our operations team is no exception. [00:51:13] Two to three percent is pretty low. So maybe there is a big stigma around operations roles, but it doesn't seem to have manifested in changes to how many applications MATS is getting from qualified operations people. Maybe that's because — maybe this wasn't the case two years ago. [00:51:30] Jacob Haimes: And those people — do they fit that amplifier role, in that they can read the — okay, so that's what I'm trying to identify here. I guess what I'm saying is that the people who are amplifiers, maybe they should be applying for more of these ops roles than they are, because I do think there is a stigma around it: if you feel like you have spent time upskilling and learning technical aspects — [00:52:10] maybe you have software engineering experience, maybe you've done a master's in some sort of machine learning or otherwise related thing — there is this implicit stigma around taking an ops role. I even mentioned this on a previous podcast with Li-lian, who does [00:52:32] a lot of that flavor of work at BlueDot Impact, and we mentioned to each other: yeah, you say you do ops for this organization, and you just sort of watch their facial expression deflate a little bit. [00:52:50] Ryan Kidd: Hmm. [00:52:50] Jacob Haimes: And so — [00:52:51] Ryan Kidd: That's unfortunate. [00:52:52] Jacob Haimes: I think — yeah, I agree. But I do think that's part of the reason why we're not seeing these amplifiers apply for those roles. [00:53:08] Because, as someone who's done the hiring process, and sort of projecting as well: we want those kinds of people to apply for these roles. It's just very difficult to find someone who will, I guess. [00:53:27] Ryan Kidd: Oh yeah, I have a lot to say. I do think there's a meaningful difference between the optimal operations person for most of the operations roles I'm conceiving of — and I don't just mean at MATS, I mean in the broader AI safety ecosystem, or tech in general — and this amplifier role. I think they're quite different. I think the closest analog to an amplifier in the tech industry is a TPM, a technical program manager. Now, you could call this operations, but I think you'd be committing the same sin that most effective altruists have been making for, like, five years, which is calling everything that isn't research "operations." [00:54:03] I think operations as a category is a bit more narrow than that, at least in my ontology.
And notably, while plenty of our operations team can read ML papers, and do for fun, they don't typically have the level of ML research or engineering experience that a TPM does. [00:54:28] TPMs typically have some amount of technical experience. If you're a TPM for a software engineering team and you don't know how to code, you're going to have a bad time. Maybe you don't have to code that much in your day job, but you have to understand what goes into the coding process; [00:54:45] you have to be able to work with the engineers. So it's liminal: it sits between those operations roles and research roles, right in the gap. And I think you can have people who are operations-specced, in terms of their history, spec into these kinds of TPM roles and maybe do some technical upskilling — [00:55:06] that's great. You can also have people with research backgrounds, like myself, spec into these amplifier roles. There are examples of both. [00:55:14] You can join a great, high-functioning organization and pick up great operations skills from being on that team. For technical skills, I think you typically need to really be touching grass with AI models, or technical degrees, or something — quite a lot. I think technical skills are just harder to pick up [00:55:34] on the job. And that's why I think you see most of these TPM or amplifier types are researchers who burned out from research, like myself; maybe they're more extroverted, they like working with people, they have good people skills, and they've found their niche. [00:55:58] That's the kind of thing I'm angling at here. And if you look at the backgrounds of most MATS research managers, as an example, they come from a variety of different things, but mostly they have some academic experience, they've done AI safety research before, they're data scientists. [00:56:16] A few of them have been product managers. They come from tech startups, from consulting, from academic research positions. So it's a bit different from our operations staff — typically they've been doing operations at, like, effective companies, and in many cases some of them have founded their own companies, but typically not in research or data-science kinds of roles, which is more of what we see from the research managers. [00:56:45] Jacob Haimes: Do you think the misnaming — using "ops" for anything that's not research — has contributed to this problem? In what ways, and why and how, basically? [00:57:05] Ryan Kidd: It definitely has. Yeah. I think people should reclaim the title of operations. But that said, operations is a criminally underdefined word. Operations is what makes your organization operate — okay, well, that changes based on what your organization does. [00:57:24] But I think AI safety is kind of interesting here, right?
I wrote a blog post about this recently called "AI Safety Undervalues Founders," and I highly encourage people to read it. Some of my messages in there were: man, it seems kind of crazy that there's such a huge status gradient towards being an academic or a researcher in the AI safety field — that even founders of great projects... [00:57:49] And I was kind of pointing at myself a little, but also at a lot of people I know who founded great field-building projects or other things that aren't themselves doing research, and who get very little credit for the things they've done. And this goes beyond founders; it's also operations people. [00:58:05] My team have said, "yeah, I went to X AI safety hub, this office or whatever, and as soon as I told people I was doing operations, their face fell." You shared a similar story. I think that's crazy. It's an impact-oriented field. People don't understand: operations is really hard to do well. [00:58:22] Great operations people make or break your organization. There is this kind of elitist academic culture I have noticed. And one of the things we do at MATS to try to mitigate this is to give people across the team the ability to contribute to strategy, to write memos, to comment on things — [00:58:48] just a very level playing field in terms of who gets to offer feedback. Obviously there are some problems around salaries, because the market for operations salaries is typically much lower than for technical salaries, and we just go with the market on this. [00:59:05] Though we do offer internal incentives for people to take on project management, which is itself quite a level-up in terms of usefulness and ability. And then in things like lightning talks, our weekly sessions in the program, our operations staff regularly get up to give cool lightning talks, [00:59:25] and I think that has an impact on how people regard them. In general — I could be wrong — but I think if you asked any of my ops or community staff, they would say, "yeah, we feel respected by fellows." I'm not sure you would get the same answer if you went to [00:59:44] certain AI safety hubs and asked the same question. There is a bit of a pernicious thing happening here. Part of it, I think, is maybe unintentionally caused by podcasts like 80,000 Hours only interviewing researchers. Kind of a wild trend, right? [01:00:05] There are plenty of other people to interview, but they tend to only interview researchers. Maybe I'm overgeneralizing, but yeah. [01:00:09] Jacob Haimes: No, yeah, I definitely think that's true. At least from my perspective, on the content-creation side, it's easier to interview a researcher because you can target a thing they've just put out, and that themes the episode by default — you don't have to do as much [01:00:29] planning around what the arc looks like, what you want to talk about. So it is just easier when that is the case.
I think it's very much worthwhile, obviously, to do that extra work, to have conversations with people who wouldn't otherwise get to have [01:00:48] appearances and stuff, because I think it's really important to share the perspective and the work that you are doing. A lot of what you've already talked about in this interview is really helpful for me, as someone who is starting my own thing — or has been starting it; it's been going for multiple years now. [01:01:13] So I think that's really valuable. And I want to say also that I appreciate the pushing back against some of those seemingly established norms in some of these spaces, because I feel like that is often hard to do. So yeah, I think that's great. [01:01:38] Ryan Kidd: I got a lot of backlash from my "AI safety undervalues founders" LessWrong post. You can read some of the comments. People were like, "we don't want more founders," or, "hey, we're all for founders, but founders should also be on LessWrong all the time and should be amazing researchers too," or something. [01:04:01] I don't know — there are some valid points in the criticism, a lot of valid points, but one of the things I do note is this academic kind of bias, and also just biased spotlights. If you look at the 80,000 Hours job [01:04:21] board, they typically don't have many of the operations roles — maybe there aren't that many posted, I could be wrong. But also the career profiles: the one for founder — for top AI safety founder of new top programs or projects or whatever — seems like a stub. [01:04:37] Operations — they had an operations 101 post — seems like a stub. I've been trying to work with them to see if they're interested in this amplifier archetype, because I think it is so in demand — it is actually the dominant hiring need in AI safety organizations in the 10-to-30-FTE range. [01:04:53] It's the dominant hiring need, and it's almost as in demand as iterators at the larger companies. So it does seem like there's some clear value being left on the table here. And we don't yet have a good training process for amplifiers. I've got an article in the works talking about what that might look like. [01:05:11] Though I'll note that plenty of great amplifiers in the community have come through MATS or other field-building projects, and I think we do add value there, but that's a career stepping stone, [01:05:24] you know? It's not the same thing as a training program. I think you could have a training program, and it might be radically different from these current research fellowships, because, as we say there, one of the key aspects of an amplifier is that they're relational: their outcomes are not technical research products; [01:05:41] they're benefited humans. So you have to train that skill set in a different way. [01:05:47] Jacob Haimes: And the feedback loop is a lot fuzzier and slower, I think, [01:05:53] Ryan Kidd: Yes, that's true. [01:05:55] Jacob Haimes: so, yeah.
That's another thing you have to contend with there. Um, so I mentioned large organizations, you know, they want iterators, and you mentioned this in a post as well. And I'll have described iterators already, so we don't need to go into what that means. [01:06:16] I do think it's useful, for sure, to have iterators. The maybe concern, or issue, with having so many iterators is that it almost feels like we're training... a friend I was talking to said it's like we're training paralegals, not lawyers. So we'll have tons of paralegals, and that helps the lawyers that do exist do what they're doing faster, but we don't have enough lawyers, or, you know, people who are initiating research ideas. [01:07:06] Connectors, I think, is the term that you use for it. Because it is such a new space, we really do need to be covering the ground, like you mentioned way back in the beginning, where you're trying to make sure you have full coverage of these different research avenues and spaces. Are we overinvesting in iterators, and doing that before we have enough people who can actually do that ideation, connecting, and development? [01:07:43] Ryan Kidd: That's a good question. I think it also might misunderstand what "iterator" is trying to describe. An iterator is not an engineer who is just executing on a task. An iterator, in my conception of it, is a pretty empowered individual who has strong research taste but is executing on that within a paradigm. [01:08:03] An example: I am, or was, a quantum physicist, and I was an iterator. As a PhD student I was taking established theories of Bose-Einstein condensates, quantum chaos, cold matter physics, and trying to make marginal progress by forming new hypotheses and testing things in new ways and so on. [01:08:25] But I wasn't inventing new paradigms of science. I guess maybe I was trying to do some theory-empirics interplay stuff, but I don't think it was the kind of interdisciplinary knowledge transfer that is archetypal of connectors. That is what connectors are doing. [01:08:46] When I asked Buck Shlegeris at Redwood how he would make more people like him, he said, I would probably get them to read a bunch of game theory textbooks, evolutionary biology, history, and psychology. It's like, what? Because that's where he mines his ideas. [01:09:09] Notably, his MATS stream does not do that; he trains iterators, typically. There are some people who are great at both, like Alex Turner as an example, with his shard theory work and all these very theoretically informed, connector-level things. [01:09:25] Okay, so I've said my piece about defining these things. I'll say this next: when I talk to the people I consider the archetypal connectors in the field, people who are running labs, they make it clear that they do not need more ideas right now. At least given their beliefs about how to make marginal progress on AI safety on the whole, they need people to execute on an overabundance of ideas.
[01:09:52] So if they're right, then we're doing the right thing by making more iterators. There are other people who think all the ideas we have are bad, there's no way they'll ever work, we can rule them out based on some heuristic, hand-wavy kind of argument. [01:10:10] Not to knock heuristic, hand-wavy arguments; those have been very useful in deciding what directions to pursue. But a lot of people rule these things out and say, okay, no, we need radically new ideas, let's just come up with new ideas. And I say to that: how will you know it when you see it? It feels like at the level of analysis at which you're ruling out these ideas, you could rule out anything. [01:10:28] I don't feel like these ideas have been ruled out. I feel like alignment MVPs are still a very valid thing to mine, and hardcore interpretability is a very valid thing to keep iterating on. And notably, people who do Neel Nanda's training program are coming up with original hypotheses and research directions in interpretability in the first 10-hour application stage. [01:10:51] Sure, they're not redefining the field of interpretability at that point; maybe they will go on and do that. But I don't think being an iterator is tantamount to being a paralegal at all. It's just much harder to do the kind of ideation and hypothesis testing and so on that iterators do, I think. [01:11:11] Jacob Haimes: Okay. So I think then maybe what I was seeing is a little bit of a disconnect from how you're describing it. The sort of motivation behind this is that I've interacted with a couple of MATS alumni in the past. I work with Apart Research; we also have a fellowship, and one thing that we've emphasized a lot in it is ideas coming from the participants. And I have found, in one or two cases, that with a person who has been through MATS and has really good technical chops, when you get to the point of, okay, let's do the research process, so to speak, forming the hypothesis rigorously and then testing it and analyzing the results, that isn't there to the extent that I would want it to be to fill that role. [01:12:27] And so that's where that question came from. So then the question is: how do we improve these training processes to make sure they meet that iterator bar you're outlining? Because if they do, it sounds like that's exactly what you're looking for, very intentionally. [01:12:52] But then how do we make sure we actually create the comprehensive iterator, or the half-iterator, or something like that? [01:13:01] Ryan Kidd: I mean, the very best MATS alums, I think, are now MATS mentors. You see a lot of them today; they're also running teams at labs. Now, I do think most of the impact of MATS, like in most of these long-tail fields, right, like academia and AI, is going to come from relatively few people. [01:13:21] There are log-normal or maybe even power-law distributions of research impact. That's just how it is.
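To make that long-tail claim concrete, here is a minimal, purely illustrative sketch; the researcher count and distribution parameters are arbitrary assumptions for demonstration, not anything measured from MATS data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "research impact" scores for 10,000 researchers, drawn from
# a log-normal distribution; a larger sigma means a heavier tail.
impact = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)

# Fraction of total impact contributed by the top 5% of researchers.
top_5pct = np.sort(impact)[-len(impact) // 20:]
share = top_5pct.sum() / impact.sum()

print(f"Top 5% of researchers account for ~{share:.0%} of total impact")
# With sigma=2.0 this typically lands well above 50%, which is the sense in
# which most of the impact comes from relatively few people.
```

Varying sigma, or swapping in a heavier-tailed Pareto draw, changes how extreme the concentration is, but the qualitative point about the tail dominating total impact holds under either assumption.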
Um, so yeah. I also often think that, in terms of program design at MATS, selection is more impactful than training, right? And sometimes we're going to get selection wrong. [01:13:44] But on the whole, I think we have a pretty low false negative rate; we have a pretty low rate of rejecting people who are really good. We do tend to accept a lot of people from different backgrounds because we value diversity, and, you know, money isn't really our limiting factor. [01:14:04] Okay, now it feels like I'm pushing blame aside. So let's actually talk about what MATS tries to do to instill research taste, and what it could do better. [01:14:13] Basically, what we do for research taste is we require people to pass milestones. We have a mid-program milestone that effectively gates entry into our extension phase, and it's the largest component by which we gate entry other than mentor feedback. [01:14:36] It's also used heavily in our later recommendations for jobs and so on, because we do offer references and that kind of thing. People frequently ask us who they should hire, and we tell our fellows: if you do well in MATS, you give us good data with which we can recommend you; if not, we can't really do that. [01:14:54] So they have some incentive to perform well at these checkpoints. And this checkpoint, the research plan, well, it isn't the best thing in the world. We basically built it, historically, to be this mid-program checkpoint where they put in their theory of change for their research, their proposed plan, and the AI safety threat model it is actually trying to target. [01:15:15] We've iterated on it many times over the years, but it's sort of our MVP for assessing a person's ability to articulate the reasoning behind their research and justify it. It has helped us select people for the extension historically and provided general credibility for our ability to assess fellows. [01:15:41] But there are substantial differences among mentors in the degree of support they provide on how to do hypothesizing and ideation and that kind of thing. We have workshops on this and we have seminars with leading researchers, but, just like in academia, you really can't make up for whether a mentor is great and is going to give you all the tools or not. [01:16:11] That said, some people make their own way. Some people have absent supervisors in their PhDs and they just work out how to do this sort of thing. So I think the key thing here, ultimately, is more reps, right? Whether you're really good or bad, whether you have a great supervisor or a bad supervisor, having more shots on goal in terms of creating interesting hypotheses, testing them, seeing if they pan out, and working out what was right: this is the main way to build up your research taste.
And MATS is trying to provide that, you know, by having large programs with some feedback loop process there. And yeah, I've read a lot of the literature on how to build research taste, and I don't feel that there is a very solid pedagogy here, unfortunately. [01:17:01] Jacob Haimes: Yeah. [01:17:01] No, I agree. I mean, as far as I understand, that's why the PhD exists, and why that's what's been done for such a long time: the best way to do it is to get someone who knows how to do it, and then they watch you while you do it a bunch of times, and you [01:17:20] Ryan Kidd: Yeah. [01:17:20] Jacob Haimes: take shots on goal, and they give you pointers. [01:17:23] But really, it's more about you just doing it and developing that over time, and not falling into bad habits along the way, which is what that other person helps with. So I think that makes sense. I think there's got to be a way to improve on this sort of method, and I think programs like MATS are a way towards that. [01:17:51] The last thing I want to make sure we touch on is that you've mentioned, in a previous talk, I think, and also in writing, that you, and therefore MATS, want to focus on existential risk reduction specifically. [01:18:07] And so my next step is: okay, but if you're in the business of having a wide tent and trying to get as many people involved as possible, can you at least share the why, the reasoning behind that? Just to provide a little bit of context around it. [01:18:35] Ryan Kidd: Yeah, good question again. So first off, I'll say we don't support reducing existential, or as we like to say now, catastrophic, risk to the exclusion of all other theories of change. We do have mentors in the program who are working on things like digital sentience research. We have people working on, or have over time had people working on, gradual disempowerment kinds of things, risks from totalitarian uses of AI, and so on. And of course, most recently, Will MacAskill, working on flourishing as well. So these do feature in the portfolio, and they are part of our whole portfolio approach. [01:19:16] So when I say catastrophic or existential risk, I would say this is the central thing we are targeting, because it seems like the biggest bad in many ways. [01:19:29] Jacob Haimes: Mm-hmm. [01:19:30] Ryan Kidd: Now, there's plenty of epistemic disagreement here, right? There are people who think this is not very important. [01:19:37] There are people who think it is important, but perhaps on par with, say, risks from totalitarian use of AI. And to them I would say: well, look at the portfolio of research MATS is covering. Does it seem to you like this research only pays off in scenarios where AI is an existential risk? The oversight and control work, the evals, including for CBRN (chemical, biological, radiological, and nuclear) weapons, the sycophancy evals, [01:20:08] interpretability in general?
No, I think all this research, almost everything we do that is trying to reduce the risks of catastrophic AI harms, actually pays off in a bunch of scenarios where AI never harms people, because it makes AI more trustworthy, more robust, more reliable, less likely to cause small harms, and so on. [01:20:29] So I think the "shoot for the moon, land among the stars" kind of approach is valid here. Try to target the hardest thing, and along the way try to have an impact in all the other worlds where the worst-case scenario doesn't happen. [01:20:48] I think it's very fortunate that we can in fact have this kind of portfolio. Does that answer your question? [01:20:57] Jacob Haimes: Yeah. No, it does. And something that I am on the record as saying is, well, if we resolve certain issues, we don't ever get to the issues that a subset of people are worried about. It sounds like that is in line with what you're saying as well. So that also helps provide context for why you choose the research directions you do, and that there is a lot of coverage. It's not just one agenda, or two or three agendas; it's a lot of theories of change, a lot of ways to go about it. And as you've been scaling, you've been able to cover more of that, which I think is awesome. [01:21:44] And do you have a couple more minutes to do a lightning round? [01:21:47] Ryan Kidd: Yeah. Let's do it. [01:21:49] Jacob Haimes: So the first one is: what's overhyped in AI safety right now? [01:21:56] Ryan Kidd: Maybe, uh, [01:21:58] I don't know, security and governance or something. [01:22:02] Jacob Haimes: Mm. [01:22:02] Ryan Kidd: Yeah. That's a spicy take. Yeah. [01:22:06] Jacob Haimes: Brief reasoning, just one sentence. [01:22:09] Ryan Kidd: Yeah. The market will provide for most AI security things, and AI governance only matters if the administration cares enough to implement your governance policies. [01:22:25] Jacob Haimes: Yeah. Yeah. That is too real. Alright. What needs more attention then, or more resources? [01:22:36] Ryan Kidd: I think a certain type of multi-agent or cooperative AI work is very neglected and very important to work on. And the reason is that we are soon going to be in situations where people build AI agent economies or infrastructures, like an agent-based company, or leverage many teams of AI agents working together. [01:22:59] Jacob Haimes: Gotcha. Okay, cool. At this point, you've trained hundreds of people. What's the pattern you see in the people who actually make it versus those who don't? And is there even a predictable pattern at all? [01:23:19] Ryan Kidd: Okay, I could be really trite here and say it's the conscientiousness trait, you know, the main predictor of success among personality metrics. Maybe I've just internalized a model and I'm projecting it, but I do see that the people who've really succeeded are extremely hardworking. [01:23:40] Also people who just keep failing but then keep going; they learn from their mistakes and they keep trying.
Um, and they don't get too discouraged by failure. [01:23:52] They just pick up again and start the next thing, be it a research hypothesis, because you're going to fail so many times in your research journey; you're going to fail at hypotheses. Startups: most startups fail, and most startup founders had failed startups before the one that succeeded. It's the same in every other career too: many rejections in applications and so on. [01:24:13] Jacob Haimes: Yeah, I have a friend, he's actually the co-host of my other podcast, who says research isn't about finding the successes, it's about killing your babies. Which is another phrase I've heard in startups as well. And I think that's a good way to think about it. [01:24:30] Like, yeah, this is going to suck, this is going to be hard, we're going to have a lot of failures to get to the successes. And sometimes that's not the case, sometimes you'll luck out, but usually it's going to take a lot of effort. [01:24:44] Ryan Kidd: I think a stoic philosophy is good here. You know, it's about the process, not about the destination. [01:24:51] Jacob Haimes: Mm-hmm. Okay. And then the next one is: what's the most common misconception people have about what it takes to contribute to AI safety? What do you wish you could tell every applicant before they apply? [01:25:16] Ryan Kidd: The most common misconception? Oh boy. [01:25:24] I guess the most common misconception is that you have to fit a very particular mold. People seem to think that MATS, because the acceptance rate is 7% or something, is super hard to get into. [01:25:41] But the kind of individual that succeeds in AI safety is maybe not the person that gets highlighted on the 80K podcast or other things like that. Most people who get employed and have impact in AI safety are just a variety of different types of people, in terms of backgrounds and their specific talent archetypes. [01:26:04] Yeah, there are people whose first paper, you know, they were undergrads and they joined MATS, some of the program like Hoagy Cunningham, and their first paper was an incredible smash hit that exceeded all expectations. Or there are people who are like, hey, I'm 55, or some of our team members are in their late fifties or early sixties, and they just pivoted from very long, exceptional careers and are interested in working in AI safety. [01:26:34] There are jobs for everybody, potentially. [01:26:39] Jacob Haimes: Yeah, I think that's definitely a big one: don't discount your expertise, your competitive advantage, because oftentimes there's a way that it is unique and valuable for AI safety. Like, we need all kinds, I guess, to make AI safe, or safer, I guess. I think that's a big one. [01:27:08] And don't discount yourself before getting in there. [01:27:15] Ryan Kidd: And it isn't just making AI safe, right? That's just the starting point. It's making AI safe and then making the future one in which there's abundant flourishing for all people.
And this process requires all people, because it impacts all people. [01:27:33] Jacob Haimes: Yeah. Alright, last two questions. First one is: what is the part of your work that grinds your gears, gets you irritated, that you don't like doing? The part where it's just like, ah, man, I've got to do it again. What is that for you? [01:27:52] Ryan Kidd: Hmm. Oh man, performance reviews and letting people go really suck. Well, okay, some performance reviews are great, but delivering negative performance reviews, [01:28:04] that is a painful process. Sometimes there are tears involved, and you have to be very empathetic and hold the person's needs, but also hold the organization's needs paramount. So yeah, it's a tough process. [01:28:23] Jacob Haimes: Yeah. And then, last question. If someone is listening and they're thinking, I want to work in AI safety, but I don't know how, what's the actual honest advice you'd give them? It doesn't have to be polite; it's the real version that you think will actually be valuable to them. [01:28:47] Ryan Kidd: I would say stop anchoring on timelines. If you're asking the question, how can I be useful to AI safety, stop thinking, oh my God, AGI is going to come in 2027, what can I do, what can I do? No, just say: maybe your work doesn't matter before 2027. That's possible, right? Many people can pivot right now, because they have tons of experience or they have the opportunities, to work that impacts AI safety [01:29:15] given AGI arrives in 2027. Many more people don't have that opportunity and have a longer lead time. But if they start today, they can definitely become hugely impactful if AGI arrives in 2033, which, by the way, is the median Metaculus estimate. That's almost eight years away. [01:29:37] You've got a lot of time. You could do a PhD in that time. You could get a job in Congress or something. Well, maybe not as a Congressperson, but, you know, supporting Congress or the White House. There's actually a lot of time in these longer-timeline worlds. So don't worry so much about early AGI scenarios if you don't think you actually have the capacity to be maximally impactful in those scenarios. [01:30:00] Anchor on longer-timeline scenarios. Give yourself time to achieve, build yourself up, and acquire stability, because half of all futures, right, have AGI arrive after 2033. We need people for those futures. More people, probably, because AI is going to be even bigger. [01:30:22] Jacob Haimes: Half of all Metaculus predictors. [01:30:26] Ryan Kidd: Sure. [01:30:30] Jacob Haimes: Okay. Awesome. Thank [01:30:31] you so much for joining me. I really appreciate you taking the time. I think this was great, and I hope it's helpful for MATS as well. [01:30:44] Ryan Kidd: I am sure it will be. Thank you so much, Jacob. It's been great. [01:30:47]