IAIS Su Cizem ===

[00:02:42] Su Cizem: Hi, I'm Su Cizem. I'm an analyst at The Future Society.

[00:02:48] Jacob Haimes: Awesome. I'm really excited to have you. Before we get into it, I would like to know, just in one sentence: how would you describe the AI governance and policy [00:03:00] space right now?

[00:03:00] Su Cizem: Yeah. I think right now it's a particularly difficult space. Geopolitically, it's a very difficult space in which to do international AI policy, but it is fast evolving, I'll say that.

[00:03:23] Jacob Haimes: Awesome. So walk me through your trajectory in AI safety and AI policy. How did you get to where you are now?

[00:03:33] Su Cizem: Yeah, very happy to. I guess this is the part where I can spend a little bit of time, because there were three shifts in my career trajectory that led me here. The first one: I have a background in philosophy, and I was mostly interested in staying in academia. I wasn't really thinking about any kind of impact I could have in the world; I was mainly concerned with what I liked to do and what I thought I was good [00:04:00] at. So the first shift was switching from thinking about my career like that to thinking a little more about: okay, what can I contribute, what does the world need from me, and what can I give it? So, thinking a bit more about an impactful career.

[00:04:19] Jacob Haimes: So that was like finding 80,000 Hours.

[00:04:21] Su Cizem: That was finding 80,000 Hours. Actually, it wasn't even 80,000 Hours. I took an applied ethics class as a requirement right before I graduated, and that was the first time I came across Peter Singer and his arguments. I thought, okay, maybe we have a moral obligation of aid if we're in a position to give it, both financially and capacity-wise. If I was born in this random place at this random time with all of these privileges, and I have these skills, maybe I ought to do something. That was the main argument that stayed with me.

[00:04:55] Jacob Haimes: Mm.

[00:04:55] Su Cizem: I was considering at the time staying in academia and doing a PhD, [00:05:00] and a philosophy PhD was going to take something like six years. I was interested in philosophy of mind. I knew I had the interest and the curiosity, but I couldn't really justify doing it, knowing what I knew at the time. So I took a year off. I thought, okay, if at the end of the year I still want to do it, I can always go back to academia, but I'll take the year to see what, if anything, I can do in the world. So I started working for a profit-for-good company in the Netherlands. It had nothing to do with AI; it was basically a business model where a startup generates profit and donates it. Through that I met the whole effective altruism Netherlands ecosystem. My boss at the time was quite nice and put me through an intro to EA course. That's how I found out about AI safety, and it was good timing, because a month after I completed the course, I guess it was the BlueDot course at the time, [00:06:00] or I don't remember exactly which one, but
ChatGPT came out. So it became interesting, because when I was studying philosophy we had thought about artificial intelligence as a theoretical thought experiment: what if you had machine intelligence, and could it be sentient or conscious? That was the extent of what I was interested in. When I started thinking about AI safety, it became more systematic: okay, is the world ready for this? Are we ever going to get here? And then it was suddenly in front of me, something I could interact with. That's when the second shift in my career happened, into AI safety. It was more like: well, it's not this theoretical thing, it's actually right here in front of me, and if I'm in a position where I can engage with it, I ought to. So I started looking into every resource I could find, trying to understand what was going on. And I eventually hit technical bottlenecks. I [00:07:00] just had a philosophy degree, so I had no technical training. I didn't think like a computer scientist, and I didn't understand how the experiments worked. So then I justified saving everything I was making and going back to school for a year, as a way to pivot into AI safety and see if maybe there was something I could do on the ethics side.

[00:07:21] Jacob Haimes: So then, just to make sure I'm following the timeline here...

[00:07:25] Su Cizem: Like, '22.

[00:07:26] Jacob Haimes: That's early 2023,

[00:07:28] Su Cizem: Sorry, I graduated in 2022, and this was around November, December.

[00:07:35] Jacob Haimes: Right, yeah. So that's when ChatGPT came out. But then the decision, like, okay, I'm going to go back to school, would be, like, early...

[00:07:42] Su Cizem: Yeah, something like that. I was trying to upskill myself with any resources I could find. And to be fair, at the time I wasn't really using ChatGPT to code; I wasn't thinking about it in that sense. I was still on these open resources. But then I got to a point where I thought I really could justify [00:08:00] taking one year to find a program that could give me basic technical training, see if I had the disposition to do this, and understand from there. Worst case, I thought, I could then justify walking away from it and not think about it afterwards, or I could just go back to the philosophy end of it. So then I went back to school. I found a program in London that is exactly that: one year. There wasn't a specific one on AI safety, and I don't know if there is now, but it was on AI and ethics. They would teach all of the machine learning modules and coding, you would actually get into the ethical implications with a couple of courses, and we would get to do a project together at the end of the program. So that was how I went from academia, to a more impactful career, to AI safety.

[00:08:53] Jacob Haimes: Okay.

[00:08:54] Su Cizem: And then there was a third switch, into AI policy, which was something I didn't see [00:09:00] coming, but it turned out to be the best place for me at the time.
[00:09:05] Jacob Haimes: So maybe before we go on to that, I just want to build out the mental map in my head of when I met you and where you were in that process. So, you're doing this, what was the program called?

[00:09:22] Su Cizem: It was an AI and Ethics MSc at Northeastern University of London.

[00:09:27] Jacob Haimes: Gotcha. Okay, cool. So you're doing this master's, and then you're also doing sort of upskilling stuff, I think, and then you signed up for a BlueDot course.

[00:09:41] Su Cizem: Yeah, so, oh my gosh, it was almost two years ago when we met. At that point, by default, I like to be quite busy, so I thought, if I'm going to go back to school for a year, I'll take every bit of free time I have to get as many [00:10:00] experiences and as much upskilling as possible. As a student I had very little capacity, visa-wise, to work, but BlueDot was offering a course that was remote and just a couple of hours a week, and I found it super helpful for my way of thinking about the field. In my courses at the university, we weren't really engaging with those concepts; we were still heavily on the ethics side. People were quite opinionated, very skeptical about safety. The professors were also on very different ends of the spectrum: we had professors who were more computer scientists and ones who were more philosophers, and oftentimes they didn't see eye to eye. So it was an interesting position, and I thought at the time that it would be useful for the BlueDot course to be carried into my school somehow. So I was running a weekly intro to AI safety course, just to get my classmates interested in it too. And you and I met, I guess, around that time through BlueDot; I was the one who reached out to you. [00:11:00] Then we did a webinar together towards the end, for the public, or more for professionals who were a bit more senior and also not technical, who were interested in finding out about AI safety. So that was while I was a student. During university, in parallel, around the last six months I guess, I also started working as a research assistant at uni, but this was for an international relations project. I had no formal training; I hadn't even taken a class on international relations in years, but they needed somebody who was a bit more on the computer science and AI side, because they were doing a cyber governance research project on the US, China, and middle powers. So that was a great opportunity. It was the first time in my life I had engaged with cyber governance or cyber policy. I assumed a lot of things; at the time I thought there are these agencies that take [00:12:00] care of everything. I was very ignorant about how cyber policy worked, how global governance worked.

[00:12:07] Jacob Haimes: Yeah. And instead we have these agencies that don't take care of anything.

[00:12:10] Su Cizem: Yeah. And even beyond that, really looking at the literature, I was taken aback by how differently countries were dealing with cyber governance.
A lot of it was tied to ideologies, different political ideologies, economic ideologies, but a lot of it was also tied to what kind of incidents had occurred in their jurisdictions and what happened as a result: how did countries come together to talk? And again, this was all very divine timing, because around 2023 there was the first Bletchley summit. So we were doing this research where, for the first time, I was thinking: okay, in cyber governance, it doesn't really seem like we're ready for the kind of world where I see very dangerous, capable AI models emerging. The [00:13:00] situation I'm reading about right now is not ready for that.

[00:13:03] Jacob Haimes: Mm-hmm.

[00:13:03] Su Cizem: Then Bletchley happened. And that was one of the first times I had this realization: oh, okay, so there are actual mechanisms. It doesn't have to be through these multilaterals; you don't have to cut through all of this bilateral red tape or go through very formal processes. You could just have a topic, invite the relevant people, and have a discussion about AI safety.

[00:13:28] Jacob Haimes: Okay,

[00:13:29] Su Cizem: and, sorry, super long, but that was when I first got interested in AI governance. And I had one experience where I attended a track two as an AI advisor. It was a super lucky thing, just the right place at the right time. And I left horrified. That was the most radical transition for me, from AI safety [00:14:00] to AI policy, more than anything else. So a track two is when ex-government officials, or government-adjacent officials, come together representing their jurisdictions' policies. This one especially was people who had a big network, former diplomats, who cared about AI and wanted to discuss what was going on. And this one was on AI and biosecurity. We had US representation, EU representation, India, China, and we also had the experts, people working at the cutting edge of AI and biosecurity.

[00:14:45] Jacob Haimes: Mm-hmm.

[00:14:46] Su Cizem: For three, four days we did a lot of information exchange. And at the end there was a scenario, and the countries, the governments, had to react to the scenario: what they would do given the policies that they have.

[00:14:58] Jacob Haimes: So is this like role play?

[00:15:00] Su Cizem: It was kind of like that, a bit. A lot of the time you exchange information, you get everybody up to speed about what's going on, and then you give them: okay, now that we're here, something like this has happened. The scenario at this specific track two was: what if there was a non-state actor using resources and models from multiple countries to attack a specific country, but to track it you needed multi-country coordination? And it was a bioweapon, so the way you respond also really matters, because if you're late, it spreads. I think these scenarios are usually really helpful for policymakers, because then they can say: okay, given this, this is what our country would do.
[00:15:41] Jacob Haimes: Mm-hmm.

[00:15:42] Su Cizem: And at that point, everybody was convinced; nobody was the skeptic about AI and biosecurity risks. Everybody knew what the dangers were, and they were like, yeah, even if we wanted to, our policy is X, Y, and Z, and we couldn't do that. And my thinking at the time had been: oh, we just had COVID, obviously we have [00:16:00] good coordination mechanisms, especially around pandemic preparedness or response and detection. None of that was true. There are some, but they're so underdeveloped, and even more polarized than they were before. So on the plane back, that was my main thinking: okay, even if AI does go well in this world, if AI safety in terms of model capabilities goes well, I don't think politics will be able to maintain it. Maybe geopolitically we're just not ready. And I thought, if I could do anything, it should be in this space, at least for the first couple of projects I could contribute to. That was my intro.

[00:16:42] Jacob Haimes: So what did you do next?

[00:16:43] Su Cizem: So around the time I graduated, yeah, this track two happened in May, and a couple of weeks after that, actually a week after that, there was the Seoul Summit.

[00:16:58] Jacob Haimes: So that was the first AI [00:17:00] safety summit, is that right?

[00:17:01] Su Cizem: Bletchley, let's say, was the first AI Safety Summit. [00:18:00] So I was very lucky to be in the UK. I was actually at Bletchley; I wasn't at the actual summit, but there was a side event, so I was around in the UK when this first meeting about AI safety was happening. And I saw that and thought, okay, at least you have something [00:19:00] like this, where countries and stakeholders that usually wouldn't come together are putting their differences aside and taking the situation at hand. They're saying: okay, there might be outcomes that are catastrophic for all of us, so what do we do? And then there was a commitment to continue doing this with a successor summit. The AI safety institutes were launched; the UK started them. I saw that as really moving the needle in a positive direction. Then Seoul happened: you had the frontier model commitments and AI safety commitments, where the companies themselves were signing pledges, and you had the responsible scaling policies. So there were very tractable, actual improvements. And around the time I was graduating, the next one was going to be hosted by Paris. So I started looking into how I could get involved. I had ideas about what needed to happen at these summits, how to formalize them, how to get a lot more out of them, maybe turn things from being voluntary into [00:20:00] something a bit more enforceable, improve what they could do, improve the scale. Eventually I got involved with an organization that worked in France and the French ecosystem for AI safety, and they were helping around the summit. So I got to be present there, and that was my main introduction into AI policy.

[00:20:21] Jacob Haimes: Gotcha. I know that was a lot, but I think it's pretty interesting in this case, especially to track the trajectory, because it's not, you know, what people think of.
Typically there's a lot of jumping around and shifts, and I think that's more accurate to what most people experience than the typical linear path.

[00:20:52] Su Cizem: Yeah, I totally agree. And I also think that's maybe the biggest bias I have [00:21:00] about the whole situation. I knew this about myself from the beginning, when I started thinking about what I wanted to do with my career: there wasn't a specific role I saw for myself. It was more that there were projects I believed in, some situation in the world that I thought isn't good, or sustainable, or could be improved, and what are the near-term things I can do to contribute to it? I've only ever thought about my career in sprints of a couple of months. So oftentimes I talk to people who are trying to get into safety, or are interested, or trying to find out more, and, I mean, this is a very young field; we're talking about three, four years. You and I met two years ago, and that's, I think, still considered quite old in this space. Everything is quite new. The technology is growing very rapidly, the space is growing very rapidly, but it's also quite niche, in the sense that there's still not that much going on, and most of us consult very similar resources. And the funding comes [00:22:00] and goes; this is not a very sustainable business model for any industry. I do see it changing in the future, but this is a good disclaimer: everything we'll talk about in this podcast, everything we'll talk about in my career, I can only speak to for the near future. I don't know what's going to happen with AI safety and AI governance as a career in the long term. We're finding out as we go.

[00:22:25] Jacob Haimes: Yeah, absolutely. And maybe that's also a good time to ask: we said AI policy and governance as a sort of job title, so to speak, but what does that actually mean? It's a very amorphous thing that I think a lot of people aspire to, but I actually don't think I have a concrete understanding of what it means, necessarily. So I'd at least like to hear, from your perspective, [00:23:00] what it means, so that we can orient for this conversation,

[00:23:02] Su Cizem: Yeah.

[00:23:03] Jacob Haimes: and then know that we might need to ask that again in the future.

[00:23:07] Su Cizem: Yeah. I mean, I don't have a straightforward answer; I have an analogy. We were thinking about how to describe my position to my grandmother. You should be able to do that with how you talk about your work, I think, in general, and in our organization we also talk about that a lot: to somebody who doesn't know anything about this, to your grandmother, how would you describe what you're doing?

[00:23:32] Jacob Haimes: Mm-hmm.

[00:23:33] Su Cizem: It is very hard. I have an analogy. It's very hard to say without it coming across as corny, and I always use it as a way to try to motivate people who are in decision-making positions. So I'll tell you the analogy and I'll explain what I think about it.
So people make the analogy of AI being this genie that's almost out of the bottle, or is [00:24:00] going to come out of the bottle. AI governance is the mechanism that decides what's in the room, and what the room looks like, that the genie is released into. So I guess it's the governing bodies. And there are different ways of thinking about them: there's national AI policy, there's company AI policy and governance, and then there's international AI policy and governance. And then there are different parts of it, like near-term versus future governance. So yeah, AI governance is a hard-to-define place, but that's how I think about it. And depending on where you are on that spectrum of AI policy positions, you will be doing very different things in your day-to-day job.

[00:24:53] Jacob Haimes: Gotcha. And what is it that you do?

[00:24:55] Su Cizem: So I am part of the civil society [00:25:00] stakeholder group of AI governance. I'm not associated with any specific countries or jurisdictions in my work; I work primarily on global AI governance. This is mainly because of a couple of things. One is that, when you're deciding what you want to do in AI policy, you have to ask yourself: what is the package I come with as an individual? What kind of citizenship do you have? What are your ideologies? What are the things you're willing to put up with, and what are your boundaries? Like, no matter how important I think the work is, I can't overlook it if someone has X, Y, and Z ideology, if you're thinking about supporting a political party. When I asked myself those things, there weren't any very strong positions I found myself in. I also move around a lot, so I thought I probably shouldn't be associated with any particular government, [00:26:00] at least in my work. I have my own set of values, but I also wanted to address a bit more of this boiling frog problem, where countries respond to incidents in their own way, but because they're not cooperating, the benchmark of safety just keeps getting lower and lower.

[00:26:20] Jacob Haimes: Mm-hmm.

[00:26:21] Su Cizem: So I focus a little more on the international cooperation, coordination, global governance part: what are the things we could do collectively to make this problem a little better? And then we work with a lot of stakeholders across that spectrum, so governments, different governments, diplomats, politicians, other civil society actors, researchers, people from AI companies, without having specific political affiliations.

[00:26:49] Jacob Haimes: Gotcha. Okay. And you just came back from the AI Safety Summit in India relatively recently. Maybe you can say a bit about [00:27:00] that, and what you were doing there, to help contextualize this.

[00:27:03] Su Cizem: So, yeah, the summit series has continued since the Bletchley summit, and Seoul, and Paris. Paris was a bit more of a shift from AI safety towards something else. At the conception of the AI summit series, they were quite narrow in their focus.
They were quite narrow in the countries and stakeholders they included. They pointed to all of the immediately relevant actors that, at the time, I think they thought were worth getting together. But obviously that conversation has grown as AI's influence has grown in the world. A lot more people in the world are using AI models now; people are adopting them into their businesses and governments. I think the majority of the world is now starting to engage with AI models a lot more. So the conversation has gone from a very specific catastrophic, extreme-risk focus to more like: okay, how is this technology [00:28:00] being used? It has moved a bit more towards this AI adoption crowd. That happened in Paris, and the torch was passed to India; India was the successor. And that actually happened a year later. Last February was around the time I started working with the French Center for AI Safety, and this time around I was working with The Future Society. This India summit was the biggest one yet; I think there were over a hundred thousand people who were involved, who were present. From a personal perspective, it was extremely fun and wonderful to be there, just seeing how much excitement there is about AI and how differently people see it and interact with it. And it's the first time a global majority country has hosted the summit. India alone is over a billion people in population; it's very hard to imagine that scale. [00:29:00] And they tried to make it as inclusive, as broad a category for AI, as possible. That being said, it looked very, very different from what the first summits were dealing with when they were conceptualized.

[00:29:17] Jacob Haimes: Gotcha, but what did you do there yourself? Were you running an event? Were you participating in events? Maybe, what did your preparation look like, if anything, and what were the things on your plate?

[00:29:36] Su Cizem: Yeah.

[00:29:37] Jacob Haimes: Maybe, I don't know, that kind of thing.

[00:29:37] Su Cizem: Yeah. Okay. I guess something happened between the last summit and this summit that is worth mentioning, because that's the majority of what I was working on, but a quick answer to your question: yes, we hosted two events as The Future Society at the India summit. One was an official main summit event and the other was a side event.

[00:29:58] Jacob Haimes: Hmm.

[00:29:58] Su Cizem: The main summit event was [00:30:00] a session led by my colleague, Kayo Mdo, who was leading a panel discussing cross-border infrastructure for incident reporting: when something happens, how do you report it, who responds, and how can that be strengthened? Or, more like, where are we now? They did it in partnership with the OECD, which hosts the biggest incident monitoring mechanism. I led a session with our global team leader, Niki, on AI red lines. This was more about what jurisdictions are doing to govern unacceptable risks in their territories, and where international coordination could benefit them.
So there are some risks that are a bit more contained to national contexts, depending on how they think about certain human rights laws and what's important for their country's [00:31:00] situation. But there are some that are just inherently cross-border, like certain CBRN risks, certain cyber risks, certain loss-of-control scenarios. So where do we escalate those? How do we cooperate when we need to on those? Those are the two topics we focused on and that our main engagements were around.

[00:31:19] Jacob Haimes: What would a win have looked like for you at the event? Coming out of it, or going into it, you're thinking: this is what I would like to happen, and if I leave being like, yes, I'm happy we achieved this. What were those for you?

[00:31:41] Su Cizem: Yeah, I'll say, I mean, there was definitely this feeling I had when I was leaving Paris, and I think a lot of people in the AI safety community had it as well: we just had so much higher hopes for at least the very narrow topic of AI safety, on catastrophic [00:32:00] risks, because the previous summits had felt like they had so much momentum. And Paris, by broadening the conversation, felt like it kind of sidelined safety. And this was happening in a geopolitical environment that was quite unique: the US administration changed, the UK administration changed, the whole discourse around AI safety changed, and all of this was happening around the Paris summit. So I don't think we had enough time to build expectations in a realistic way. I think we were just assuming the AI summit series would go untouched; that's what I was thinking, at least. For India I had a lot more time to prepare, and a lot more geopolitical realism, so my wins looked quite different. I have this paper that I had written ahead of the AI Action Summit on global priorities for international cooperation, where we make a bunch of proposals [00:33:00] on what the AI summit series should do, what the AISI network should do, what should happen with funding and benefits, all of these big ideas: we can make these recommendations for the summit series, because that's the appropriate platform for them. We ended up not sharing that ahead of the summit, just because we realized how ambitious and probably unlikely it was going to be. So it ended up being shared as a bigger "these are the things we think should happen in global governance" kind of paper. I did this with the French Center for AI Safety, and there was an idea there on AI red lines that carried over. That's been the main project I've been engaging with, at least since the last summit up until this one. And going into this one, the scope was much broader, because the global majority countries, especially the ones that don't have their own AI models, at least not at the capability levels of the West and China, [00:34:00] have a lot of very different concerns: mass adoption of AI, what that's going to do to their societies, not having a seat at the table where decisions are being made.
And I saw India as the platform to try to have those conversations. So a win for my very narrow focus, red lines and incidents and AI safety in this advanced-AI sense, was: in these high-level talking points, will leaders, AI company leaders, country leaders, discuss AI risks in a way that's constructive? Is there going to be dialogue about red lines? Is there going to be an appetite for international cooperation that's a bit different from the previous summit? Those were the small wins we were looking for, though they're not small, in the sense that incremental progress in governance is still progress. I didn't have hope that the declaration would be something super life-changing; I probably knew it would be something voluntary, but I thought it would at least get the US and China signatures, and that's always an important indicator that there's still some willingness to cooperate.

[00:35:25] Jacob Haimes: Mm-hmm.

[00:35:25] Su Cizem: I think, overall, I look at the summit as a success in those terms.

[00:35:32] Jacob Haimes: So the smaller wins that you were looking for did happen?

[00:35:38] Su Cizem: Yeah. I mean, the world is changing very rapidly, but we don't really have a way to come together and check where we are, and this, I think, did that slightly. Even just the fact that we were able to see how different the focus of India, and really the majority of the world, is on AI, versus what the [00:36:00] AI companies, who usually stay in Silicon Valley and don't leave the west coast of America, think about the world. I'm pretty unique in my way of thinking about this, and not a lot of people in our field share this viewpoint, but I think even that lack of frontier AI company focus and AI safety focus is a big signal. Okay, why is it that the AI summit isn't talking about this? It's because you are moving at a pace that's so much faster than what the rest of the world can keep up with. And we can't really set the rules for the room if there's so much lack of transparency and lack of coordination, and that gap is just going to keep widening and widening.

[00:36:48] Jacob Haimes: Yeah. So you touched on this a little bit already, but I think the idea of an AI safety [00:37:00] summit is great. But, as you've said, it seems to have been almost hijacked by this much bigger crowd of what I would call AI boosters. I think you said AI adoption, which is kind of what I'm thinking, though maybe a little more about access than what I'm thinking, and people who want to ride the wave of the hype and the technology. And I guess my question is: do you think the summit is still about safety? Or has it diverged?

[00:37:46] Su Cizem: I think the summit has diverged, but I don't think that necessarily makes it not about safety. Again, we started at a much smaller scale, with a much narrower focus, and it has since shifted. I guess one of the [00:38:00] main reasons for that is that this isn't a very formal process. It's not like a classic multilateral institution.
There's not really a consistent source of funding; there's not a steering body; there's not a secretariat. A lot of the time, the summit is about what the host country wants it to be. We saw that with Paris and we saw that with India. I think the successor is now going to be Switzerland, and it seems like it's going to be the United Arab Emirates after that. It seems to me that if the summit doesn't mature in a more consistent way, then yeah, it will be at risk of becoming just whatever the host country makes it about, and safety will continue to take a backseat until maybe one country decides that it shouldn't. I think if we're really honoring what [00:39:00] this really great initiative, started by the UK, has done for the world: it's given us a platform where people actually do put aside their differences. Countries that otherwise wouldn't come together, and stakeholders that otherwise wouldn't come together, are coming together and exchanging what has happened since the last summit, where we are now, and what the world is going to look like until the next summit, as these yearly horizon-scanning benchmarks. There's a lot of opportunity there, and a few tweaks would do it. For example, if Switzerland, when hosting the next summit, decides to formalize it in some ways, to say we want cross-summit continuity and here's our suggested process, I think that would be a good development. I also think platforming safety, having these very difficult conversations between the AI companies and civil society and the governments; these conversations still happen around the summits, they just don't have a formal platform. For example, civil society had a lot of room to actually pitch ideas and [00:40:00] host sessions, just like The Future Society did at the India summit; it just wasn't proposed by the governments themselves, they let civil society do it. There are probably a lot of hard questions that could be asked of AI company leaders, and a lot of very hard questions that could be asked of nation states about AI policy: what's the world going to look like, what's going to happen to most people, who is looking out for us? And maybe there is a platform to do that in an open and honest way during the AI summits. So, I don't know. I guess one more comment I have about this is that I do see the pendulum swinging back towards safety. Incidents are happening more and more in the world, people are seeing AI risks materialize, and it's not this distant capability in the future; it's right there with you. And I [00:41:00] think that will probably bring the conversation back to the forefront.

[00:41:04] Jacob Haimes: Yeah. One thing that I thought was interesting: you mentioned people putting aside their differences, and I just remember the video of Sam Altman and Dario not doing that

[00:41:17] Su Cizem: Yeah.

[00:41:19] Jacob Haimes: at the end there. So it's like, you know, some people put aside differences. Some don't.

[00:41:25] Su Cizem: Yeah.

[00:41:26] Jacob Haimes: Um.

[00:41:27] Su Cizem: But they still show up. I mean, I think that maybe even further proves the point I have in my head: those are just two individuals.
There are also countries that fundamentally don't trust each other, that avoid each other except in very, very few instances, and AI happened to be one of those instances. There's this really great thought experiment in philosophy; I forget the author. Let me think. Haidt, I forget his first name. [00:42:00] There was this book I came across, I don't remember what it was called, about how people could put aside their differences: they need to have something in the middle that they care about. And I think AI kind of did that for us. You could get along with a Republican or a Democrat even if you fundamentally think differently from them. If you're family, for example, at a family dinner with people around the table, you just need to behave, and there's an issue you need to discuss. You don't have to get into all of the other topics where you disagree, but there is maybe something in the middle that you care about. And I really think that AI, and AI safety, especially the catastrophic risks, and that's where this idea of red lines is, I think, really helpful: there are outcomes in the world, given the current situation and where we're going, that will be undesirable whoever you are. They will be undesirable for Sam and Dario, undesirable for China and the US and everybody else. And that maybe is going to happen because we just have not been able to have really hard conversations [00:43:00] at this point, while we still can.

[00:43:01] Jacob Haimes: Gotcha. And so yeah, thinking about the thing that we are coming together on: maybe it's that AI is important, but maybe it's also that there are things we can do together to prevent AI from doing certain specific bad things. And that's sort of the idea of AI red lines, is that correct? Maybe just talk to me a bit more about what AI red lines are and how you're involved.

[00:43:32] Su Cizem: Yeah, happy to. So the idea of AI red lines originated around, I mean, there's actually a really nice backstory. Around the Paris summit, The Future Society, before I was working with them, conducted I think the largest survey to date around the summit series about what the public and civil society want from it. And one of the top contenders was that people want red lines. At Seoul, [00:44:00] one of the declaration commitments was that we were going to have thresholds agreed upon by all the stakeholders, the AI companies and countries: things that we're just not going to do with AI, certain behaviors and use cases and AI capabilities that we deem undesirable. I can get into what that looks like specifically, but that was the general idea. People were saying: if the summit series accomplishes anything, at least we should talk about what the guardrails are, what the rules of the road are that we need to abide by if we want to go this full-acceleration route.

[00:44:38] Jacob Haimes: Why this framing of red lines? Because that's different from just, like, the things we don't do.

[00:44:47] Su Cizem: Yeah, I mean, I guess I never asked myself about the exact framing. I think red lines are a good way to think about things,
because you see a boundary, you see a threshold, and you [00:45:00] can point to it, because it's set, and you can approach it. That's kind of where it is: you either cross a red line or you approach it, and there's a kind of conceptual relationship you can have with red lines. It's very hard to think about if you immediately reject it as a concept, and there are other ways to think about it as well. You can think about unacceptable risks, and what the India workshop that we did with The Future Society revealed is that different countries actually are thinking about unacceptable risks; they just use their own lingo for it. In their laws they call it high risk, or severe risk, or unacceptable risk, and they have different degrees of risk that they're willing to tolerate. That's been the case in their law for a long time, not just about AI: there are things they just don't tolerate, and then there are things they don't tolerate because of AI as well. Some of them are adapting existing laws with that language, and some of them are [00:46:00] creating new laws, like the EU. All of that to say: red lines is just one epistemological concept, a frame we use for this category of risks where people just nod and go, yeah, obviously, we don't want that. It's the least common denominator; it's supposed to be so agreeable that there's very little objection to it. And we've been playing around with that idea. So after the Paris summit, again, I was working with the French Center for AI Safety. We were disappointed in the outcome, in the lack of attention on safety, the deplatforming of safety, and we wrote a statement. The statement was essentially: we think the AI summit series should focus on safety again for the next one. We circulated this statement, we got a lot of signatures from really high-level people, and we thought, okay, here's a group of people who share our frustration, and the impact this statement could have could maybe be [00:47:00] that in India we put safety back on the agenda. But then there was a bit of brainstorming with our team: is that really the actual goal? Or do we want an actual change in the world? Does it have to be limited to the summit? Can we just ask for something bigger? And then, with the research that we had, and conversations with different groups of people, different stakeholders, we landed on red lines. There was a big discussion that took a couple of months, and we decided this was an important policy area to focus on. Since the Paris summit we've been working with this concept of red lines: exploring how experts are thinking about it, how countries are thinking about it, what the categories are, what it can hope to achieve, how it can help people, and where we can make them happen. The campaign for AI red lines was launched around the UN General Assembly, [00:48:00] in 2025, so last September. Since then a lot of behind-the-scenes activities have been going on, and India was where we platformed it once again, just to see, from a broader perspective, what's actually happening in the world.
[00:48:18] Jacob Haimes: Gotcha. So there is this campaign, and I guess one thing I'm curious about is just: does it seem to actually have an effect? Are things actually changing? Is the conversation shifting, or is it really more that people are signing a call for red lines and then it's sort of put down again?

[00:48:51] Su Cizem: Yeah. I mean, I think yes and no. Are things happening? Things are happening. Are things shifting? Yes, things are shifting. [00:49:00] Is it immediate and very palpable? I think no, not to people who aren't paying attention to the right places. Again, all of this is going on against the backdrop of a very geopolitically difficult world. The idea of red lines, conceptually, is like: sure, put me in a room with anybody, we could agree on these risks and shake hands. But there are barriers. There are barriers that are political, there are barriers that are technical, and there are barriers that are just about what's realistic, given that you're talking about agreeing to prevent potential future unacceptable risks while, right now, people can't agree about current things that are happening that are unacceptable, and we can't put those aside. So against the backdrop of all of AI policy there is actual geopolitical policy that I think is really difficult right now. Anything that comes with this international, global, inclusive, "we're all going to come together and shake hands" framing is just very difficult to land. Also because people are quite skeptical: people are skeptical of their own governments' abilities to govern AI, and people are skeptical about multilateral platforms to govern AI and enforce agreements. Even if we make these agreements, they think, okay, well, how are we going to enforce and verify them? There are a lot of challenges. That being said, that aside, the lack of them existing is still a problem, and there are things we can do all across the AI stakeholder map to make this an easier task for the very, very high-level people, the heads of state or heads of AI companies. It looks a bit like this: we engage with this concept of red lines with different stakeholder groups, and it looks different for everybody. One thing we try to do [00:51:00] behind the scenes with other experts in our ecosystem is figure out what those categories of red lines are. Because I have my opinions about what I think they should be, and our team has their own opinions, but that's not really what red lines do; it has to be a conversation that comes from all of the relevant people. So what we did is a kind of horizon scan: what red lines are experts who work on AI safety in different fields, biology, economics, children's rights, education, thinking about? So we held the Athens Roundtable, which happened in December, where we gathered over a hundred people together to discuss exactly this.
We had just one question for everybody, which opened up this discussion: what are the risks that you are most worried about, that you deem the most [00:52:00] unacceptable, given your area of expertise? From there we got certain risk categories, and from those risk categories we asked: okay, what would thresholds look like? What would a red line for this specific instance look like? For self-harm, for example, how many people in the world? It's very difficult to do that. So already that revealed to us that even just setting those definitions, setting those boundaries in a way where people can point to them and say, yeah, sure, that's exactly when we'll say the red line has been crossed, is hard. That being said, the categories are there, and they look pretty much the same in most of the conversations we engage in. And there are some categories that aren't global. So when we ask the same question to policymakers, what do red lines look like in your jurisdiction, in your nation, you know, a lot of people care about children's [00:53:00] rights, a lot of countries care about their vulnerable populations, a lot of countries care about cybersecurity. And countries do also care about loss of control: when they use AI models in their government processes, when they use AI models in their education systems, they want to be able to rely on them, to be able to monitor the harms, to be able to respond to them. All of that to say: it's a very difficult task, but it's a necessary task, because at least it shows us some things about the world. Because if everybody is just pointing to something and saying, well, we can't define that: for example, for loss of control it's a very hard conversation to have. What would you define as a scenario of loss of control with an AI model? Is it when it starts to write its own code, or is it when the developers aren't able to interpret exactly what's going on, because people already can't do that? A lot of the time, when you try to set a red line, it [00:54:00] has probably already been crossed somewhere, in some capacity. So what do we do about it?

[00:54:05] Jacob Haimes: So, that's related to something I wanted to bring up, because a while ago you mentioned how some of these risks, some of these harms, are already materializing, and then, in your last response, you talked about red lines for things that are far in the future. And I guess the thing that I think is: well, there are actually a lot of harms happening right now, and if you create sufficient red lines, if you create sufficient stops, or, not stops, stops and mechanisms to enforce those stops, then you cut off a lot of the more concerning risks that are [00:55:00] further down the line. Even so, people don't seem to be talking about these risks, or about solutions to them, because the capability has already been reached and has already been folded into the model of how these AI companies, and the AI models they deploy, are being consumed and used. Is there a way to impose stricter regulations
if it's something that's already been crossed?

[00:55:41] Su Cizem: Yeah. I mean, yes, I think it is definitely harder to impose restrictions on thresholds that have already been crossed, but it has happened before; we have been in these positions before in the world. There are two answers to your question. One is about actual recurring incidents and how that's different from red lines in [00:56:00] the future. I think that's a slight misconception; I don't think those two things are very separate. Exactly as you said, catching instances now, incidents or harms or catastrophes caused by AI, as an indicator of us reaching thresholds: that's maybe the way I would think about it. Because if you were to really look at the world now, you could probably elicit a lot more dangerous capabilities out of models, and out of people's use cases of those models, than what we are assuming. I think a lot of the reason we're not talking about it is that most of the instances aren't being detected, and if they are detected, they're either concealed or not talked about enough. A really big barrier in the world is that there's just a big lack of awareness. There are a lot of reasons why this is the case. It's a lack of transparency on multiple counts, but also because we live in a very geopolitically dense and difficult world where I think most of the world is quite desensitized to [00:57:00] incidents. We hear about a lot of harms through people's social media use and through really escalatory situations in the media. There are a lot of very difficult narratives that people are grappling with all the time, and AI is, I think, sneaking in there a lot of the time. So I think there's definitely a virtue in having better incident monitoring and response, and awareness-raising that these risks are already materializing. We should be able to report them and respond to them in a coordinated manner; that would already make the world a slightly better place. I guess red lines provide the framing here, because you can say something like: if it keeps going this way, and you can maybe try to make some predictions, we're going to cross a threshold we won't be able to come back from. Maybe for some of the risks we'll find out that we're already [00:58:00] there, and for those we'll need much stronger response mechanisms; but maybe for some we just have to keep an eye on them, and there just have to be a couple of people monitoring them in the world. Without being able to actually put those topics on the table and have those discussions, it's going to be very hard to have a collective response. I think policy will then only ever have the role of responding to incidents, and, knowing the nature of the technology we're talking about, even if you put all of the technical talent in the world together, it will be very hard to do that.

[00:58:38] Jacob Haimes: Well, yeah, it's not a purely technical problem,

[00:58:40] Su Cizem: Yeah,

[00:58:41] Jacob Haimes: right? It's a socio-technical problem, and so you can't just say, it would be really convenient if it was just a technical problem, but I don't think it is.

[00:58:49] Su Cizem: Yeah. And it requires a lot of different groups of people working together.
That's another thing: we put all of these harms into the same unacceptable-risk category, but [00:59:00] each one of them requires a very different skill set. Those people have very different ways of working together, very different ways of cooperating. My first conception, when I was thinking about red lines, was: okay, there'll be an international treaty like we had for atomic weapons, and then we'll just have an agreement, and people will take that responsibility in their jurisdictions, and we'll meet once in a while. But it's not a world like that. I think it's more like: there are different categories of risks for different domains, and you get the best of the best, the most relevant people, together to constantly engage with that ecosystem and make sure this is taken care of. And then maybe there's a way to bring all of those together and have a global dialogue on it. But as far as what it will take to actually set them and enforce them, I think you need all the relevant actors focusing on them very narrowly. I think we're also seeing the [01:00:00] repercussions of not having had this dialogue sooner, because now we're actually in a position where AI is being used in military warfare, and there are incidents related to AI being misused in military warfare. There are accidents, and there are differences in ideologies about how a company wants its technology to be used versus how a government wants to use the technology. Where are we actually with capabilities and reliability, and where are they being applied? There are a lot of very difficult conversations that need to be had around actual, fundamental changes to the world. Military use is a really good example, because it used to be that the bigger, the more capable your military, the more that deters your adversaries from attacking you or posing a threat to you; military use of [01:01:00] technology, having the best access to technology, has always been a national security concern. We're seeing a world in which that might not be true anymore, because now you have people who have frontier access to these technologies saying no: if the technology itself isn't reliable, and we don't think it is, at least for these use cases, it's a harm to everybody involved. It's a harm to your government, and your military, and everybody else who's going to try to copy you as well. So again, the technology is moving very fast, but it's not moving very reliably, and every time we deploy it in a new context, AI behaves in ways we don't predict, and the stakes are just way too high to keep going like this.

[01:01:50] Jacob Haimes: Yeah, I mean, I do agree with you on that last bit, but I also think that to some extent it's worth pushing back against the idea that AI in [01:02:00] warfare is new, in a sense, because really the problem is lack of oversight, lack of engagement with the gravity of people's lives.
And if you go back, I think I mentioned this in a previous podcast, but in the 2010s there was a scandal where the US military was targeting people with drone strikes based purely on metadata. To me that isn't really different from some of the more modern instances of alleged misconduct in the use of AI in warfare. It's essentially the same thing happening, but with, sort of, [01:03:00] AI systems in the war in Gaza. Both of those are appalling. The fact that it's AI doesn't change much; it's that the incentives that have been constructed treat the loss of life as okay, and that's the problem. I don't know, that's sort of how I think about it. [01:03:30] Su Cizem: Yeah, I mean, it's a very difficult topic, also because most policy, at least national policy, tries to stay away from it. Another big change since the last summit is that AI really has become a national security concern for the countries involved. There's good reason for that, and I do think that, to a certain extent, thinking about AI as a national security concern is very helpful. I think [01:04:00] it makes countries take it more seriously, and they do put a bit more resources into at least monitoring what's going on. But it also does something quite harmful, which is that a national-security way of thinking about AI leaves very little room for cooperation, in the sense that it does two things. It forces you to think the worst of your adversaries: you're constantly looking for malicious behavior, for deception, so by the time you're in the same room with them, you've already gone through the whole spectrum of the things they're potentially going to do to you or have done wrong to you. And it promotes secrecy, so you stop being as transparent, and you take away from the public discourse. Again, there's a level [01:05:00] of this that's necessary, because there are a lot of problems in the world that existed way before we were born and way before we were talking about AI. But part of the problem now is that we're dealing with a new technology that's fundamentally different. Yes, there have been instances of mass surveillance, there have been instances of AI use in the military; this has been happening long before the situation in Gaza, basically for as long as governments could have gotten their hands on the technology, but a lot of it has been private. Now it's actually being used and deployed, and we can maybe try to have conversations because it's in front of us. But a lot of the assumption is that if it keeps going like this, it's not going to be a matter of national security, it's going to be a matter of international security, and there we don't really have any mechanism to respond. And the military is hard because, again, if you limit your own use [01:06:00] of AI in the military, you're sending a signal to your adversaries and allies that there is a limitation you've imposed on yourself. So this is a very hard one to establish, at least in the current world. Another thing is that countries
also want to use AI themselves. Even the AI Act, which is the most risk-based approach to AI policy and governance, doesn't touch the military; that's out of the scope of the AI Act. So the military is an easy one to point to because it's so obviously real, so obviously in front of us. We're grappling with the effects of it right now, and still we have very little that we can do about it, very few opportunities; only a few people would need to change their minds for this to be taken seriously. There are a lot of other red lines that I think are much more within our reach, as actors in civil society, as researchers, as people who work in AI companies: how will this technology be used, how will it be governed, do we have what it takes right now, and how can [01:07:00] we make this happen? Because if the answer is no, no, no, no, then that also sends a signal: okay, then maybe we shouldn't do it anymore, or at least not this way anymore. I think the best outcome is that we realize, in a really true way, with every coming summit, that we just need to do something. And not just you and me, who are kind of on the sidelines trying to push narratives, but the people who are actually making the decisions. I think recent incidents are getting us there, slightly. [01:07:33] Jacob Haimes: Yeah. I guess then, being more skeptical and untrusting, as I am, a concern that I have is that having a red line almost implicitly accepts everything below it [01:07:50] Su Cizem: Mm. [01:07:51] Jacob Haimes: um, [01:07:53] Su Cizem: [01:08:00] Mm-hmm. [01:08:11] Jacob Haimes: like, it says the times when AI is used and it results in self-harm are okay, because there isn't a red line there. Now, obviously I don't think that, but just sending that signal may potentially be a bad thing. I guess I'm curious [01:08:31] Su Cizem: Okay. [01:08:31] Jacob Haimes: about how you think about the potential trade-offs there, or whether you think that isn't as much of a concern. [01:08:37] Su Cizem: Yeah, actually, I was expecting to get this question a lot more once this work became more public. One fun, more philosophical problem with red lines is exactly what you said: maybe there's this perception that if you put something out in the world that people can point to, then you're in some way excusing everything that comes before it. Obviously that's not what the concept of red lines [01:09:00] is meant to do. The alternative, without it, is that there's nothing to point to, and each single incident, with all of its complexity and nuance, becomes permissible in some way. I mean, AI contributing to self-harm and suicide, these are things that are actually happening, and obviously even one case should be a red line. But they're happening, and they're probably going to keep happening more and more, and maybe there will come a point, some percentage, where people start paying attention. So it's a very hard world that we live in, but the question is: at what point do you react? Obviously it shouldn't be when you're crossing the threshold.
And that's maybe the main thing: we should already be acting on the way there, at moments where we can intercept, and we do have ways of intercepting, well before the point of crossing the threshold. I think about that as the point of no return: it's so bad that [01:10:00] anything above it, you can't even come back from. That doesn't mean everything before it is permissible; it's just our indicator that we're heading in a dangerous direction, and maybe we're already there. I think thinking about it in terms of risk thresholds is good. There are the unacceptable risks, but before that you have high risks, medium-tier risks, low risks; this is more how the EU AI Act thinks about it. All of those require different levels of monitoring and evaluation, and they have different repercussions. I think already by the time you're at a high-risk system, everybody should kind of be on board, monitoring it and responding to it as it approaches. And the alternative, again, is that the world isn't a better place for the lack of this. [01:10:51] Jacob Haimes: Yeah. Okay. No, I think that makes sense. I guess, also thinking about sort [01:11:00] of ways that red lines might not do what we want them to: do you think that current voluntary commitments, the voluntary safety and security frameworks established by some of the AI labs, for example, tell us anything about the prospective efficacy or value of red lines? Like, can we learn from the current implementations about how we need to treat an implementation or a framework related to red lines? [01:11:42] Su Cizem: Yeah, I mean, I think the responsible scaling policies were a great development as far as company-level governance goes: how are companies actually thinking about governing their own models, but also the information around their [01:12:00] models? Who has access to models, what are the pain points, who notifies whom? Every organization is shaped differently; every organization's information sharing, every organization's hierarchy looks different. So there isn't a one-size-fits-all model, but these companies are where the frontier AIs are, where the cutting edge of the technology is. And probably, unless we're massively missing something, the bigger and more powerful the companies get, the better the models will get. So even beyond country governance or global governance, a lot of the visibility that we have right now is coming from the companies themselves, from what they choose to tell us publicly. That's something that's very hard to engage with, because as a civil society actor or a [01:13:00] government actor, it's very hard to think about AI policy for what it actually is. You're trying to engage with a technology that you don't have access to, in an organization that you don't have access to, and even if you did, you wouldn't really understand exactly what it's capable of, because once it enters the world, it proliferates in very different ways. So you have to have a lot of foresight, and you have to have a lot of imagination.
And I think the companies are very limited in how much they can do that. They also have an organization to run, and they have different incentives. So companies alone can't govern their own technology. They can govern what goes on behind closed doors and who has access to that information; I think they can do, and do, a lot of good cybersecurity and, I guess, employee management. But then there are these off cases where you have a [01:14:00] whistleblower or two who says, actually, they're not doing what you think they're doing. So we get a little bit of insight through people like that, or the technology is just released into the world and they kind of watch what happens with it and try their best. So no single actor in this whole landscape, I think, is positioned to respond to it. I think companies have a responsibility, because they're building the technology, growing it, and releasing it, to get as much help from the rest of society as possible, to lean on the experts outside their organizations. It's not a purely technical issue, and it's an issue that's beyond their own capacities. It depends on which country they're based in and who that country's allies are. But then what do you do if your technology is being used by a country that isn't an ally? So there are also a lot of limitations on what AI company employees can report. Who can they report to? Who does this information go to? [01:15:00] Who do they trust? It's not always the case that every single employee trusts the government either. So there's a reason why there's a balance of different stakeholders. So yeah, the responsible scaling policies give us some insight, but they're voluntary; we're completely beholden to the position of the company at any given time. It has been the case that companies have been pretty cooperative, at least to some degree, better than they were before, as a result of these summits. I do think there's also a lot of appetite within the companies for some red lines. We've seen that with this recent open letter signed by Anthropic and OpenAI employees against the Pentagon situation. [01:16:00] [01:17:00] [01:18:00] I think, yeah, we could just do the [01:19:00] world more service by cooperating with all of the stakeholders and not leaving any single party to govern its own technology. [01:19:07] Jacob Haimes: Gotcha. Yeah, I guess the thing that it indicates most strongly to me, and I'd like to get your take on this, is that we need enforcement mechanisms of some kind, and they can't be purely internal, and the red lines can't be subject to easy modification. So, for example, Anthropic recently said in one of their model release cards, or whatever, that they resorted to what was essentially an internal vibe check as to whether the model could be used in some way that they didn't deem acceptable. [01:19:55] Su Cizem: Hmm. [01:19:56] Jacob Haimes: And they were just like, yep, [01:20:00] no one said that they thought it could be used in this way, so we're good, even though it passed all of the benchmarks that we had created to try to... [01:20:09] Su Cizem: Yeah.
[01:20:11] Jacob Haimes: And obviously we only know that because they shared it, which is good, but I'm sure similar things are happening at other organizations. I have some, uh, yeah. [01:20:24] Su Cizem: Yeah, no, absolutely. And I think something that's also quite frightening is this: with the responsible scaling policies, again, at least when they release them we get an idea of the kinds of things they're looking for. But the most information I've been able to get out of responsible scaling policies is: if we detect this, then we have a plan internally to respond to it. We don't know what that plan is. Again, there is also an info hazard in all of this: the more information you release about your internal processes around a really powerful technology, the more information could fall into malicious [01:21:00] hands, and there are bad actors in the world, so there's a balance you have to strike. It's a very difficult balance; I don't think people in AI companies who think about safety have an easy job. That being said, most of the time they do release the models, even when they pass certain security thresholds. And again, this is a narrative that was constructed entirely around the summits, the Seoul Summit especially; this whole concept of responsible scaling policies didn't exist before then. All of these security levels were essentially devised by one think tank. All of this that we're grappling with, which I do think has been a good development, at least it gives you and me something to talk about and to say it could be better, has been constructed over the last couple of years. This is a very new space, but most of the time I think the disposition is to just kind of see how things play out. And I just think there's only so much of that we can get away with before we hit some kind of uncharted territory where we can't come [01:22:00] back to how things were, or go somewhere that's better; our options will be much, much more limited. So a little bit of anticipatory governance is necessary, and anticipatory governance is what gives us language to use, to be able to say things like: wouldn't it be nice to live in a world that didn't have X, Y, and Z concern? Yes. So how do we get there? And if enforcement and verification are really the bottleneck, there are things we can do: there's very good research on enforcement, there's very good research on verification. How it plays out is just a question of that research reaching the right audiences. Or maybe there's a big gap, verification especially. If you're technical talent, you care about safety, you're looking for an area to work in, and this is really the gap in the world, then you can steer your talent there, steer your investments there, and see a whole ecosystem flourish. I really do see a better world than this one, where the AI safety [01:23:00] ecosystem is one of the most profitable industries. If you're in the US and you're really good technical talent and you care about AI safety, I think you'd want to work in an AI company, because they have access to the technology, they have the resources, and you'll basically get to play with it and contribute to some degree.
Now, if you're the same kind of profile of person, but you live in the UK and you care about public service, let's say, then maybe you go to the UK AISI, for X, Y, and Z reasons. So people have very different reasons for wanting to work in different specific parts of this stakeholder map, and I think that's where governance can also help. You can say: listen, if you make this an industry that people want to get into, an ecosystem with funding, with research competitions, there are things you can do as a government to help it proliferate. So anyway, I would rather know what those bottlenecks are, so we can think about ways to address them, than avoid [01:24:00] the conversation. [01:24:01] Jacob Haimes: Yeah, no, I definitely get that. So what is something about the policy space, and the people you work with, that you didn't expect initially? [01:24:15] Su Cizem: Yeah. I mean, from the outside looking in, everything seems so set, like it's all been there since before I was born and is probably going to stay there. But I think I underestimated how much impact you can have just by changing the mind of one relevant person, and that person changing the mind of someone else, and how human governance actually is; these are still people. That was something quite jarring in the very beginning, because I've always been somebody who really respects institutions and formality, and I respond really well to them for some reason. It just seemed like a much bigger [01:25:00] feat than it actually is. What's actually happening is a couple of people putting their heads together, spending time together in a shared space, and coming up with solutions, and behind them is a whole army of people who have put in a lot of work and effort just for those people to come together and shake hands. That, I think, is such a human trait. After spending a year engaging with a bit more of a technical crowd, I had really forgotten that what we're doing is actually for humanity; we're doing something for humanity, by humanity, for the time being. So don't underestimate your own impact: sometimes just being in the right room at the right time, saying the right thing, befriending someone who later turns out to become someone important, the very human things that we oftentimes overlook end up mattering, which I think is a good thing. [01:26:00] And don't look at anything as, yeah... [01:26:02] Jacob Haimes: Oh, I was just gonna say, also don't put pressure on yourself to always be doing that, like, every single thing that I do matters, that kind of thing. Because I could see someone taking what you just said and feeling like, oh no, everything I do matters, I need to be performing at a hundred percent all the time, always. And I don't think that's the correct way to execute on the advice that Su was just giving. [01:26:31] Su Cizem: Thank you for that, yeah. I mean, maybe there's a better way to say it. What I'll lean towards is this:
I think governance has something very special about it, in the sense that it is a very human thing that we're trying to do. We're trying to [01:26:49] Jacob Haimes: Yeah. [01:26:50] Su Cizem: preserve the things that we think need to be preserved about humanity, allow room for it to flourish, and try to minimize harm, to the best of our capacity, [01:27:00] at least for the groups of people we care about. I think that always grounds me. It's not that everybody is watching your every move and everything you do matters. It's more that really engaging in the governance space as someone with compassion for the world pays off, because you get a lot back from it. It's not that you have [01:27:19] Jacob Haimes: Mm-hmm. [01:27:19] Su Cizem: to be very strategic and talk to the right people; it's more just: here's someone who could be doing something else with their time, but they're here today, giving you the time of day, because they care about something that you also care about. That constantly grounds me in my work and in how I engage with the policy space. Even people who are just trying to enter the space: the fact that, on the spectrum of all the activities they could be doing, this is what they chose to do, that's, I think, a special thing to nurture in the world, and to create room for. [01:27:51] Jacob Haimes: What do you think needs the most support right now? Within AI safety, not necessarily within AI [01:28:00] policy and governance, but it can be. [01:28:02] Su Cizem: Mm. I really would point to red lines and incident monitoring. Because people just assume they're difficult to do, they try to focus on other things, but I think the world would already be a much better place if we at least had more people focusing on this and working on this. A lot more buy-in from the companies, a lot more buy-in from the US and China, and a lot more political will, that would be great. It would be nice if people didn't look at the world and everything that's happening, which is completely disorienting, and just throw in the towel; if the will was still preserved despite all of that, the sense that we still need to do X, Y, and Z. The two projects I really believe in are those: at least for the certain category of risks where we can all [01:29:00] nod our heads and say yes, can we do something better than what we have right now? [01:29:05] Jacob Haimes: Gotcha. And then one last question: what is your favorite part about what you do? [01:29:11] Su Cizem: I think my favorite part about what I do is that I feel like I've found a group of people in the world who care about the same kinds of things that I do and are actively trying to contribute. Being around, and identifying myself with, that group of people gives me a lot of job satisfaction. My teammates are really great; they're people I genuinely like to be around, and respect and admire, and they're always teaching me new things I wasn't aware of before. So it's really funny that, working in AI policy and AI safety, it ends up being [01:30:00] a very human element that keeps me here.
I also always ask myself: what are the chances that I was born now, and not 50 years ago, or 50 years in the future, or a hundred years ago, or a hundred years in the future? Why now? And I get a lot of satisfaction from trying to justify what I'm doing with my life based on that question. Because I look at what's going on in the world: what are the chances that, as this development happens, I am just entering my career, or I'm located in the UK, or I have a US passport? All of these questions that I would otherwise, you know... I get a lot of satisfaction from being able to point to my job and say, okay, this is kind of my answer to that. [01:30:58] Jacob Haimes: Gotcha. Well, Su, thank [01:31:00] you so much for joining me. It was great talking to you. I really appreciate it. [01:31:05] Su Cizem: Thank you, Jacob. It was so nice to see you again, and yeah, thank you so much. I'm sorry if I went on too long, but yeah, thank you for two hours of your time. I appreciate it. [01:31:16] Jacob Haimes: Oh, you know.