Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.
Alix: [00:00:00] Hey there. Welcome to Computer Says Maybe. This is your host, Alix Dunn, and I am coming at you jet-lagged after a really amazing few days at MozFest last weekend in Barcelona, and we are gonna have some episodes coming off the back of this. I got to sit down with like eight or nine people that were all amazing in different ways.
Alix: People like the CEO of The Onion, Ben Collins. I got to sit down with Abeba Birhane, a pod fave, um, and lots of other people who kind of came at technology from so many different directions, but it was all political, it was all creative, it was all interesting. And I think Mozilla did a really good job of putting on a really fun few days.
Alix: I feel like it may be the first time it felt like a proper festival. I don't know. Anyway, MozFest was great, more from us on that soon. But in this episode, we are going to dig into the role of [00:01:00] video in our lives, which maybe when you're thinking video, if you're as old as I am, maybe you're thinking DVDs or VHSs.
Alix: But we're gonna get into all of it with one of my favorite people to talk to about video. Sam Gregory, who's the executive director of WITNESS, and they've been working since the early nineties. And if you think back to that arc of the last 35 or so years, it's just changed the role that video has played in our lives in the world.
Alix: And they have been a part of basically all of it. Um, Sam, I wanted to have on the show because he has some really interesting perspective on Sora 2. And basically it felt like they had been preparing the world, or trying to prepare the world, uh, for the moment when something like Sora 2 was released.
Alix: Um, so we get into that with him and so we'll start there. So let's go to Sam Gregory, who's gonna start off with a little bit of analysis on what Sora 2 means. [00:02:00]
Sam: I'm Sam Gregory. I'm the Executive Director of the human rights network WITNESS, which works to support human rights defenders and journalists who use audiovisual technologies and increasingly helps them navigate an AI-mediated information world.
Alix: So when Sora launched, I don't know, it felt like a shot across the bow.
Alix: It didn't feel as cataclysmic as it felt when Sora 2, when Sora 2 came out. I don't know. Did you have that feeling that, like, Sora was them kind of trial-ballooning this thing, and then Sora 2 was like the thing you'd been worrying about for a long time?
Sam: I'd say that there's a distinction between the two, which is also about the pace of innovation.
Sam: I'm doing scare quotes here. Sorry. Can't see that on a...
Alix: We love a good innovation scare quote. Yeah, carry on.
Sam: You know, which is, Sora at the time was really impressive. I was impressed by it, right? Like the realism, the real-world physics, the, the generativity. Then you had Veo come out from Google that added in audio.
Sam: I think the thing that's probably important with what's happened here, the two things that I think are important with the [00:03:00] recent release, you have an app, right? So there's Sora, the app, and then there's Sora two, the kind of video generative tool. And the video generative tool is just another iteration of this, right?
Sam: It allows you to do audio, greater storyboarding control. What you've got in the app and in the tool is the ability to have easy likeness simulation and falsification and duplication, right? And also the appification, right? And I think that drives a different pattern of usage. And so my frustration with, say, the Sora app release is that the competition to get to something that is measurable from a commercial point of view, like a number one app position, versus we've released the latest version of a generative video tool, like the latest 2s, 3s, 3.1, 3.2, whatever, you know, is that it normalizes the behavior.
Sam: That is, and the way that's playing out, so what I mean by that is the Sora app normalized identity theft and likeness theft for living people, dead people and everyone, right? Like it just said, [00:04:00] hey, this is fine until you complain. It normalized falsification of events as being a humorous act that you do in your app-like context when the safeguards weren't in place.
Sam: We know the watermarks are easily removable, visible watermarks are incredibly hard to hold onto and easily cropped. And we know that although they put in C2PA data, which is great, they showed that it was made with AI. We also know that it wouldn't have shown up in other venues. So it's sort of like it's there, but we also know it's not gonna show up when people actually watch it.
Sam: The app choice drives both a set of normalization of things that I think are deeply harmful, like a normalization that it's okay to deceive people about what's authentic and what is synthetic, a normalization of likeness theft. That also encourages others to think that, and it drives this acceleration of, like,
Sam: let's move this into something that we should see as our, our norm of communication and a primary mode of communication. So it's less about the advances in generative video, which I think some of them are positive. Like, WITNESS is a video-centric organization. We live in a video-centric world. Many of my team are thinking about like, how do we [00:05:00] do better video storytelling, better accessibility of video, use generative video
Sam: when you can't get into a scene of a, of a violation, use it to tell personalized stories. So we're not actually negative about AI video conceptually. There are lots of questions about how we do it and, like, the rights implications and the human rights implications. It's more the choices that have been made about what are the parameters around this and what do we normalize that worry me, and its broader implications, particularly this sort of epistemic implication that is already there.
Sam: We live in a world where trust in a shared reality is already pretty frayed, right? Making choices that are about fraying that further and also fraying, you know, our control of our own likeness further, feel deeply cavalier, careless, and inappropriate.
Alix: I really struggle with how Sam Altman positions his role too.
Alix: So when talking about these things. He's so techno deterministic about the trajectory of the future. Like he says things like, yeah, you know, there's gonna be like, it's gonna [00:06:00] be a bumpy ride, but eventually this is just how things are gonna be. So people like, once we iron out all those bumps and people get used to it, it's all gonna be better or fine.
Alix: Or like eventually we'll just like accept it or something. And I just find this premise that someone like him through rapid advances and product roadmap is gonna just drive society into a particular place. Because that was like always gonna be the place we get driven to is like just such a false premise and also such a seizure of social power.
Alix: There's so much hubris in the way he talks about it too, that it's just this casual, I don't know, like just kind of deal with it guys and it's gonna be really cool on the other end of it, you know? And I don't know, like have, has he been cha, I mean, are there advocacy attempts to kind of narratively frame that for what it is?
Sam: I think the idea that there's a hubris and an almost an offensive egotism is obviously not just Sam Altman, and [00:07:00] I think it's like, sure, you know, the back, the backstory of Karen Hao's Empire of AI is I think kind of articulating how a single person's hubris and lack of ability or willingness to understand how other people
Sam: perhaps want and depend on other parts of the vision they're trying to destabilize is clear. So I don't know if he's heard that directly. And I, and I do think part of what we need to continue to do is like articulate why there are things that are lost with this and why there is collective value in this.
Sam: I don't think we can give up on persuading people that epistemic fracture the loss of a shared reality. Actually that's pretty important. And the people it harms most are the people who have the least voice, the least respect for their voice, the least agency, right? And he just doesn't get that right.
Sam: And that's always been the Silicon Valley problem. The inability to recognize that other people don't live in your reality, don't live with, uh, the affordances you have to just float on by even if something goes wrong. And in our work, we, for the last couple of years we've been running, you know, our Deepfakes Rapid Response Force, doing much more direct work with [00:08:00] people around the impact of AI.
Sam: And you can see the slipperiness of trust in information, trust in images, trust in video that is happening, right? Like you see it very clearly. And then when you map that also to the societal trends we are seeing, this doesn't feel like something we should be just like, it's fine if we lose this, we'll work it out.
Sam: And certainly not without a much broader shared way of like, this is how we're gonna work it out, you know? So I'm not gonna be techno deterministic myself and say, if we had like, you know, a shared way of understanding in every piece of content and communication, the recipe of AI and human, that that fixes our epistemic problem.
Sam: But at least it gives us a foundation for knowing, okay, I can navigate more clearly the environment I'm in. And I am not seeing that commitment to actually address these in the same prioritization as getting to AGI or getting to market dominance or driving an advance that might move them towards that.
Sam: Right? Like a lot of the work on AI video is about understanding real world physics so you can have better robotics. So you, there's like an instrumentalism to destroying a sense of reality that is about this bigger [00:09:00] goal, right? That is not a shared goal by most people in the world. It is not a shared goal for people not to have a shared basis of fact.
Sam: Like I think most people don't want to be stuck in a world where they have to just guess what is real and what is synthetic. There's no evidence that's how cognitively we wanna work. Most people, like, our cognitive load is, we like to have as many things as possible that we can recognize and not have to engage cognitively with.
Sam: So we don't wanna have to disbelieve everything we see. We don't wanna have to disbelieve everything we hear. We don't wanna have to research everything. That's just not how people work, right? We need as humans to be able to do that only for a subset of what happens and, and exercise our skepticism. And, and I see very little attention to try and say, actually, how are people gonna navigate this world?
Sam: Where the solutions that are often proposed, and not that OpenAI is proposing this, but like in our sort of contemporary discourse of everyone just needs to look harder, listen harder, you know, individual vigilance, we'll kind of like, we'll be able to make it through is like, really, have you met any of us?
Sam: Right. No human is like that. Not even people who spend their [00:10:00] lives trying to spot deep fakes. They don't do that as, they're like scrolling their feed. They don't want to do it. It's not appropriate. So that hubris, that inability to, to listen to a broader set of stakeholders who actually can articulate why this will be harmful to them, and also the inability to invest in the things that actually will address something.
Sam: That is clear. Even if we know that we might change some of our assumptions, right? Like as I said, like even in witnesses work, we've never said, seeing is believing and said that you are always gonna assume that you see something and you trust it, right? But we do need to have some understanding that we can assess visual truth in a world that's highly dependent on seeing things and sharing things.
Sam: It's a, you know, a visual centric world. You can adapt to a newer reality as we have with previous communication tools. So I'm not like, this is completely different from anything we've done before, but there's a big difference between let's work out how we see and believe things in a world that is more complex to don't believe anything you ever see, ever.
Sam: Right? And let's work out how that works for us.
Alix: Okay. So now that we've dug into [00:11:00] how video's affecting our lives today, this river of AI slop, um, and the capacity to very easily just copy someone's likeness and make video content that appears to come from them, I want you to kind of, I don't know, rewind.
Alix: Imagine going back, uh, to the early nineties when making video was an extremely expensive endeavor, um, largely done in movie studios and for TV. And imagine a horrific incident of police brutality happening. Uh, and nobody has like smartphones around to capture that. But we happen to have someone that has a camcorder and they're able to capture that moment on camera.
Alix: What happens in society when there is this documentation of a rights violation that is so egregious and galvanizing around something that people knew was happening, but that there's not that much evidence of, are people going to believe that it's happened? How might that video be used? And what effect might it have on the social conversation about a [00:12:00] really deep structural violence issue?
Alix: Let's go all the way back. It's 1992. That was when Rodney King was, was that, was it 92?
Sam: Uh, Rodney King was in 1991, so 91. Okay. Um, March, 1991, George Holliday, who's, uh, just a guy with a, a video camera. Really just a guy looks out from his balcony and or his window and spots the LAPD striking Rodney King 50-plus times after a traffic stop, and he records it.
Sam: It was the impetus, obviously, for WITNESS being created, but it's also a really salutary lesson, right from the start, that like what you think, when it's documented on camera, should be like the most transparent, clear evidence of a human rights violation doesn't always stand up unless it's placed in a narrative that's supportive of that or an evidentiary context.
Alix: So what happened when, so he takes the video, he gives it to media first or to the courts.
Sam: He, he shared it with a news outlet first, if I recall, and then of course later it makes it to the courts. Now, of course, the [00:13:00] Rodney King story is surrounded by so many different things, right? Obviously there were, uh, public disturbances around it.
Sam: There was public anger, there was so much happened around it. And then when it makes it to the court, of course the defense of the police officers sort of breaks it down. They go shot by shot. They really try and deconstruct this visual evidence and it ends up not being central to any prosecution. Right?
Sam: And so
Alix: it's, it's, I didn't know that. Yeah, it's, so it wasn't actually a useful piece of evidence in It was. It was.
Sam: It was part of it. But the police officers are not prosecuted in that case, ultimately. Well, they're prosecuted but not convicted. Exactly. Right There, there's a prosecution. They're not convicted.
Sam: I, I'm hoping I'm remembering right. I haven't thought about the Rodney King story, it's, sorry, this is like, it's so long. But it's, but it, but, but what it's like the sort of salutary lesson, which was visible even then when there's not that much handheld video, not that much documentation, is, it's not just enough to have the video.
Sam: There's a funny dynamic in WITNESS of, like, we used to use the slogan seeing is believing, and I would always put up my hand and say, not always, and [00:14:00] not always when actually, in terms of making a difference. That's a really extreme example of how you sort of deconstruct a video to oppose it. It's also sort of a salutary lesson that, you know, the video has to come within the context of a, an audience or a public that's ready to see it, a mechanism that's gonna turn it into accountability, and in some cases, unfortunately, a victim that, you know, fits what the expectations are of what a judicial system or society is willing to support.
Alix: Okay, I'm just pulling this up. So yes, the video was shared with a news station before it was shared with the courts and submitted as evidence. Four officers were tried. Three of them were acquitted. The jury failed to reach a verdict on the one charge for a fourth, and then the LA riots started after the acquittal, in enraged response to something that probably for most
Alix: Black members of the community in LA, you know, they did see the video and they were like, uh, it was on tape.
Sam: Yeah. And, and, and, and that's the [00:15:00] huge dynamic around video that is, you know, as we've come closer to the present day of like, do we need to show the evidence of something we all know happens? Right? And this is obviously true of police brutality on black Americans, right?
Sam: This is a reality we've experienced. We don't need the proof. Conversely, like. Clearly there are scenarios where sadly, you still do need to show it to audiences who are external and, and I don't wanna undermine the role that in other context video can play and has played over the past decades. That in the right context, it can be a very powerful tool.
Sam: And, and that's part of the crisis of today is we're undermining, and it's the philosopher Regina Rini talks about the epistemic backstop. We've had the idea that like you go to the tape. Yeah, when you have, you know, you don't go to the tape now, you go to the digital file when you're trying to like prove something and, and largely our courts have been ready to accept that it's been kind of the way you prove reality is to show the video.
Sam: And so even when it's not been winning in a court with showing visual imagery, like often it's the way we shape public [00:16:00] opinion, it's how we agree on what happened in an event.
Alix: I hadn't really thought about that until this very second, which maybe it sounds obvious, but that it's the disconnect of perception between an audience person, a lay person watching a video, and then a court proceeding where time slows down into a very procedural like deconstruction and reconstruction of what truth is.
Alix: And that the outcomes can probably be just so dramatically different for a lay person in terms of what they might otherwise expect, and that that might make it hard to kind of manage processes of accountability for the purposes of the public understanding what just happened.
Sam: Yeah. And courts are typically adversarial, right?
Sam: So the whole point is to deconstruct rather than to take something on face value, right? So you're deconstructing someone's testimony, you're deconstructing, so to speak, a video which places all the emphasis. And like, I think this is another of those sort of themes that's cut across our work over many years is like, how do you prove something is real?
Sam: How do you prove something's authentic? And historically that's been a challenge to prove it in a courtroom. And now we have that challenge of how do we prove [00:17:00] that a video of an event is real when it's so easy to deny it? And how do we know that that TikTok video that claims to show something almost too absurd to be true is real or is generated with Sora?
Alix: Okay, so, so in this context, so it was 1992, the LA riots happened. You have this video, you have this court proceeding that's like trying to deduce fact, so then witness is born like right thereafter. Was this like how, how did that happen?
Sam: WITNESS was born in the years after the Rodney King incident, and it was a result of, we were founded largely by the actions of the musician Peter Gabriel, who had been traveling in the late eighties as part of the Amnesty music tours, right?
Sam: These big jamborees of celebration of human rights that traveled globally. And it was interesting because he saw both the Rodney King incident as evidence of the power of handheld video. And he'd had a handheld video camera he'd been traveling with. But also he realized that a lot of the testimonies and stories of human rights defenders he'd met during the tour were not broadly known.
Sam: So actually we came out of a, this power of this idea of the handheld video, right? We're all gonna have [00:18:00] video cameras, but like simultaneously the idea that it's not just about the visual evidence of atrocity, but also the stories of human rights defenders and that both play a part in this. Right, that it's actually about narrative as well as visual evidence.
Sam: And that obviously the two sometimes intermingle, but for different audience they matter. So we were born in the, those early 1990s out of the idea of how do you use visual evidence, you know, that is now perhaps more widely available of the things that many communities knew already happened, like police brutality in America.
Sam: But also that there are stories being told by human rights defenders in communities that are simply not known and that need to be seen and heard. And so our first decade was, you know, we were tiny, right? Like witness was like three or four people atmos trying to think about what it meant to get cameras in the hands of human rights defenders then.
Sam: What's the next step? And I think that's always the question is like when someone films something, what does it matter? Right? How do you use it? At least that first vision was quite heavily centered on like [00:19:00] how do you film stuff and then hope it gets in the news media, which is a real crapshoot, right?
Sam: The news media in most of the places where human rights defenders work isn't interested in showing compromising footage of human rights violations. And then the second part of it is like, how do you make sort of short advocacy documentary that might speak to different audiences, right? Because this is pre-internet, so you don't have the ability just to like broadcast, right?
Sam: We have the ability to create but not to broadcast. Right? So how do you do that when you can create but not broadcast is think about who has broadcast power. That's the media who has spaces where material can land and be shown. That might be a public hearing or a evidentiary space where you can show a video, but it's not about broadcast and it's not about the idea that you're mobilizing broader populations because there simply aren't many options for that.
Alix: That's so interesting. So there's two things, or two buckets of things there that I find really interesting. One is the difference between producing video for the purpose of evidence and accountability systems, and then the purpose of transparency. So just like sharing [00:20:00] or mobilizing around it, which obviously has like very different
Alix: implications for like how you would collect it, who you would want to be equipped with the ability to collect it, how to manage people's expectations about what happened. One of the biggest challenges I see is that more and more people, I think, had an expectation that if you did capture some type of human rights violation on camera, that like, that was it, you know, that was like something was gonna happen.
Alix: Um, and that that expectation grew and wasn't met at all. So you wanna talk a little bit, I have another one, but like, do you wanna talk a little bit about that tension or in the witness's work, thinking about those two use cases of like why one would want video.
Sam: Much of my early career, and I've worked at WITNESS 25 years, so I have like this kind of viewpoint that goes from before we had internet video, uh, streaming. Evidence is hard: knowing what is actually gonna be useful in a court, getting access. And I think actually the early assumptions in WITNESS were it's gonna be evidence or it's gonna be news media, and both of those are really hard.
Sam: Right. And in fact, a lot of the [00:21:00] first decade of WITNESS, and then into the two thousands before we had mobile video, was very much actually shifted to focusing on what I would almost describe as the third thing, not the two you described, which is advocacy videos: how do you, for a very specific audience, engage them for a very specific purpose, right?
Sam: You're trying to get a un monitoring committee to understand really the patterns of abuse by a police force against a community. You're trying to engage a decision maker to, with a compelling argument about why a pattern of incarceration is inappropriate or harmful and or maybe too expensive, right?
Sam: Whatever the argument is you're trying to make, and so. I think a lot of like the space that was found early on was like what we would describe as video advocacy. This kind of like, you actually need to find all of the audiences in the middle space, right? That's everyone that isn't an audience of millions or thousands.
Sam: It's the audience of tens and hundreds of people who aren't always far away from the communities at risk. Like video advocacy doesn't mean you need to reach, you know, your president. It might mean mobilizing your [00:22:00] community. It might be persuading a local council. And so it's very much outta that sort of video activist, video advocacy moment where you are actually realizing the limitations of evidence.
Sam: It's hard. There aren't that many venues to do it in. Most activists don't have access to them or don't trust their court systems, right? The courts may not be the place you go for justice, right? They're the place that prosecutes you, not the place you get justice for your issues. And then the news media simultaneously, right?
Sam: News media is corporate captured, it's government captured, or it's not interested in these issues 'cause they don't have commercial value. And so many of the founding assumptions at WITNESS were around evidence and news. And then the progress was to think, how do you do this very targeted, smart narrowcasting, right?
Sam: Like how do you, not broadcast, but narrow cast to particular audiences. Of course, that's also still existing in an environment. Where you are, knowing there's not a huge surplus of content. And you also know that you can't access the broadcast outlets. So you're trying to bring evidence or narrative, [00:23:00] structured narrative and evidence to particular audiences to influence them and and that was really, you know, in some sense the bread and butter of how we thought about our work in the early two thousands.
Sam: Right. How do you do that? And it's because of those limitations of the evidentiary venues and the limitations of the news media.
Alix: That's really interesting. I really like that frame too. I think narrowcasting is a nice word for that. So like as things evolve from narrowcasting to broadcasting, 'cause this idea of pre-YouTube, I guess, or pre-
Alix: internet platforms that allow you to share video easily, and we have sufficient bandwidth for people to engage with those videos. I hadn't quite thought about how much you'd rely on traditional architecture of media reach to get that message out. It makes loads of sense. How did you reimagine the strategies for human rights advocates when all of a sudden there was this growing capacity to be able to broadcast?
Sam: Yeah, and I, and I think it's worth noting that like in the late nineties onwards, you have the indie media movement. Clearly like this broader video activist movement that was thinking [00:24:00] about community-based screenings. Obviously a long history that predates this, right? We as witness drop into a long pre-history of community media or video activism that dates back to the forties and fifties, right?
Sam: And I think that's always important to historicize everything, which includes right now is like, you know, you mean video? Didn't start with YouTube. It didn't, I don't understand. Video didn't start with YouTube and you know, and, and video didn't start with Rodney King, like thinking about like kind of community-based media making.
Sam: I think that's the most interesting space, where you've just obviously got tremendous innovation in Latin America, dating back decades, of community-based communication to mobilize, to organize, and also innovation in public broadcasting. Like a good example is the National Film Board's work in Canada in the seventies, pioneering the ways you had participatory filmmaking, and in the development community, like, participatory media making as a way of securing genuine participation in processes dates back into the eighties.
Sam: And I think what happens over the course of the two thousands for us is like you're getting more and more [00:25:00] diverse filming that as of around 2005, six you can share online. The key question for us was like, what is the right environment in which a more diverse kind of sort of citizen filming reaches the right audiences.
Sam: And in the late two thousands, we start experimenting with a project called The Hub, which was, um, a video-sharing site for human rights video. So it prioritized all the different values that we saw as important around this type of content, right? Which are values of context: you want to know
Sam: what this is, it's not shock content, right? This is also a period where you had a bunch of sort of shock sites, LiveLeak, other places like that, that were, you know, the early kind of shock video sites. So context matters, an option for action matters, right? You watch something, you can do something rather than just watch it for the lulz or for shock value. And consent matters, right?
Sam: Consent for both. You know, the participation of people in the video, but also consent for your data to be gathered by the platform. So we really interesting moment where we were trying to think what does it look like to have the values that [00:26:00] matter around human rights content represented in a human rights video sharing site, roughly around the time that all of these were being popularized.
Sam: And I'm really proud of the work we did then, and that the colleagues at WITNESS did, because, you know, those are still things that are often lacking in those sites. The problem was, and that what we've had is obviously an explosion of content, and I subscribe to Ethan Zuckerman, the, you know, the internet
Sam: academic, who talked about the cute cats theory of the internet, which is like no one blocks or takes down the internet site that has cute cats on it because you're gonna annoy too many people, versus, you know, you can take down the human rights site and like, you know, the human rights people complain. And also that people encounter content in the mainstream sites who are not going for human rights content necessarily.
Sam: And that like if you're actually trying to reach audiences that are kind of place where you go to find human rights stuff and you're a human rights person already is kind of rather circular, right? It's a little bit like going to a human rights documentary film fest, which is always the weirdest experience for me.
Sam: It's like we're going to [00:27:00] watch the products that describe our problem made beautifully, but they really don't go anywhere. It was a very interesting lesson learned for us of like, we ran this site and then we shut it down and we shut it down because we realized that we needed to be engaging with the mainstream sites and working about where they were, and also trying to think how you made sense of a huge volume of human rights content.
Sam: So a lot of, a phase of our work in the early 2010s was working with YouTube. We ran the Human Rights Channel on YouTube that aggregated human rights content on YouTube, and experimented with what it meant to kind of curate human rights videos so you make sense of volume. So that's the dynamic shift that happens is there are certainly huge constants that are, have always been true around video, right?
Sam: Do we trust this? Are the people in it safe when they're filmed, do we have consent? Does it lead to action? The big shift, of course, of the two thousands is more participants creating it and more volume of content, which raises all kinds of questions. Like in fact, now it really doesn't matter if you film something terrible, because it's quite likely people won't see it, and [00:28:00] even more likely they won't act on it.
Sam: But at the same time, it's more easy to show things like the pattern of abuses, right? You can actually show that this happens across a country, right? We've got videos from 30 sites in Western Sahara. We've got 50 different videos that show the pattern of military repression of, um, uh, a thing. We have clarity that in every community in our country, there are water shortages.
Sam: We have clarity that there have been forced evictions in multiple sites around an Olympics taking place in Rio or a football championship taking place in Brazil. So I think there's the dynamic that appears in the late two thousands. It's more participants, which is in many ways great because it means more human rights issues
Sam: talked about, more coverage, more voices that didn't have access before, more voices that were not considered professional before, so didn't have the access to be media creators. But you have a volume problem that means that things get lost. So you have to start thinking about like, what does it mean to actually see volume as a virtue, not as a problem?
Alix: So as the number of people producing content [00:29:00] related to human rights abuses grew, I imagine those same people or the general public would be less swayed by seeing imagery of human rights abuses.
Sam: Are people like desensitized to it? There's always been a question around desensitization, right? Like, and how much that happens.
Sam: I still think in that era, it's actually not that common for people necessarily to have been seeing tons of human rights footage. But there's certainly a kind of risk of desensitization that you know, becomes really apparent. Like it's much easier to see something truly graphic and truly decontextualized, right?
Sam: Which again, is the point of curation, is like it's about volume, but it's also about context around the volume and context around the individual items. Because citizen footage often doesn't have a lot of context, right? It can just show the graphic incident, and frankly, people turn away from graphic incidents.
Sam: They don't wanna watch it like ordinary humans. In general, thankfully, are not deliberately looking for decontextualized violence to entertain themselves, right? And so that actually almost the last thing you want to do with that volume is get more and more single [00:30:00] incidents of decontextualized violence in front of ordinary people, because rightly, totally humanly, they're gonna turn away.
Sam: And it's also important to think the curation might also be for audiences like those advocacy audiences as before. So like how do you make sense and explain a pattern really matters for human rights audiences for accountability. 'cause it's saying not an isolated incident, it's a pattern of abuses. We can show who is responsible for this, who's the perpetrator.
Sam: And it's around that time also that we really start to develop very strong guidance on how to think about video as evidence. Because you have many more people saying, I'm filming this, surely it's evidence. But actually you, you start explaining things like, actually maybe it's not about just showing
Sam: another terrible incident of the aftermath of a bomb strike, but it's also about getting things like the linkage evidence, the explanation of who was responsible. And it's also around then that we start to think about how do some of the recurring problems that have been more contained before, right?
Sam: How do I trust this video? And how do I protect people in it? And the reason I say they're contained is before, right. You know, if it's a human rights group [00:31:00] filming it, you know they have processes, right? If it's a filmmaker, they store their media files. If it's a news entity, you know, the trust questions are very different from like, I don't know who filmed this.
Sam: It's open source information. It's a civilian who may be anonymous. Increasingly people are saying that you can't trust things you see online, right? Somewhat rightly, right? Like some things you can't trust online. And at the same time you've got these, what I would describe as visual anonymity problems, right?
Sam: And we see it, you know, very starkly in the early days of the Syrian conflict of like, you know, protesters and their faces visible in a protest and they're carrying a banner. And then you see people trying to blur their faces, like very crudely in post-production. And it becomes visible that we've built an ecosystem for creating content and sharing it, but not creating context around it, creating trust in it, or protecting people, or necessarily leading to accountability.
Sam: So we've sort of, like, we've put in the distribution part, but we've not really addressed the trust, the safety, the consent, and the action parts of it, right? [00:32:00] And so then how do we start to think about those at a technical level, skills level, and then are there actually human rights and journalistic venues where this kind of content can secure change?
Alix: You're also making me think about, I'm sure you know the idea of structured journalism. Which is like essentially rather than think about journalism as a series of stories or artifacts that contain in them a narrative that has stitched together lots of facts, but it's not structured in a way that's reusable or building out an information architecture that you can kind of continue to draw on as a newsroom, let's say.
Alix: I never really thought about it as like this video production essentially is the bottom up development of something akin to structured journalism or a structured policy research and like these patterns. Do you feel like the people in positions of power and influence that nominally should be looking for these patterns so that they can find ways of holding people accountable and sort of changing systems?
Alix: Do you feel like they've tried, like do you feel like it's something that they've been like, ah, there's this wealth of knowledge that we can now like politically [00:33:00] engage and, and use to improve the systems we govern?
Sam: I think it's a yes and a no. Right? And, and probably it's good that it's a yes and a no as well.
Sam: And I'll explain why as. Yes, certainly institutions of accountability have gotten more and more receptive to thinking about the role of video, of open source investigations, that includes video, social media posts, all kinds of other information that are not created in ways that were, you know, traditional kind of trajectories.
Sam: And it's very normal also in like a policy setting to show a video to say like, I'm gonna show it. It's very normal in a trial setting to show content, to show victim impact statements or clemency type things. Right? So across the board, right, it's adjusted. That said, like, we're still a fairly text centric and oral presentation centric society in so many settings.
Sam: And so it's only to a certain place. That gap is very visible right now, that we're still, in most of our policymaking venues, not oriented that way. Most of our accountability, the reason I also say yes and no is, I think, you know, there's been a lot of academic discussion around the [00:34:00] professionalization of video activism, and I think it's really important to recognize, we wanna make sure we don't co-opt every act of like civilian resistance into being,
Sam: this has to be part of some trajectory into like influencing formal power, right? Because that's not why people are doing it, right? Sometimes they are and we should support 'em if they're doing that. But if they see power sitting close to them, like in communities, or if they don't wanna engage with formal power, the last thing you want to do is co-opt that into like, the way you've gotta do it is this way.
Sam: And like I'm very conscious of that. And I think the witness team has always been conscious of like, don't co-opt people into like a professionalized dynamic on the first side. 'cause that's. Classic problem of like something gets professionalized when it shouldn't be. And then the second is don't co-opt it into formal systems of power or like systems of power that are not where people want to have their fights or don't believe they should be, where they should have their fights.
Sam: I think where that can get really complicated is in like the environment of curating content online. And so a lot of our work has been around like what are the ethics of making choices about what people wanted when they [00:35:00] filmed something. When you can't talk to the person who created it, you don't know who they are, you don't know the risks they may run.
Sam: And how also do you kind of broaden, and this takes us back to like things like ObscuraCam and ProofMode, like how do you broaden the accessibility so people can make good choices about their own desire to be trusted, or to add indices of trust like metadata into their content, or, or to conversely protect themselves by blurring their face from really when they make it, so they don't lose the agency about it further down the line.
Sam: Because what happens is like once the video gets distributed, you have very little control over it. Right. You know, it gets co-opted, it gets used perhaps in ways that benefit you but may also be harmful.
Alix: That's super true. So getting into ObscuraCam, I mean, I think one of the, like I think one of the first times I encountered someone at WITNESS was when you guys were doing, it was a Guardian Project collaboration.
Alix: I remember seeing on like a really now old, but I remember it was the first time I'd ever used a phone that had a haptic thing when you pushed the screen and it would, and I remember thinking that was like the [00:36:00] coolest. Most new feature, which is kind of lost to think about. Um, but like getting asteroid on it and like so that I could try out Obscura cam.
Alix: I don't know. That was always one of the interesting things about what you all did as an organization and do as an organization is like you're interested in the political evolution of video as playing a role in society and human rights organizing, but you're also trying to like make stuff and I feel like obscura cam, I dunno, do you wanna describe what it is?
Alix: Sure. For people that probably, 'cause it's not around anymore, don't. Is it it? It
Sam: does still exist. You can still use it. Oh, okay. But there's a reason why it's probably less used, which is a good reason. So this was around 2010. Again, there'd been a constant dynamic of you have to blur faces, right? In documentary work, in human rights activism, right source protection.
Sam: But it's was, let's imagine late two thousands or 2009, 10, and you have the explosion of citizen filming. You see people struggling to work out how to do that, right? People in the Syrian protest, but also globally, right? Trying to make good decisions about visibility and obscurity. And sometimes you wanna be visible, but obscure, and those are decisions that [00:37:00] need to be made close as possible to the person filming the person who's being filmed, not at a distance, particularly in this distributed environment.
Sam: So there was clearly a gap, right? Like how do you think about doing this in a way that's as close as possible? Well, that, that's on the device, ideally, right? Because also like devices get seized, right? You go through a checkpoint, someone takes your device, and if your content is on there and you haven't blurred the faces already, that's when you get seized.
Sam: So ObscuraCam was a conceptualization with the team at Guardian Project, Harlo Holmes, Nathan Freitas and others, really amazing digital security, mobile digital security experts. And it, it enabled you to blur faces on a mobile phone, like one of those old devices. There were two rationales from WITNESS's point of view, from a developer point of view.
Sam: We wanted a practical tool, but we also wanted a reference design to persuade others to build it at scale. And I think this was also the lesson we had learned from building tools like the Hub, this human rights video-sharing site: niche tools don't reach this broad diversity of people who are now the participants, including the [00:38:00] participants who are least prepared for this 'cause they haven't done a human rights training.
Sam: They haven't thought about how they blur their face. And so we also used it as a reference design to engage tech companies and say, why aren't you including this? And we would point to use cases from ObscuraCam that weren't human rights ones, like people doing what everyone does now, blurring their kids' faces in photos on
Sam: Facebook then when people actually posted their kids on Facebook in like 2010 11, or people saying, you know, I'm concerned, you know, about very mundane privacy issues. And so when we were talking to YouTube and we were actively engaging YouTube particularly, we said like, why haven't you got this in your platform so that it's much more broadly accessible.
Sam: Now, of course that was moving it into a platform, not on a device. It's not ideal. And that was a very strong sort of thinking point for us was you build activist tools, but there is a challenge with activist tools. They don't, particularly in the medium WeWork, reach actually the people who need them most, most often.
Sam: And so you need to persuade the mainstream to come up with. [00:39:00] Effective as possible, a version of that that can be used by them. And, and that's also been the story of the last 15 years for us around kind of how do you make an authentic able image? Because we were building at the same time, this tool called at the time, the very clunky secure smart cam, which nonprofits never know how to name tools.
Sam: We die on
Alix: Mount literal, it's like
Sam: super literal. Secure Smart Cam, SSC for short, which is of course equally intelligible. So we're building these tools then for like the opposite side of the problem is you want visibility and obscurity and sometimes to be visible but obscure. And you want the ability also to say, this is actually what it says it is, from a certain time and place, which might also have people in it who are visible but obscure, right?
Sam: Like these are all kind of cross. So at the same time we started building these tools that were for that again, with the Guardian project and you know, the technical guidance and, and also the inspiration for this comes from like that amazing team at that. Group and from the people we were working with globally who are saying like, we're navigating this sort of dilemma.
Sam: And again, that's an [00:40:00] example of a tool that absolutely you can still use, right? It's still a good tool, you know, that is available in the app store, but also for us, and increasingly over the last 15 years, like how do we translate that from being a niche concept into a mainstream concept while preserving things that matter for the human rights use, and frankly for anyone who's vulnerable, like privacy, access, and ability to make sure it's not weaponized against you in ways you didn't anticipate.
Alix: Yeah, I think that also, that theory of change of, it's kind of a skunkworks within civil society to build tools that need to exist. That was like right at the same time that Moxie was making TextSecure and RedPhone and stuff, which then, you know, not directly, but eventually became Signal, and then, uh, his work, I think, built on top of TextSecure to build Signal.
Alix: But then I think both the features that you all were working on and the features that he was working on, I don't think that many people know that the reason that WhatsApp was able to turn on end-to-end encryption overnight for like over a billion people is because of a nonprofit organization that had built a new encryption protocol that the private sector basically just [00:41:00] like couldn't be bothered to do.
Alix: And I think it's such an interesting theory of change that like maybe that period of time is where a lot of the best examples of it come from, but I think it's a really powerful and cool. Trajectory for tech. Unfortunately it doesn't end up getting tremendous amounts of funding for the people that came up with the, with the
Sam: features.
Sam: So what we learned over like the 2010s was just an increasing pervasiveness of, you know, claims that real footage was falsified. Often what we described at the time as shallow fakes, right? Someone had miscontextualized an existing image, lightly edited it, and obviously you saw people in power glomming onto it.
Sam: Like in 2017, Bashar al-Assad says, you know, a video, you can never believe it because it's so easy to falsify, right? It's a very classic example of like someone in power, in that case denying the chemical weapons attack that had just taken place. And this sort of interrelates to kind of how we started to think about trust in images as the AI era started to take off. It became clear to us around 2017, '18, and we held,
Sam: in fact, one of the [00:42:00] first interdisciplinary meetings around deepfakes in 2018 that had technologists in it, it had civil society, it had the companies, the first time I think everyone had met around this topic. Um, and we asked them, what do we need to think about looking ahead? What are we gonna need?
Sam: And one of the recommendations was, we gotta think about like a trust infrastructure for what is real and made with AI, right? Very forward-thinking at the time. And what we did, and we were still working on ProofMode, still being iterated, we figured there was another way to approach trying to land the concepts from it.
Sam: And so we worked on a research report that said, here are the 14 dilemmas you're gonna need to address if you're gonna make this type of infrastructure work in the world. So it's based on lessons learned from ProofMode, but also from similar apps, you know, folks who built Tella, Truepic, eyeWitness to Atrocities, like, you know, a family of fairly niche apps that had started to emerge to prove authenticity.
Sam: And we did that basically to like set out a position really early on how to do this right. What happened after that was a number of the companies in 2019 [00:43:00] started to come together saying, well, how are we gonna build this infrastructure? We came in and said, here are the problems, right? These are the things you're gonna need to do,
Sam: 'cause we've been working on this for seven or eight years now. And so the strategy for us in the context of something like C2PA, which is the Coalition for Content Provenance and Authenticity, which is a standards group that is building a set of technical standards for showing the recipe of what's AI and what's human in the content and communication we're experiencing.
Sam: Now, our strategy there was to come in from having worked on this area, but also having really thought about like the key dilemmas and say we are gonna like from right at the beginning of building a technical standard rather than a product argue for the values. So it's slightly different from YouTube where you're saying build a product to blur.
Sam: It's like come into our technical standards group and say we are gonna. Bring expertise and knowledge around how to do this, right? If you care about human rights. We made a choice as witness, which was a new one for us to like lead a task force on threats and harms within a technical working group of a standards group, right?
Sam: Because that was the right way to think. [00:44:00] How do you shape this from the bottom up, right? And it's the same rationale is like the people we work with. And I think, you know, this is a lesson learned in social media and we're learning it hopefully not in completely horrendous ways with ai. That if you're not in it early and you're not arguing about the things that are foundational, that you end up tinkering around the margins.
Sam: And even your ability to think around the margins is directly proportionate to how close you are to the center. You're a human rights activist in New York. You can tinker around the margins a little better than unfortunately, someone in the global majority who's got a much more direct experience, often of the harms that are being driven out of social media.
Sam: And so our lesson learned was like, we need to be super early on this emerging question around trust and authenticity that's gonna come. And we were talking about it because we'd seen the shallow fake problem, and we just started in 2018 to bring together the communities we work with and said, deepfakes, that can feel really hypothetical.
Sam: And it did feel hypothetical to people, like we would show it to them and they'd be like, we've not seen any of these. They're not happening. Right? We'd be like, we know, thank goodness. Right? And the cases where it was [00:45:00] happening were non-consensual sexual image cases, which were a few, right? It's not a huge volume in 2018.
Sam: But what we said to people is, we need you to say what you want and you need, and you demand out of the emerging infrastructure and policy in this 'cause. Now is the time to say what we need, not in five years time where this is a pervasive problem because we will be. Starting from a standing start in five years time, we have much less chance of influence.
Sam: So we engaged in essentially a technical standards and a lot of other places around deep fakes, but from the same core values, like how are you gonna have privacy? How are you gonna have trust in information? How is this gonna be premised on the needs of a broader set of people than Silicon Valley and Washington and London and Brussels, and how, how's it gonna reflect the harms that those communities see?
Sam: Because they've seen parallel harms. So again, the realization that, it's, I remember in those 2018 meetings when, in 2019, 2020, '21, you know, people are like, this correlates to problems we already have around state surveillance, around corporate capture of information, of gender-based violence towards women in civil society, as well [00:46:00] as in politics.
Sam: It correlates to a disregard for the visual evidence that is manifested in something like Bashar al-Assad's statement about, you can't believe anything you see. So civil society had a really good read on what the problems were with synthetic media and deepfakes that informed our read. So like WITNESS engaged on this early because we listened to what other people said and we listened to our own experience of what was seen as these ongoing dilemmas that had happened with other generations of technology,
Sam: all the way from handicams to mobile video to social media to live video, which was another place where we'd done a lot of work in the interim space, and now with this new way to create synthetic content.
Alix: So we almost managed to preemptively build into technology something that could have prevented the dramatic harm that happens when people just casually launch new products into the world and say, I wonder what's gonna happen when I do this.
Alix: And then requiring a wave of harm that then everybody has to get socialized around. And then philanthropy has to get it enough to resource [00:47:00] organization, like the slow reaction time of society, which feels like it's only getting slower, even though we should have gotten used to it by now. It's so frustrating that like, you were basically like five years ahead being like, okay, so we're gonna need this infrastructure and we're gonna need to be prepared.
Alix: And then people were like, yeah, yeah, yeah, yeah. No, there, there's not too many cases of it yet, so we can, this cycle is just so frustrating.
Sam: Yeah, it's frustrating and it's like I, you know, I have an ambivalence of pessimism and optimism altogether around this. Right. I also know that, that we're only addressing part of the problem here, right?
Sam: Sure. We could trust piece of information, but if we have. Corporate control that determines the bigger flows of what information we receive. So I sadly, even if we got this right in a perfect way, we'd still have lots of problems with how AI is intersecting with the social media ecosystem. So with total humility, even if we'd succeeded beyond our wildest dreams, I think the reason I described that, ambivalence, optimism, and pessimism, it's like I still think many of [00:48:00] the things that we spent time working on are gonna set us in good stead.
Sam: Still, I just wish they were in a further-along place than they are, right? And so, you know, I'll give an example of that. Over the summer, Google launched their new phone, and normally I don't celebrate any company launching a device in any big way. It's not a cause for me to light candles or anything.
Sam: I have never queued in front of a tech store, not my thing. But I did celebrate that, because it was a phone that, in a privacy-protecting way, allowed you to prove that a photo was authentic (it's just photos so far) and showed the edits if they were done with Google tools. And for me, that's very much a validation of many of the things we pushed for and fought for internally in that standard-setting group: to say, look, you've gotta think about the ways this could backfire if you conflate how something's made with who made it, if you don't think about how fake news laws and the weaponization of those will be conflated with authenticity and trust, if you don't think about questions of access. This has to be in a phone, not just in, you know, a set of tools that [00:49:00] are in the next phase of evolution.
Sam: So you see places where you're like, actually, this is the right vision, that you could be able to do this. What I see now is: how could we have moved this faster with more political will from the companies? There were good people within companies who worked on this, and I actually have a lot of respect for the people that we and colleagues collaborated with, within engineering teams and product teams at a range of the big companies, right?
Sam: Like in Adobe or Microsoft or, you know, Google. They weren't really thinking about this, and then they listened and said, actually, can we try and build in a way that responds to those threats? We're not trying to create those harms, we hear that, we're actually grateful that that's there. But what you're seeing, and you've seen it in the last couple of weeks, the period since Sora launched, is that the C2PA standard, for example, is embedded in Sora 2 videos, very consistently actually.
Sam: But you can't see it if you're just watching a video in most tools, and it's being lost as the video travels to other platforms. And I'm watching most of the Sora content I see on TikTok, right? 'Cause I just [00:50:00] spend more time on TikTok than I do in other places. I actually have a not-totally-negative relationship to TikTok at the right time of day, for what I want it to do for me.
Sam: But you're seeing there that it doesn't have the C2PA data, it doesn't have the watermarks, 'cause people have often deliberately stripped it out or cropped it out, you know? And that's because it's a big technical challenge and a big governance lift. You know, I've been talking a lot about this as an epistemic fracture risk, not an individual deepfake risk. So I'm not that bothered if we get fooled by deepfakes here and there, although I know that can have a horrible impact in particular circumstances. What we're failing to see from the companies is a commitment to invest the serious political will, the serious engineering capacity, to speed up the process of jumping over the technical hoop, right?
Sam: Interoperability of a metadata standard and watermarking, while preserving privacy and access, and doing that internet-wide, is a big task. I don't underestimate that; I don't think that's a light lift. And having governance that works and is done well is not a light lift either. But neither is impossible.
Sam: If we're trying to build AGI, [00:51:00] you can build an interoperable way of preserving metadata that's privacy-protecting, accessible, and helps people know, when they see the TikTok granny, you know, jumping off a cliff, that she really didn't do it, or that the cat wasn't baked into the loaf, or that Stephen Hawking didn't do the horrible act you saw him being abused with, right? Like, we can manage that. And that's what frustrates me: we need attention to the epistemic risk we're all facing of not being able to know what's true and false, or real and authentic (perhaps better terms than true or false, 'cause I think they're not always the same thing).
Sam: And I do think it's possible, right? So we've got to a place where there are standards that I think are generally pretty good in terms of some of the human rights values and the concerns we heard. It's just: how do we generate the pressure and the momentum to make sure that the engineering and the governance get put behind the people who actually wanna make it work, both within companies and in civil society, which is demanding it, and increasingly from regulators, particularly in Europe, California, and a bunch of global majority countries?
Sam: What's become increasingly clear from communities we work with who do work on [00:52:00] land rights and indigenous peoples' rights (and that's a third of our work in the sort of human rights space, like our core: how do you work with people in the human rights world who are fighting for their rights using video, using audiovisual technologies, and increasingly using AI) is that those communities are saying, we wanna engage on deepfakes, we wanna engage on the way this is undermining confidence in our leaders and in narratives, and we wanna talk about data centers and we wanna fight for our land rights around that.
Sam: Clearly, in an organization like our own, the intermingling of how people understand information harms with physical, material harms to land (the most existential thing for many people: where do you live, do you have a place to live, is it sustainable?) is very visible. And obviously I think it's really important that we be able to talk about information harms and material harms and critique both, which might include, exactly as you're saying, going, wait a second, how much did it cost to create this? Was that a good use of precious land, water, and electricity, in a way that creates something that then potentially creates an [00:53:00] information harm to that very same community? So you're paying to create information harms for your community via the data centers that are constructed on, and exploit, your land and your resources.
Alix: It also kind of goes full circle, 'cause I feel like Karen Hao had this kind of offhand remark that I think about all the time, which is that any journalism about data centers and AI is by nature investigative journalism, because basically the secrecy around it is so extensive. And so, going full circle: even when people see what a data center looks like, even when people hear how loud they are, even when people can get a sense of how drought-ridden a lot of the areas these data centers are put in are, that visualization of the physical infrastructure necessary to power a lot of these systems is, I think, actually needed right now.
Alix: Like, I think more people need to see and understand what's happening. And I feel like, I don't know, I feel like it's happening. I feel like people are coming to understand what's going on, and, I don't know, I feel this momentum. [00:54:00] I feel like people are not gonna be too happy with this for very long.
Sam: Yeah. And, you know, obviously from an organizing perspective it's easier to organize around physicality and direct, local impact. I think the information harms one is harder, right, in some sense. And I think about that a lot with the epistemic fracture stuff.
Sam: I've basically been looking at a lot of these, you know, TikTok shares of Sora videos and just reading the comments, looking at how people share them. In some sense it's sort of funny, right? A lot of the commentary is like, you didn't get that that was AI, right? And, you know, there's just this back and forth, and it feels much less distinct. And obviously there are very clear information harms in the deepfakes and synthetic media space.
Sam: Ones we encounter in our Deepfakes Rapid Response Force that are, you know, trying to impact an election context, or trying to compromise a piece of war crimes footage. But those can still feel a little abstracted from everyday existence. I think the place where it's obviously starting to feel really visceral is the explosion in the commoditization and accessibility of nudification [00:55:00] apps, where it feels very proximate to young people, to people in much more normalized contexts.
Sam: And I think the worrying thing is that the explosion in nudification apps is terrible in and of itself, but it's also a kind of foreshadowing of all of these ways of undermining people, undermining confidence in facts, attacking people in ways that do damage even if something is ultimately adjudicated, you know, authentic, right? That's irrelevant, obviously, in the space of nudification, and frankly irrelevant in a lot of the erosion-of-trust cases. It's both the collective impact of watching too many videos where you're like, I just give up, I'm so weary of trying to work out what's real or false, and the thing that is in fact false or real, where your ability to correct the fact that people thought it was a deepfake is so low, like six weeks later when everyone's like, actually, did we ever really work out that it wasn't what it said it was?
Sam: So, you know, I don't know if it's a good or a bad thing that we're increasing the proximity of the information harms to people, whether that will spark, you know, a [00:56:00] greater pressure, right? I think one of the ways we might want to think about how we increase the momentum towards things like, you know, use of C2PA and better safeguards is actually consumer pressure.
Sam: It's people going, actually, I'm tired of this. This is not what I asked for. This is undermining confidence, you know, in transactions, in business, right? And at Witness there's a suite of things we've been thinking about, trying to work out how we act proactively towards the future.
Sam: Like provenance infrastructure that makes it easy for people to know what is synthetic, what is real, and the mix of the two, so you can make your own decisions about whether it's funny, deceptive, malicious, or just, you know, you communicating with your friend and choosing to use memes, right? Or political communication, illustration, allegory, satire, right? Like, we need to have greater transparency there, and we could get there. We've got some of the technical foundations; it just needs political will, engineering, and regulation that drives it, right? And consumer pressure, probably. Detection that is available to the people who need it most. Likeness protection that actually works and is available.
Sam: Future-proofing our workflows in [00:57:00] a way that means we are resilient to that rather dystopian future of a viewer's time. And then making sure that our documentation workflows ensure that when people are actually trying to access this information, it comes out in a way that is not stripping out the voice and point of view of the frontline actors who've often risked a lot and made really deliberate choices to create information.
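A rough technical aside on the provenance point above: C2PA works by attaching a signed manifest inside the media file itself, which is why it can survive a direct export from a generator but disappear once a video is screen-recorded, cropped, or re-encoded by a sharing platform. The sketch below (Python, unofficial, simplified) only checks whether an MP4 file even contains a candidate manifest container; as we understand the C2PA specification, manifests in MP4/ISO-BMFF files are carried in a top-level 'uuid' box. The file path is hypothetical, and real verification of signatures and edit history should go through the official C2PA tooling (for example c2patool), not a script like this.

# Minimal sketch: scan the top-level ISO-BMFF (MP4) boxes of a video file and
# flag any 'uuid' box, the container where a C2PA manifest would typically sit.
# This only hints that provenance data *might* be present; it verifies nothing.
import struct
import sys

def iter_top_level_boxes(path):
    # Yield (box_type, size, offset) for each top-level box in an MP4 file.
    with open(path, "rb") as f:
        offset = 0
        while True:
            f.seek(offset)
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:    # 64-bit extended size follows the 8-byte header
                size = struct.unpack(">Q", f.read(8))[0]
            elif size == 0:  # box extends to the end of the file
                f.seek(0, 2)
                size = f.tell() - offset
            if size < 8:     # malformed box; stop scanning
                break
            yield box_type.decode("ascii", errors="replace"), size, offset
            offset += size

def has_candidate_provenance_box(path):
    # True if the file contains any top-level 'uuid' box (a C2PA candidate).
    return any(box_type == "uuid" for box_type, _, _ in iter_top_level_boxes(path))

if __name__ == "__main__":
    video = sys.argv[1]  # hypothetical path, e.g. a clip exported from a generator
    print(video, "-> candidate provenance box:", has_candidate_provenance_box(video))

Comparing a freshly exported clip against the same clip after it has been re-shared through a platform that strips metadata would, under these assumptions, make the "lost in transit" problem described above visible in a very literal way.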
Alix: It's sociotechnical in a way that I think makes it so complicated to assert a path that feels good and also anticipates things. And yeah, the scale of the problem, I think, makes solutions hard to wrap your head around. But also, yeah, I think accountability for the people making these decisions, which are so hugely impactful globally and made so casually, is probably also just important to keep hammering away at.
Alix: They shouldn't be in a position of making these hugely consequential choices without any input, um, from anyone. Um, and maybe they're nice and, like, now they're gonna set up a big foundation. I don't know. We'll [00:58:00] see. You know, maybe there's gonna be a little grant in there for Witness. God, I hate it.
Alix: Yeah. Oh,
Sam: Depending on goodwill and good faith only goes so far, let's say. And in this case, we don't have the time, we don't have the time to depend on goodwill and good faith. And there's fairly limited evidence that goodwill and good faith are in enough supply to justify the risks we're now facing.
Sam: So,
Alix: No, I don't think we should trust them. That's my takeaway. Um, cool. Okay, well, thank you. This was amazing. I'm so sorry for making you go back through that history, but I feel like every time we talk about Witness, one, I learn something new about the history of it, and two, I'm just always amazed at how ahead of the curve you all have been and how much my understanding of sort of audiovisual politics comes from you, Sam, not even Witness.
Alix: Um, and it's just such a pleasure to get a chance to kind of walk through how we ended up in this moment and what we might do about it. So thank you.
Sam: Thank you, Alix. Appreciate the chance. [00:59:00]
Alix: All right. Thank you to Sam for taking us through that 35-year history of how video has changed in relation to society. So next up we have Soizic Pénicaud, who is one of those people we've had on the show who just have really interesting careers around technology and politics, and really interesting insights. She's worked through multiple stages in different types of institutions and has come out of it with, I think, a really interesting strategic analysis of the role technology plays in society and what we might want to see instead.
Alix: So stay tuned for that. Also, next week I'm gonna be at the Vatican, which is kind of wild, and we'll hopefully be producing some content off the back of this event I'm going to next week that is focused on multi-faith engagement in what happens to society when AI is so fetishized and hyped, um, and how does that change people's relationship to meaning and understanding in the world.
Alix: So more on that soon, but just to say I'm back on the road and, uh, I'm looking forward to it. So we [01:00:00] will see you next week with an interview with Soizic Pénicaud. Thanks to Georgia Iacovou and Sarah Myles for producing the episode, and we'll see you soon.