RRE POV

Deepfake: The Battle for Authenticity in the Digital Era

This episode explores deepfakes and generative AI’s impact on our society today. Will, Jason, and Raju discuss the challenges of regulating AI and the importance of authenticity in an era of widespread misinformation. From the dangers of political deepfakes to the need for robust detection tools, this episode covers the implications of this technology for our understanding of what is real and what is fake.

Show Highlights:

(00:50) Introduction to the impact of deepfakes on authenticity
(02:05) Generative adversarial networks and the public's reaction to AI advancements
(04:51) The investment in deepfake detection and the overall responsibility for identifying deepfakes
(08:06) Societal distrust generated by deepfakes
(12:54) Potential for businesses focused on authentication and verification of communications and news 
(20:41) Importance of identifying subtle deepfake threats and the idea of individual trust scores
(24:55) “Gatling gun” segment
(29:26) Closing remarks

What is RRE POV?

Demystifying the conversations we're already having here at RRE and with our portfolio companies. In each episode, your hosts, Will Porteous, Raju Rishi, and Jason Black, will dive deeply into topics that are shaping the future, from satellite technology to digital health, to venture investing, and much more.

Raju: Best fake singer: Milli Vanilli or Britney Spears, when she was caught several times?

Jason: I don’t even know who the first one is, so I’m going to have to, by default, go with the second.

Will: Girl, you know it’s true.

Will: Welcome to RRE POV—

Raju: —a show in which we record the conversations we’re already having amongst ourselves—

Jason: —our entrepreneurs, and industry leaders for you to listen in on.

Will: Welcome to RRE POV, the podcast where we unpack the conversations we’re already having here at RRE. Today we’re going to talk about deepfakes. For me, this is something I’ve been thinking about for a long time. This question of authenticity, I think, is one of the central problems of our time. Is this thing that’s in front of me what it actually appears to be?

I actually started thinking about and worrying about this in the late ’90s, when I was working on early implementations of digital certificate technology between trading partners. Companies that bought and sold stuff between them needed to be able to confirm that the other party was in fact who it appeared to be when they were exchanging documents and funds and that sort of thing, and authenticity was at the core of that question. I feel like I’ve watched the gradual erosion of confidence around authenticity in the intervening 30 years, with a real acceleration that began with the emergence of the social platforms and the active misinformation campaigns sponsored by state actors like Russia around major elections. We clearly saw that in 2016. Well, we’re in another election year, and this question of deepfakes, how the world is going to deal with them and process them, and whether deepfakes might actually lead our electorate’s thinking in one direction or another, is a central question for all of us.

So, we’re going to start to tackle this today on RRE POV. Jason, I’ve got a question for you. You’ve watched the generative AI phenomenon, I think, perhaps closer than any of the rest of us. When you think about deepfakes, what’s got you excited, and what’s got you scared in this world right now, particularly with recent events?

Jason: I mean, the funny reaction to all this AI development is, like, immediate regulation and putting it back in the box. And this is a genie thing. The deepfake thing has been around for a while. GANs—Generative Adversarial Networks—have been around for years now. Those were, you know, the original deepfakes that, kind of, came out. The tools have gotten easier to run, are now very widely available, and are now buttressed with a huge suite of additional tools for text generation, now video, et cetera, et cetera.

So, I don’t think we’ll see any of that stop. The response that is not useful is “we need to immediately regulate generative AI out of existence, and we need to stop advancing.” Because continued progress is important for, like, our productivity, our security as a nation, our technological leadership, across a huge variety of domains. And the cat’s very much out of the bag; more and more people have access to it. The technology itself is kind of this amoral piece, whether it’s used for good or bad.

Fortunately, we have better and better tools to detect when things are fake. It will be a cat-and-mouse game. The forefront of generative images is expanding rapidly, and the detection tool set that follows it is lagging a little bit behind. And we’re, call it, within 18 months of the ChatGPT moment that put these language models in the hands of pretty much everybody; you can sign up for Midjourney or even download Stable Diffusion for free onto your laptop and generate your own images. That is now widely available. But the models capable of producing the potential negative political misinformation content are already out in the world, so stopping the advancement of further technologies won’t help; we reached a good-enough point relatively rapidly. So, I think this year in particular is going to be maybe one of the messiest ones, because we have a little bit of a lag between the detection tools and the rapid expansion of the capabilities and accessibility of the generative tools.

Will: I think that’s a really good assessment of where we are now. The thing that I’ve sort of wrestled with: we know for a fact that the big platform companies have invested tremendously in this area, in deepfake detection. Google has huge groups, Meta has huge groups of people working on this. And yet, I think there’s a significant question of where the responsibility lies for actually trying to identify deepfakes. Are we counting on the social platforms to start to police this activity? It’s probably beyond the scope of any one company, or frankly any government, to adequately police this. And I think you can go a step further and say, well, not everybody in the population actually cares, so how do we feel about the need to police this?

Raju: Well, it’s a great question. I think there’s a responsibility on the individual. People can take steps, right? They can follow a diversity of people and perspectives, they can look at a broad range of news sources, they can consider the source, check the author. But not everyone’s going to do that. There are going to be people who care enough to do that, and then there are people who don’t have that, you know, sort of depth of intellectual interest to care about doing that.

So, I think there is some responsibility at the technology layer. And I don’t want to be the one that says, “Hey, the social media companies are the ones that need to filter out or allow in,” but I think there are some technology tools broadly available today that can help filter things out. My thesis is, you do things like digital fingerprinting on sources that you create, as opposed to ones that are, you know, sort of assigned to you through a deepfake, or you leverage the blockchain, right? The blockchain is actually an interesting social tool to help determine authenticity. We do it with Bitcoin today, we do it with other cryptocurrencies today, so why couldn’t we leverage the blockchain for news sources, right? It’s just another digital media tool.

And then I think there’s a bunch of AI analysis that can be run by the social media networks and other fabrics to look for inconsistencies in facial expression, blinking, lip-syncing, file format, location—all of that, I think, can be done. So, the responsibility lies at three different layers: it’s the folks absorbing the information going a little step further, doing the research, and not just believing a single source; it’s the people who create the news saying, “Hey, I want people to know it’s authentic, so I’m going to sit it on the blockchain, I’m going to make sure I put a digital fingerprint on this, I’m going to sanction this with some sort of nomenclature;” and then the social media companies do have some responsibility here, but it’s not just on them. So, I think you can do that. But I agree with Jason. This year is going to be tough. It’s going to be really tough, because we haven’t really normalized how the game is working.
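
A minimal sketch of the digital-fingerprinting idea Raju describes, in Python: the publisher hashes the media and signs the hash, and anyone can later verify that the bytes are unaltered. This is illustrative only (the key handling and the placeholder video bytes are hypothetical); real-world efforts such as C2PA content credentials embed signed manifests in the media itself.

```python
# Hypothetical sketch of digital fingerprinting for published media:
# the publisher hashes the file and signs the hash; a reader recomputes
# the hash and checks the signature against the publisher's public key.
import hashlib

from cryptography.hazmat.primitives.asymmetric import ed25519

def fingerprint(media: bytes) -> bytes:
    # The "digital fingerprint": a SHA-256 digest of the raw bytes.
    return hashlib.sha256(media).digest()

# Publisher side (key is a stand-in; real keys live in a secure store).
publisher_key = ed25519.Ed25519PrivateKey.generate()
video = b"raw video bytes go here"  # placeholder content
signature = publisher_key.sign(fingerprint(video))

# Reader side: verify() raises InvalidSignature if the bytes were
# altered, so a deepfaked copy of the video would fail this check.
publisher_key.public_key().verify(signature, fingerprint(video))
print("fingerprint verified: these are the publisher's original bytes")
```

Anchoring the signed fingerprint on a public blockchain, as Raju suggests, would add a tamper-evident timestamp that anyone can audit.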

Jason: Yeah. I think Mike Dempsey has been talking about this kind of, like, distrust in society. And, you know, he was very early to the GAN technologies. Once you see GANs, you can just play the tape forward a little bit and see how this could very quickly get out of control. I totally agree with you, Raju. We need the tools, we need the stamp. I think that will build trust among people who care that what they’re reading and seeing is real.

I think the thing that isn’t part of the technology, but is more of a societal thing—which you kind of touched on—is that there’s a huge portion of the population that kind of doesn’t care whether or not it’s [laugh] real. They’re not looking for the authenticity of the original ideas; it’s whether or not those ideas conform to their worldview. Which, you know, is arguably much more about the way technology has progressed toward these networks and echo chambers than about generating new images or generating false information. Because false information has been around for years and years; people have been lying to each other since we invented language, you know? That’s kind of part of humanity.

Raju: Yeah. The difference, Jason, is like, virality has become so much faster [laugh], you know?

Jason: Absolutely.

Raju: You can just move a piece of fake data infinitely around the globe, you know, as quickly as you want. And, you know, we love it. We love viral videos. I mean, TikTok, everything.

Will: But the crucial moment was when we reduced the marginal cost of publishing to zero, right? When you had publishing infrastructure that supported a trusted brand, it wasn’t so easy to just distribute millions of copies of things, and we lowered those barriers some time ago. I want to test this hypothesis with you guys: I think we, as a group, collectively believe that the advancement of deepfake detection is a public good, right? There’s a lot of collaboration within the technology community, the bigger platforms, smaller companies, et cetera, around advancing it, even as, you say, Jason, it’s inevitably going to lag a little bit behind the progress of what can be produced by the larger AI engines. So, do any of us believe that there’s an opportunity to build a company around deepfakes? Is there something to be done here as a business? Or is it merely a problem of our time that we all have to learn to live with and navigate?

Jason: I think it’s interesting because you can think about it at, like, a societal layer, but then you can think very specifically about people. There’s the fake news, falsified videos of a politician saying things they don’t believe that perpetuate and get around on all the social media things, which is kind of where we started this conversation. And then there’s the incredibly targeted—it’s based on the same technology—incredibly targeted false information for scams. People getting tricked into, “Oh, my daughter is in trouble, I need to wire $10,000 to this thing,” and it’s their own daughter’s voice [laugh] saying these things. There was a recent case where a financial institution wired $25 million because the CFO was on a video call with the person. That obviously wasn’t the actual CFO; the video was faked, the audio was faked, it sounded like the person.

And so, I’m wondering: there’s the societal level, how do we do this as a group of humans and as nations around the world, but then also, very specifically, how does our president, or any president, make sure that the calls that are coming in, the people they’re talking to, and the information they’re getting are real? Or an individual citizen. It’s the same thing. That’s where I think authentication is really important and very tangibly felt. The broad-based lie, the viral lies you were talking about, Raju, that’s a tough thing to get one’s arms around and solve technologically. But for people who do need to trust information that’s coming to them, those authentication technologies are incredibly important. And that can be to support a major political decision, or a government action, or a financial transaction, et cetera. In those instances, getting it wrong is incredibly costly. And so—kind of the flip side—the value of not getting it wrong [laugh] is quite high. So, I think those are the areas that could be very interesting.

Raju: Yeah. So, to get directly to your question: are there companies that can be created? Is there a business we would fund in this area? Absolutely, right? Think about a trusted telecom communications layer—I mean, for God’s sake, I worked on this for a dozen years of my life. Packet technology kind of created a bit of a rift [unintelligible 00:13:22] circuit technology, it was point-to-point, baby. You couldn’t sort of crack the code in that. Going from circuits to packet switching, where you could disaggregate and reassemble voice clips in real time, kind of changed that.

But I don’t think you go backward to circuit switching. I think there are absolutely layers of communication, companies that can be created, that know you’re speaking to the individual you think you’re speaking to, unless the phone has been stolen or something like that. But you know the call is not being piped in from someplace else in the world. I think there are going to be companies helping to digitally fingerprint or leverage the blockchain, right? Blockchain doesn’t necessarily need to be—I mean, it is a really valuable tool for crypto assets, and you can think of this as a crypto asset. You can say, let’s leverage the blockchain and get the world to, sort of, sanction something as real.

And maybe it’s a piece of news that’s important. I think there’s a company around there. I actually think the world is searching for a Consumer Reports. We used to have this thing I got every month, and I’d look at it, and it had, like, great dishwashers in it, ranked one through seven, and I bought number two because number o—

Jason: No stains on the glass.

Raju: No stains on the glass. And then number one was always too expensive for my household, but I could buy number two or three. It was a question of features, and they did a nice contrast. You know, Better Business Bureau, whatever you want to call it, I think there’s an opportunity for a company to create that trust icon we’re all kind of looking for [laugh]. You know, it’s funny, I created a company in the safety space many, many years ago, and we would measure our results by acceptance—we sold to public safety officials, we sold to universities, we sold to 911 centers, we sold to police stations—and if you had logos of other police stations on there, other police stations would buy. It was that testimonial that you kind of needed.

And so eventually, there was a branding—I don’t know what it was called; it was a long time ago—where you could just stick on this one logo that said, “Accepted by police departments all around the”—and you would just sell, because people would say, “Hey, man, it passed that sanctity test.” So, I think there is a company to be created. I just don’t have it right there.

Jason: Yeah. I mean, we have Snopes, right? These have kind of existed. It’s just that the scale at which you have to validate information is so much larger. We’re not just talking about articles anymore that are published by some random blogger and appear to be valid but are completely misquoted. This is, like, the scale of billions of pieces of information. I mean, honestly, all information is suspect at the very top, right, until you’ve kind of validated it. You know, I was listening to Ben Thompson, as I am wont to do; this was kind of the early days, just as ChatGPT was really blowing up, and they were trying to project the future a little bit. And Ben was like, “I’m hoping that we kind of go all the way around the horn, where, you know, there’s trust in mainstream media because there are only, like, four television networks, and all the news broadcasters are trusted institutions. And then eventually you get down to the individual Twitter person who just has a giant following, where somehow, like, QAnon is a trusted source for people. And do we get to the point where people just don’t trust anything, and therefore [laugh] it’s actually even better?” That there should be higher skepticism about ideas and information generally. I’m wondering if we get to that shift.

Will: Well, I think, unfortunately, we’ve discovered that people prefer a trusted source that agrees with the way they view the world over a trusted source that’s widely recognized as an authority and appeals to a more rational, analytical mind. You know, there was a thesis in the early days of the media companies on the internet that, having reduced the marginal cost of publishing to zero, the most powerful and best-known brands in the world would rise to the top and become stronger and more powerful. And by and large, a lot of that has happened. This is part of the reason you’ve seen the ascendance of news brands like The New York Times and The Wall Street Journal online and the decline of local publishers; they just lost a lot of relevance outside of immediately local news. But it also became so easy to create a new, quote, “trusted source.” And so people can kind of shape their own reality, and it can be full of fake news and deepfake videos that really appeal to them. I wonder whether this is the long-term ramification of having made it so easy for anyone to be in the news.

Raju: Yeah. Think about the contrast, though, Will. Would you rather live in a world where we had three radio stations and three television stations and three newspapers, and we hear Orson Welles’ War of the Worlds, and the entire nation freaks out and thinks we’re all going to die because that was a trusted source, or would you have—

Will: The original deepfake.

Raju: The original deepfake. Or live in a world where we’ve got millions of sources, where you’ve kind of got to put your brain together and think through whether this is real or fake? And yeah, people are going to believe what they want to believe, but you know, the power doesn’t sit inside three, four, or five different, you know, sort of organizations. I don’t know, I think both of them are weird, right, but like, I kind of prefer now.

Jason: Well, one certainly highlights more voices, right? Like, what we did get is a much wider perspective on the world—you include the crazies, obviously—because we have so many different people able to publish and reach an audience. So, as I said earlier, tech is amoral. It’s how it’s used. So, I definitely agree. It was a great—[laugh] The War of the Worlds, that’s—

Raju: I know, man—

Jason: —a fantastic reference. I actually kind of low-key wish I had lived through that, because—I mean, obviously it would have been, like, momentous and scary, but very cool.

Raju: I know. I mean, I wasn’t even [unintelligible 00:20:10], it just occurred to me when we were talking about, sort of, this spread of information. If you guys have more questions, let’s go for it and go round-robin, but I can move to the Gatling gun whenever.

Jason: Let’s do Will. I mean, I know Will’s done a whole smattering—you did a deep dive a while ago. You’ve been asking most of the questions here, Will. I’m curious: obviously, you do believe there is space—I think we all do—for, I don’t know, whether you’re playing the cat or the mouse, whoever’s defending authenticity.

Will: I think we have to worry about deepfakes at every level of what we consume in terms of information. And I think it’s the subtle deepfake that I’m actually most concerned about, right? We have, I think, a great deal of potential to identify deepfakes if we create really good collaborative mechanisms. I’ve also felt for a long time, though, that outside of technology, there could be good social mechanisms for collectively voting on the quality of a trusted authority. Like, I feel like I should have a trust score on the internet, like, a personal trust score for anything that I publish or anything that I promote.

Jason: Yeah. These hips don’t lie.

Will: [laugh] Basically, yeah.

Jason: These hips don’t lie.

Will: Yeah.

Jason: Yeah.

Will: Or they only lie about certain things. Like, I feel like—

Jason: Right, on occasion.

Will: There should be some nuance and some subtlety in that, and I feel like we haven’t evolved good trust mechanisms at the personal or atomic level that would be an interesting input into all of this. When Jason publishes something, I’m going to be interested in it because I know that he’s a trusted authority in my world. Just as in the digital certificate world all those years ago you had the concept of a trusted root authority, I’m kind of looking for us to create a social equivalent of that as we battle deepfakes in the world to come.
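
A toy sketch of the chain-of-trust pattern Will is recalling from the digital certificate world: a root authority endorses a publisher’s key, and a reader verifies both links before trusting a post. This is hypothetical Python, not real X.509 certificate handling; the “root” and “publisher” keys are stand-ins.

```python
# Toy chain of trust: the root endorses the publisher's key, the
# publisher signs a post, and the reader verifies both signatures.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

root = ed25519.Ed25519PrivateKey.generate()       # root trusted authority
publisher = ed25519.Ed25519PrivateKey.generate()  # e.g., "Jason"

# The root signs the publisher's public key: a bare-bones "certificate".
publisher_pub = publisher.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
endorsement = root.sign(publisher_pub)

# The publisher signs a post.
post = b"something Jason published"
post_sig = publisher.sign(post)

# Reader: trust flows from the root key down to the post. verify()
# raises InvalidSignature if either link in the chain is forged.
root.public_key().verify(endorsement, publisher_pub)
publisher.public_key().verify(post_sig, post)
print("post verified back to the root authority")
```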

Jason: Yeah. And I think, just generally, the next 18 to 24 months of the generative space are going to be about agents and those agents doing things on your behalf. We’re going to have little talking LLM bots on our phones with, like, the actual capabilities of Siri as it was initially imagined, not the one that can only set timers and remind you to do things. And there is a certain amount of, like, Google as an unopinionated lens: here’s just the stuff you directly asked for. I’m not for mandating a worldview from a specific, kind of, centralized source—because these are private companies, and we want a diversity of opinions and perspectives—but I think the natural state of these bots is going to be grounded in information that you can find, and a lot of that will be find-and-verify. Which I think should be a mechanism so that when you are seeking information, you’re getting cited sources on the internet.

Which might not necessarily mean truth, but there’s a lot more that goes into looking for trusted sources when you’re going through one of these agent platforms. If you’re using, you know, Copilot from Bing, if you’re using generative search through Google, which is kind of Gemini now, or ChatGPT, they look for sources on the internet when they’re referencing things. So, they’re kind of doing that work.

Raju: They do. Just like, you know, even Google did.

Jason: Yeah, they’re doing a little bit of the work that is—you know, you mentioned, Raju, not everybody’s that interested in it. I also think that a huge number of people around the world are, like, working two, three jobs, they’ve got kids; they’d love to spend the time trying to validate their worldviews, they just don’t have time to read the news or educate themselves on, like, politics, et cetera.

Raju: I mean, there’s actually a reason why people went to Google as a search engine years ago, and it wasn’t just speed. That was their first sort of modus operandi and differentiation, but they actually ranked websites, and PageRank was a big thing. They looked at how many links you had from government sites and other validated sites to give you a better PageRank. So, it’s not new in many ways, right? You’re just leveraging technology in a different way now. And I agree with you, Jason, I think the bots are going to get super interesting, in the sense that it’s not going to be just “I’m going to find you the result you’re looking for;” it’s going to be “I’m going to find you the results you’re looking for that have been validated.” It’s going to be quite interesting.
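
As a rough illustration of the PageRank idea Raju references (a page’s score flows from the pages that link to it), here is a toy power-iteration sketch in Python. The three-page link graph is hypothetical, and Google’s production ranking is of course far more involved.

```python
# Toy PageRank by power iteration: pages linked to by high-scoring
# pages earn higher scores themselves. The graph below is hypothetical.
links = {  # page -> pages it links to
    "gov_site": ["news_a"],
    "news_a": ["gov_site", "blog_b"],
    "blog_b": ["news_a"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # uniform starting scores
damping = 0.85                               # classic damping factor

for _ in range(50):  # iterate until the scores settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```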

Will: Great. All right, Raju, do you got some questions for us to wrap up?

Raju: I do. I do. It’s the Gatling gun part of the episode. It’s my favorite part. I’m just going to ask the two of you, and you know, I’m going to start with Jason here. So, best fake singer: Milli Vanilli or Britney Spears, when she was caught several times?

Jason: I don’t even know who the first one is, so I’m going to have to, by default, go with the second.

Will: Girl, you know it’s true.

Raju: [laugh].

Jason: Second one. Britney Spears.

Raju: All right, fine. You’re going to go with Britney Spears?

Jason: I grew up on Britney Spears.

Raju: Fine.

Jason: Absolutely. Yeah, she lip-synced a few. But Milli Vanilli was, like, the most famous one of all. Okay, so this one’s for Will. Best fake news: the Earth is flat, or birds are not real?

Will: [laugh] Oh, definitely birds aren’t real. But BuzzFeed was the original fake news [laugh], according to one of our former presidents [laugh], so I can’t let that go by without acknowledging it. But yeah, I’m a huge fan of birds aren’t real. I think that’s a massive achievement.

Raju: How about you, Jason?

Jason: Oh, you know I’m a birds aren’t real man. We met the guy at—

Raju: That’s the three of us. That’s the three of us.

Jason: —[laugh] which was fantastic. Yeah.

Raju: Consensus. Okay. Jason, let’s start with you. Best Sacha Baron Cohen fake: Ali G or Borat?

Jason: Ali G.

Raju: Will?

Will: Oh, Ali G. Yeah, I remember watching him interview Newt Gingrich, like, a long time ago, and it was clear that Newt Gingrich had stumbled into an interview he had been totally unprepared for. It was a beautiful thing.

Raju: I’m going to go with the ice cream cone glove Ali G. And “I like my life.” I like that too, but I’m going to stay away from that right now. Okay. All right, this one, let’s start with you, Will, just because I know you’ve seen both of these. Best con movie [laugh]—

Will: Oh, Wrath of Khan.

Raju: No, no, no. No, Dirty Rotten Scoundrels—

Will: Oh.

Raju: Or Trading Places?

Will: Trading Places. A hundred percent. The original.

Raju: Okay, Jason? You haven’t seen either of these, have you?

Jason: No, I was going to say Sal Khan is pretty cool, but—

Raju: Yeah, fine.

Jason: That’s not what you’re talking about.

Raju: Okay. Worst real con man: Bernie Madoff or Sam Bankman-Fried? Jason.

Jason: Sam Bankman-Fried.

Raju: Will?

Will: Yeah, I think Sam Bankman-Fried. Bernie Madoff was a con man for the era of the fax machine, [laugh] so it’s scary to think about what he would have been able to do 20 years later.

Raju: All right. I’ll start with Will this time. Craziest fraudulent company: Theranos or FTX or Outcome Health?

Will: I don’t know Outcome Health, but I think any fraud on the medical side—so I’ll pick Theranos—is actually an immoral act. Fraud in general is immoral, but when you start to deal in terms of promises of health, you start to really deeply manipulate people’s lives, so I’d say Theranos.

Raju: All right, Jason?

Jason: Same here.

Raju: Same there. Okay. All right. So, this is an open-ended question, and I’ll start with Jason here. If you could deepfake yourself with one individual, who would that be?

Jason: Oh, man.

Raju: Okay, you can say me. You can say me. You can say me.

Jason: [laugh] I can take a real-ass photo with you.

Raju: Okay, fine [laugh].

Jason: I’m going to say right now, that Michael Cera football, like, Super Bowl commercial got me just back on the Michael Cera train. That would be fun. I don’t know. He just came to mind immediately. CeraVe.

Raju: Okay, fine. Will?

Will: Oh, I don’t know [sigh]. I’m trying to think of what I can answer that won’t really get me in trouble [laugh]. I think I don’t have a great answer. Sorry.

Raju: I’m going to throw mine out now.

Jason: You clearly have a good—you—I was about to say, you—this is a self-inspired thing. You clearly have a hilarious answer.

Raju: No. It’s not self-inspired. I just thought of it. I was just like, I should self-inspire here.

Jason: Hit me. Yeah. Okay.

Raju: I’m going to go with Gandhi. He actually looks like me a little bit. Like, people don’t realize that—

Will: All Indian men think they look like Gandhi, dude.

Raju: Okay, fine.

Will: [laugh].

Raju: [unintelligible 00:29:01]. I mean, just—

Jason: We can make this happen. Like, this can happen very soon. We might make this the album cover.

Raju: I know. It might—

Jason: It’d be Raju and Gandhi.

Raju: I would like that.

Jason: And me and Michael Cera. And Will with—

Raju: Everyone in the world.

Jason: Casper the Ghost.

Raju: [laugh].

Jason: Yeah. I realize that I could probably put in fictional characters, and then I was like, damn, I—

Raju: Yeah, it’s true.

Jason: —that would be fun. Maybe, like, Rick from Rick and Morty. That would be fun.

Raju: That would be fun.

Jason: That would be cool.

Raju: That would be fun. Anyway, I want to thank all the listeners for hanging in there with us. I know this Gatling gun section throws you guys off a bit, so I apologize in advance. But it’s wonderful having you along. We’ve had a bunch of episodes, and we look forward to you listening in and maybe even commenting here and there, so appreciate it.

Jason: Yeah, see you on the next one.

Will: Thank you for listening to RRE POV.

Raju: You can keep up with the latest on the podcast at @RRE on Twitter—or shall I say X—

Jason: —or rre.com, and on Apple Podcasts, Spotify, Google Podcasts—

Raju: —or wherever fine podcasts are distributed. We’ll see you next time.