Fixing the Future

Nick Brown, vice-president of product at Truepic, describes how the company's technology and the standards developed by the Coalition for Content Provenance and Authenticity are fighting fakes and other forms of image tampering by securing data from the camera lens to the user's screen.

What is Fixing the Future?

Fixing the Future from IEEE Spectrum magazine is a biweekly look at the cultural, business, and environmental consequences of technological solutions to hard problems like sustainability, climate change, and the ethics and scientific challenges posed by AI. IEEE Spectrum is the flagship magazine of IEEE, the world’s largest professional organization devoted to engineering and the applied sciences.

Dina Genkina: I'm Dina Genkina for IEEE Spectrum's Fixing the Future. This episode is brought to you by IEEE Xplore, the digital library with over six million pieces of the world's best technical content. Today I'm speaking with Nick Brown, vice president of technology at Truepic. And Truepic is a startup that works to certify the authenticity of digital content in an era of deepfakes. Nick, welcome to the podcast.

Nick Brown: Thank you, Dina. It's great to be here.

Genkina: So let's start with setting the stage a little bit. So Truepic is this photo and video verification platform. Can you explain a little bit of the motivation for creating such a platform and why you thought it was important to do so now?

Brown: Yeah, absolutely. So the company was originally founded back in 2015. One of our cofounders, Craig, who's still kind of the president of our company, he saw a fake picture online, and that was sort of the impetus of the entire idea. I think, at that point, it was still very, very early on in terms of fake images and the damage that sort of fake, inauthentic images and videos can cause online. But the thing that was interesting was, he didn't attack the problem in a standard way. So we're called a photo and video verification company. He didn't do what some others might think to do, which is to try to detect something is fake or inauthentic after the fact. Instead, he flipped the problem on its head, and he tried to prove or record the authenticity from the start and carry it through into the time when it's displayed, which was a really novel new approach to this. The problem existed in 2015. It's obviously gotten exponentially worse since then. And the interesting thing is that as the company formed and more and more members joined, they all brought different perspectives or real-world experiences where this problem existed. So it's the financial and insurance industries, for sure, but there's also others that have been in sensitive areas, where war crimes are being committed and inauthentic media is being used for misinformation purposes there, or citizen journalism and the same kinds of stuff or just personal experiences on social media. It kind of comes from all different, disparate worlds. And I think that Craig, again, our cofounder, saw an example of it, and as more and more people joined, there were so many more examples that came about that just strengthened his conviction in solving this problem. And then that brilliant idea of flipping it on its head kind of— everything took off from there.

Genkina: Well, devil's advocate might say fake content has existed as long as we had Microsoft Paint and Photoshop. Why is it becoming such a big problem now? Do you think that deepfakes are really changing the game, or is it broader than that?

Brown: I think it's broader than that. It's a very good question, and it's a very understandable argument, and it's largely true. I think there are two big shifts that are happening in society and then a technological shift that's occurring right now that's totally exacerbating the problem and making it become something that really does need to be solved. It's not just that we want it to be solved; it needs to be solved. The first one is the prevalence of the internet. The internet has obviously transformed society in so many ways since its inception. But all things are being digitized at this point in time, and people are depending more and more on digital media and digital content overall. And with the advent of deepfakes, as opposed to, say, just standard editing of photos and the like and, even more so, the new version of generative AI that exists, I think that it's not just the prevalence of people's trust in digital media for day-to-day transactions that they do. So whether that's social interactions, financial interactions, or any other kind of interaction, digital media is used so much in today's day and age, but the accessibility of these generative AI solutions is really what's transforming people's ability to trust the things that they use for day-to-day transactions. So deepfakes are a problem, they're still a problem, and they're always going to be a problem, but they still require some semblance of sophistication to create quality versions that can deceive people. Generative AI, you can type in text and create anything you want, whether that's photos, videos, code, marketing language, a book. Anything can sort of be created with this generative AI. And it's being open-sourced, which means that it's accessible. And there's certainly some really creative, cool things that are going to come from generative AI, but there's also a lot of damage that's going to be caused because it's so accessible. And so I think that's the change.
People are depending upon digital media, and they can't actually trust that digital media is real anymore, which wreaks havoc.

Genkina: Great. That was a really good summary. So maybe we can talk about the solution or at least one of the solutions that Truepic is working on.

Brown: Yeah, absolutely. And you said it right. One of the solutions— I think that the problem space or potential threat hits a lot of different walks of life and the internet as a whole, and Truepic is a piece of this puzzle. So Truepic and C2PA, I think, in general, this concept I'll talk about, are sort of a piece of this puzzle. So the technical solution that we've adopted, or helped bring forth, is this concept of C2PA, the Coalition for Content Provenance and Authenticity. And technically speaking, that is a group of individuals and organizations that are trying to develop a technical standard for certifying the source and history, or provenance, of media content. Now, specifically, that applies to photos and videos to begin with. But the applicability of the concept applies to all sorts of media, specifically the ISO Base Media File Format, which is a pretty pervasive family of media formats. And the spec is going to be applicable to a bunch of different types of media in upcoming versions of C2PA.
And talking a little bit more about what that definition means, what it means to have technical standards that certify the source or history of media: it is a recommendation, or specification, for how to take specific information and cryptographically secure it into a file itself using PKI, or public key infrastructure. And the novel aspect about this is that from the point in time when that item is cryptographically secured into that file, it becomes what's called "tamper evident." So each individual piece of data that's written into the file, as well as the entire file itself, each has a specific hash. If any aspect of that piece of data or the file itself changes, it can be seen that it's been edited in a noncompliant way. And because it's a specification that applies across the board, which I can talk about a little bit more later, it means that each step of the way, anything that's purposely happening to that file, whether that's creation or specific edits, making a photo look better or even artistic purposes, as well as anything that has to happen to make it display faster, say, downsizing a file so it can get from point A to point Z faster, all of those things can happen in a compliant way to trace the provenance from creation to display perfectly. So it's a really interesting specification, and it's the reason why we helped create it but also are completely adopting it. Because we feel like it's the best version of an interoperable solution to this problem across the internet.
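The tamper-evident idea Brown describes can be sketched in a few lines of standard-library Python. This is purely illustrative, not the actual C2PA format: real manifests use X.509 certificate chains and COSE asymmetric signatures rather than the HMAC stand-in here, and the field names (`asset_hash`, `assertions`) are invented for the example.

```python
import hashlib
import hmac
import json

def make_manifest(asset_bytes, assertions, signing_key):
    """Build a toy provenance manifest: hash the asset and each assertion,
    then sign the whole manifest. (Real C2PA uses COSE/X.509, not HMAC.)"""
    entries = {name: hashlib.sha256(value.encode()).hexdigest()
               for name, value in assertions.items()}
    manifest = {
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
        "assertions": entries,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(signing_key, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify(asset_bytes, manifest, signing_key):
    """True only if the asset still matches its recorded hash and the
    manifest signature is intact -- this is the tamper evidence."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and body["asset_hash"] == hashlib.sha256(asset_bytes).hexdigest())

key = b"device-secret"
photo = b"\xff\xd8...jpeg bytes..."
m = make_manifest(photo, {"capture_time": "2023-06-01T12:00Z"}, key)
assert verify(photo, m, key)             # untouched file verifies
assert not verify(photo + b"x", m, key)  # any change to the bytes is evident
```

The key property is the last line: a single changed byte anywhere in the asset, or in any recorded assertion, breaks verification, which is what "tamper evident" means here.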

Genkina: So maybe we can explain this a little bit with an example just to make it more digestible. Let's say you want to certify the authenticity of a picture of your dog because you want to prove that your dog is the cutest one. So how would that process happen when you take a picture with a camera?

Brown: Yeah, great question. So I'll talk about Truepic's solution in particular but then also talk about how C2PA is implemented within that and how others within the C2PA ecosystem can kind of latch onto that or also play a part in that. So the system of C2PA is sort of meant to be a glass-to-glass implementation. So from the glass of, say, a camera lens to the glass of a screen that somebody is viewing that digital content on, the originality or authenticity and the provenance trail throughout is all recorded, and then therefore, you can present it back to that consumer or viewer on the other end. So if I'm taking a picture of my dog, our version of something has four distinct components. We capture something, which is actually a secure version of capture. I'll come back to that in a second. We then sign the information that we've captured into that file or secured into that file. We analyze it after the fact to make sure it is still in its original state and it's an original picture and not a rebroadcast, as it's called, or a picture of a preexisting picture, which is shockingly common, by the way. And then we display it using the C2PA standard that verifies that things are still intact and the provenance trail can be displayed along the way.

So if I'm taking a picture of my dog, I would use our software or our SDK within an application or our app that has the SDK built into it. It's just like any other standard camera app that you would use. It just has this secure capture mechanism in the background that doesn't just make sure that it is an original picture, but it also makes sure that the time, date, location, all the other metadata that comes along for the ride, is in fact verified, authenticated information. Once it's been captured, we cryptographically secure it into the file. It's a JPEG, so it's also a standard file that other software can also recognize. And then let's say, because it's my dog, maybe I captured red-eye when I took the picture because I had the flash on. So I could go into Photoshop, and because this C2PA system is interoperable, it can be imported into Photoshop, and I can remove the red-eye all in a compliant fashion, and it will say, "This was originally captured with Truepic. Here's the original information. And it was edited with Photoshop, and what they did was red-eye reduction." And then it can go on to display. And again, we have our own display mechanisms, or it can be displayed elsewhere. And because that C2PA information can be read and verified in real time using standard SSL schemes, all of that information can be displayed back to, let's say, the Instagram users that are seeing a picture of the cutest dog in the world, which is my dog.
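The capture-then-edit chain Brown walks through can be sketched as a linked list of claims, where each step (capture, red-eye reduction, resizing) records which tool did what and hashes back to the previous claim. The tool names and fields below are hypothetical; real C2PA claims are signed CBOR structures, not bare hashes.

```python
import hashlib

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def append_claim(chain, tool, action, new_bytes):
    """Append a claim recording the tool, the action, and the hash of the
    resulting bytes, linked to the hash of the previous claim."""
    prev = chain[-1]["claim_hash"] if chain else None
    claim = {"tool": tool, "action": action,
             "result_hash": content_hash(new_bytes), "prev": prev}
    claim["claim_hash"] = content_hash(repr(sorted(claim.items())).encode())
    chain.append(claim)
    return chain

chain = []
original = b"raw dog photo"
append_claim(chain, "SecureCaptureApp", "captured", original)
edited = b"raw dog photo, red-eye removed"
append_claim(chain, "PhotoEditor", "red-eye reduction", edited)
# A viewer walks the chain backward: each claim names its tool and action
# and links to the previous claim, giving the capture-to-display trail.
```

This is the "provenance trail" in miniature: a display surface can show the viewer "captured with X, then edited with Y (red-eye reduction)" by reading the chain, and any link whose hashes don't match marks the point where a noncompliant change happened.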

Genkina: Okay. And so where does this stand in terms of integration with apps right now? So can anyone kind of buy the software and use it on their device? And then can you also use it on Photoshop, etc., like you mentioned, or is that sort of a work in progress?

Brown: Great question. So the concept as a whole technically applies to any type of device, as I said, that glass-to-glass: any type of glass that's recording and any type of glass that's viewing and all the different tools in between. And we're again deploying tools for all of those. The specific verified capture mechanism that we have, this secure capture that carries all the way through signing and display and the like, that is an SDK and a corresponding API that Truepic has, and that is a commercially available product that anybody can use. There's also a signing component of that that others have used with their own capture mechanisms as well. So it's kind of like the infrastructure of what C2PA requires to be implemented. We provide tools for all of those, one of those being a Truepic SDK or capture mechanism that allows you to take trustworthy photos from the start.

Genkina: I don't know what an SDK is.

Brown: Software development kit. No problem. So think of it as a library or a prewritten piece of code, a camera, that somebody can put into their own mobile application. So we have our mobile application that has it injected into it, but others can do the exact same thing, and other companies are doing the same thing. So if you see a camera inside of their app, it would actually be the Truepic camera running in the background, even if you don't know it's the Truepic camera.

Genkina: Gotcha. Gotcha. Cool. So maybe we can get into some of the applications and kind of areas where you think this is useful or areas where this is already being used. So I know, for a lot of people, the first thing they think about with deepfakes and things like that is misinformation in the political sphere. Does Truepic work with any clients like that? Is there any impact in that sphere that you can see?

Brown: Yeah, absolutely. I think it's interesting that the positive impact of the tech as a whole, what Truepic actually deploys, and then the concept of C2PA at large, reaches all walks of life, so consumers, whether it's in the general consumer space or the political space or otherwise, and then the business space as well. And we certainly apply to both of those different types of use cases. A lot of our use cases have started in the business world. To call a spade a spade, misinformation has a huge financial impact on businesses when it's used incorrectly or used for harm. And so businesses are always trying to digitally transform their processes, move from physical or manual processes to digital or virtual processes. That always introduces some risk, and so there's real value in having a disinterested party, one that doesn't stand to gain financially from misinformation. And so our technology can be the thing that allows them to digitally transform and have a more efficient business, a more cost-effective business, something that their consumers like more and more, but also something that's trustworthy, just as trustworthy as sending somebody out in person to do so. But that also applies to the consumer sphere, whether that's for citizen journalism, which we work with quite heavily, even into the NGO space, and for, say, international monitoring and evaluation, especially in sensitive or high-risk or threat areas. We kind of operate in all sorts of different use cases around there because digital media is just so heavily depended upon across all walks of life right now. Within the political space, there are some emerging use cases that we're working with, I'll say. I can't talk about them in too much detail, but it's definitely something that, especially as we come up on the 2024 elections, people are thinking about how to prevent bad situations from happening.

Genkina: Can you explain what you mean by citizen journalism and how this applies to that?

Brown: Yeah, absolutely. So citizen journalism, as opposed to, say, traditional media or traditional journalism, there's a lot of times where, say, the traditional media companies would want to get photos that were captured by others, or they deploy solutions to capture specific photos with certain people on the ground. We deploy tools for both of those purposes. So the latter is the most specific one where there are specific people that have our camera technology to go into sensitive areas to capture certain types of photos that are used for media purposes.

Genkina: Are there any other cases where this might become important?

Brown: Yeah, it's a great question. And there is some definite nuance because C2PA sort of allows for both, and even Truepic operates in both scenarios. And it really is use-case specific. So our core capture technology does not require an association to a person, but there are certain business use cases, say, the insurance industry, where an association to a person is actually a specific goal that you want to be able to apply. So that specification allows for both use cases. So when that information is cryptographically secured, it comes with a producer. And that producer can either be a named individual or it can be software. And so with our core implementation, when I said that disinterested party before, what we really try to make sure is it's our software that's actually capturing and writing this information into this file, and therefore, it's objectively trustworthy information. It is authenticated via software, not subjectively written in by a human. I'm not saying this is captured where I live. The software is verifying that it's captured in a certain place, then I can say, "Yes, that's also where I live." And so both of those use cases are necessary, and C2PA supports both of them. And Truepic, as well, has use cases where each one of those is required. Those high-risk, sensitive areas are definitely the ones where linkability to a person is not something that you want to be able to do. The picture should be able to live on with the trustworthy data on its own to make certain decisions, say, whether this should be a part of media or not, but a linkability to that person actually becomes a risk.

Genkina: Cool. Okay, so maybe we can talk about the maybe more mundane-sounding use cases as well. So when you say insurance, how would this be used in insurance?

Brown: Yeah. So it's a space that we work in quite a bit. There's a ton of different business applications. As I said earlier, all businesses are trying to digitally transform these days. So anything that used to be a manual process, whether that's sending somebody on the ground to inspect a site and make sure it's in the condition that you expect it to be, or asking someone to drive a car into a shop to assess the damage for an insurance claim, or sending a consumer electronic in to assess damage for a warranty claim, or all sorts of different versions of, like I said, international monitoring and evaluation, each of these different versions are times where you used to do physical processes that, now, all businesses are trying to digitally transform and turn those physical processes into virtual processes. It creates better efficiency for businesses. It reduces costs for businesses. It also is just what consumers expect in today's day and age. It's an on-demand technology or on-demand society. Everybody wants things done very rapidly and not have to do things like drive their car into the shop. And so what we allow for is, we have a platform that is essentially a workflow, a customizable workflow, that allows you to collect all sorts of different information for those business purposes. But at the core of that is our technology. And so, although we're providing this really efficient workflow to get the information you need, if you can't trust that information, that digital transformation effort is essentially all for naught because every bit of cost reduction that you have is going to come back in terms of fraud increase. And so by having our technology at the core, it means that the information you're gathering is objectively trustworthy, and then the business workflow, the transition from physical to digital, can actually happen because you can trust it.

Genkina: So instead of driving your car into the shop, you can kind of take a few pictures and have an estimate that way?

Brown: Yeah, that's exactly right. So we have top-10 insurance carriers that are using us for underwriting. So whether it's a car or a home, they're taking pictures to assess the condition before a policy is underwritten. We have Equifax, which uses us for business credentialing purposes, to verify that a business is set up with the right version of security implementations and the right signage and the like to get access to that credential data. And then there are peer-to-peer or digital marketplaces that are looking to verify the possession and condition of something before it's rented, sold, listed, whatever it is. All sorts of those use cases are places that deploy our technology.

Genkina: Is there any other example use cases that you're particularly excited about that we haven't mentioned?

Brown: So I think a lot of what we are used for is definitely exciting, and they're super valid and valuable use cases. But I think the advent of C2PA as an internet-changing technology and, really, the collaboration that that requires to bring forth— Truepic, we have the end-to-end solution for certain things, but we can't be everything to everyone. There's no way that happens. But C2PA, or authenticated media and the provenance trail required, is something that has to be pervasive. It has to be scaled for it to be as successful as possible. And I think that that collaboration, working with other companies to bring forth a solution that can actually impact society and effect the change that we're looking for, is kind of a once-in-a-lifetime, really exciting version of an opportunity. It's pretty cool.

Genkina: So what do you think is the biggest challenge to overcome right now before this more widespread adoption can happen? Is there technological questions left, or is it mostly cooperation?

Brown: It's a really good question. I think there are some technological questions left, but it is largely based on the adoption of the technology as a whole for it to work. So first and foremost, actually, is education. This is a new concept, and it needs to be a consumer-facing version of transparency. Consumers need to understand what it is, be able to digest what it is so they can make their informed decisions on the information they're seeing, and that requires education. And that's always a hard process. That always is a hurdle that companies have to overcome when you're looking at something very, very new. That also touches something that everybody already understands. People already look at pictures and videos on a day-to-day basis. And so to change the way you're looking at pictures and videos is a pretty big hurdle, and the education required is definitely something that we have to consider. But that adoption piece— so going back to when I said that glass-to-glass concept, images and videos and digital content, overall, is touched many, many times by different pieces of software from the point of capture or creation through the point of display. And each step of the way, if it's not doing it in a way that preserves the information or compliantly rerecords information, it gets lost. And so that technological hurdle, which is really an adoption hurdle, is that all the software needs to adapt to it, account for it, and then preserve that information moving forward. That's definitely the biggest hurdle outside of education, I would say.

Genkina: Okay. Is there anything important that you think you want to say that I haven't asked you?

Brown: Yeah, I think the one thing I probably didn't hit upon— because we're Truepic, it's often thought that all we're trying to do is create this concept of Truepics, and that you should always know what a Truepic is. But really, authenticity as a whole is a pretty broad topic as it pertains to photos and videos. Understanding that something was generated by AI is just as necessary as understanding that something was captured by a verifying camera at the point of capture. Consumers are going to know that and need to know that. And when we talk about— the smartphone adoption rate is like 85% across the world right now, and creation is largely moving from desktop and server to mobile overall because so many people have access to this. And that creation is always going to occur. And so knowing that something is synthetic is just as important as knowing that something's not synthetic. And the concept of C2PA, and what Truepic is trying to bring forth as well with our tools, is the ability to distinguish between those. Not simply, if it's not a Truepic, it doesn't matter, but really, if something is generated by AI, it's cool, but know that it's generated by AI, just like know that something is generated by Truepic's authenticating camera. Both are critical for the solution at scale, and we want to make sure we're part of both of those because it's critical to everybody's safety and success.

Genkina: Are you optimistic overall that society will adapt to this new media landscape?

Brown: I am. I think I am optimistic. It's definitely hard. The education and the adoption are hurdles that have to be overcome. I think that the creativity that gen AI is going to allow for is going to be something that people adopt very quickly, and it's also going to open up risks very quickly. And people are going to want a solution that preserves the creativity but provides the authenticity at the same time. And so I am optimistic that people are going to want that education and are going to adopt, because it's needed to be able to tell the difference between those two and have fun but be safe at the same time.

Genkina: All right. Thank you very much. Today on Fixing the Future, we were talking with Nick Brown about his company Truepic and how they authenticate digital content. I'm Dina Genkina for IEEE Spectrum, and I hope you'll join us next time.