What if we could? A podcast exploring this question that drives us.

"What If We Could" Podcast Episode: Navigating the Ethical Maze of AI with David, Calvin, and Kevin

Join David DeVore, Calvin, and Kevin on this insightful journey through the ethical dilemmas and trust issues posed by artificial intelligence in our modern world. In this episode of "What If We Could," the trio dives deep into how AI challenges our norms, focusing on the necessity for ethics and trust in technology. With Bob deep into coding, David, Calvin, and Kevin explore various thought-provoking topics, from watermarks in AI-generated content to deep fakes and the human capacity for forgiveness.

Watermarks and AI-Generated Content
Discover OpenAI and Meta's recent initiatives to embed invisible watermarks in DALL-E 3 outputs and AI-generated content, aiming to ensure authenticity and trust. The discussion sheds light on the effectiveness of these watermarks, the challenges in maintaining content integrity, and the ongoing battle against digital deception.

The Deep Fake Dilemma
The conversation then shifts to the alarming capabilities of deep fake technology, exemplified by scammers who managed to defraud a CFO of $25 million through a Zoom call using deep fake representations of his colleagues. This segment explores the implications for security, the potential for misuse, and the urgent need for safeguards in our digital interactions.

Human Forgiveness and AI
Delve into the complex relationship between humans and robots, particularly how we perceive and forgive their errors. Drawing on examples from autonomous driving and its potential to save lives, the speakers discuss the psychological barriers to accepting AI failures and the societal impact of shifting blame from humans to machines.

Ethics, Trust, and the Future of AI
Wrapping up, the episode addresses the broader ethical considerations of AI's role in society. The trio contemplates the balance between leveraging AI for its immense benefits while navigating the ethical pitfalls that accompany its advancement. They encourage listeners to consider how personal and collective actions can shape a future where technology enhances human life without compromising our values.

Tune into this episode of "What If We Could" for a deep dive into the ethical frontiers of AI, exploring how we can foster trust in technology while safeguarding our societal values. Subscribe to the MeshMesh AI-powered newsletter at meshmesh.io for daily updates on fascinating tech developments.

What is "What If We Could"?

"What If We Could?" is a podcast exploring the question that drives us. We explore the practical application of artificial intelligence, product design, blockchain & AR/VR, and tech alpha in the service of humans.

David DeVore (00:02)
Hello, hello. What's going on? This is David DeVore with the What If We Could show, where every week we are asking that burning question in regards to artificial intelligence, tech alpha, and the new things on the scene. With me today I have Calvin and Kevin; our other partner, Bob, is deep into the code. So you've got us, and we're thrilled to be here.

And we've got a list of really interesting topics, all around ethics and trust and the dilemma that artificial intelligence poses for us humans: how we operate in this new world, what is appropriate and what's not, and how we build trust in machines.

So we're going to talk about watermarks and content. We're going to talk about deep fakes and digital deception. We're also going to talk about human forgiveness, and how it's a little bit easier to forgive humans than it is to forgive robots. So let's dive in. The first thing out this week that was interesting was that OpenAI

basically introduced watermarks on DALL-E 3 outputs to ensure a mark of provenance and trustworthiness on content that was coming out. At the same time, Meta did something similar to enhance transparency: a new policy around detection tools and labeling content that had been

generated with AI. And so it's pretty interesting. I think part of the thing to keep in mind here is the way that the watermarks work: this is not a watermark that you can see. These are literally pixels within pixels that a machine sees but the human eye doesn't pick up. So it's not like, hey, here's your logo on a video,

you know, here's the Meta logo on a video. It's like, hey, it's just an image of a beach, but somewhere in there are some pixels that tell the machines that this thing is special. And those pixels hold a cipher for a bunch of information about where the image or the video originated and so forth. One of the interesting things is that it's not infallible, right? Somebody could come along,

pick up one of these images, run it through another AI to clean out the watermarks, and now the watermarks are gone. So it's really interesting, because there's no perfect solution for how we actually take content, give it a source of provenance, and say: this is the thing.

And while watermarks are a start, it feels like a band-aid on an underlying problem, which is that no content is safe anymore.
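
(A toy illustration of the "pixels within pixels" idea David describes: hiding a provenance tag in the least significant bits of an image. This is a hypothetical sketch, not how OpenAI or Meta actually implement their marks; real detection watermarks are designed to survive edits, while this fragile version illustrates exactly the weakness discussed next, since a screenshot or re-encode wipes it out.)

```python
# Hypothetical LSB (least-significant-bit) watermark sketch using Pillow.
from PIL import Image

TAG = "origin:dalle3;ts:2024-02-14"  # made-up provenance payload

def embed(src_path: str, out_path: str, tag: str = TAG) -> None:
    """Hide `tag` in the lowest bit of the red channel, pixel by pixel."""
    img = Image.open(src_path).convert("RGB")
    bits = "".join(f"{b:08b}" for b in tag.encode()) + "0" * 8  # NUL terminator
    pixels = img.load()
    w, h = img.size
    i = 0
    for y in range(h):
        for x in range(w):
            if i >= len(bits):
                img.save(out_path, format="PNG")  # lossless, or the bits die
                return
            r, g, b = pixels[x, y]
            # overwrite the red channel's lowest bit with one payload bit
            pixels[x, y] = ((r & ~1) | int(bits[i]), g, b)
            i += 1
    img.save(out_path, format="PNG")

def extract(path: str) -> str:
    """Read LSBs back out until the NUL terminator."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    w, h = img.size
    out, byte, nbits = bytearray(), 0, 0
    for i in range(w * h):
        byte = (byte << 1) | (pixels[i % w, i // w][0] & 1)
        nbits += 1
        if nbits == 8:
            if byte == 0:
                break
            out.append(byte)
            byte, nbits = 0, 0
    return out.decode(errors="replace")
```

A single screenshot, resize, or JPEG re-save rewrites those low-order bits, which is exactly the cat-and-mouse problem the hosts get into next.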

Calvin (03:37)
Yeah, it's fascinating. It almost reminds me of when NFTs came around and the "yeah, I can just screenshot it" debate started, right? Where you're like, what is the provenance worth if I can just screenshot it? And OpenAI has the exact same problem, right? We don't know the details yet, but apparently you could just screenshot it and the watermark would be gone. And there was a lot of discussion

David DeVore (03:48)
Oh yeah.

Right click save as. Yeah.

Calvin (04:03)
about whether bringing these watermarks into AI images would actually make them less trustworthy. There was an interesting opinion article I stumbled upon, because basically, where do you land now? You think you can rely on knowing whether it's AI-generated, but you really don't, right? Because as soon as it's a screenshot, you don't. And so now you're in this new web where you have this false trust of thinking you know you can trust it, when you actually cannot.

David DeVore (04:23)
You stood out.

Calvin (04:34)
And so I think it's a really thin line when you give the perception of provenance without actually achieving true provenance. Is that better or worse for public trust in AI-generated images? It's a really interesting, complex topic.

Kevin Nuest (04:53)
Can I pose the question of why this is such a big issue now? I'm playing along here, right. Fake images have been around for hundreds of years. Why all of a sudden are we concerned about how we ensure, when we look at an image, that it is truly the real image and that it has been authenticated?

We've used our human eyes and judgment to determine that to date. I'm asking the question; the obvious answer here is that content generation is about to see a hundred-, a thousand-, a ten-thousand-fold increase in image content getting ready to be in front of everyone's eyeballs, while the level of effort required to make something that didn't exist before is dropping to the floor. Right? So,

to make Ronald Reagan riding on a T-Rex took some Photoshop skill at some point. Now it takes saying that sentence. So that's the why. But why are we trying to use visual methods to verify visual content, just like we have been for a hundred, two hundred years? Why are we still using those same methods? That's what I'm trying to ask, right? A watermark is still just a visual method

David DeVore (06:16)
Mm.

Kevin Nuest (06:20)
to do the same thing we've been trying to do for a hundred years: say whether an image is real or not. Why not a different way?

David DeVore (06:28)
Yeah, it's interesting. I mean, the FAANGs, right? Facebook, Amazon, Google, they're under a lot of pressure. They're under all of this cultural pressure, which is also political pressure, to basically protect us, right? Like, you know, Mark,

protect me, right? And so consequently, even if it's not a solution, they at least have to come out and feign that they have a solution. Even if they know it only gets us so far, they at least have to have the PR machine on, showing they're on top of it. Because, you know,

the last thing they want to do is end up in front of some Senate subcommittee: yeah, your platform interrupted an election by allowing this content to exist and proliferate on it. Right?

Calvin (07:35)
Yeah, again. It wouldn't be the first time.

Kevin Nuest (07:35)
Again, again, again, again.

Calvin (07:47)
Mm-hmm.

David DeVore (07:56)
So that's one side of it: okay, well, fake news. And then the other side of it is really more along the lines of IP, right? Like, oh, you're using Morgan Freeman's voice without paying him.

Or you're using Joe Biden to turn people off from showing up at the polls, right? And they're related, at the end of the day. It's interesting; I saw this thread on LinkedIn where they were talking specifically about voice. They were saying, well, hold on. As an employee,

let's say you're in customer success and the business decides to produce an avatar of you to answer customer questions in your voice. Do they own that forever, no matter what they want to do with it, without paying you, right?

Calvin (09:06)
Mm-hmm.

Kevin Nuest (09:14)
Yeah.

David DeVore (09:15)
So it's interesting: do they own your synthetic avatar forever, without paying you? This is the same argument that was going on in Hollywood with the strike, right? These are big, complex ethical questions. And...

Calvin (09:40)
Yeah, it's almost like we're evading the answers to the more complex questions by using, like you said, a watermark solution that effectively doesn't do much about the actual problem. If we're looking at IP rights or anything else, we've always enforced those through lawyers, legal rights, contracts, every way other than the actual tech. The tech has done almost zero to prevent any of that.

David DeVore (09:52)
for sure.

Calvin (10:07)
Whereas there are some real issues, as in the web is getting flooded by that content, and there is a need to figure out how to handle it. But is labeling the right solution here?

David DeVore (10:20)
Yeah, it's interesting too, because in that LinkedIn thread every solution was a legal solution, right? Like, oh, call in the lawyers. Oh, we need to make a law.

You know, I'm skeptical that it can hold, right? It's sort of like the CAN-SPAM Act. We've had the CAN-SPAM Act for what, like 20 years now? The CAN-SPAM Act says, all right, you've got to have permission, you can't spam people, but,

you know, if you're a big enough target, you're going to get in trouble. In the meantime, everybody's spamming everybody, right? It doesn't... it's not...

Kevin Nuest (11:16)
I checked my inbox today looking for a document, and I was like, oh wow, here's all this spam I didn't even know I was getting.

David DeVore (11:22)
Right, you said it. At the end of the day, the tech is actually the better solution. Spam filters do a way better job of keeping spam out of Gmail than the CAN-SPAM Act does, right?

Calvin (11:23)
It's just the spam filters have gotten a lot better. Thank you, AI.

So maybe it makes sense to step back for a second on watermarking content. If it is AI labeling AI, it might still be smart. I think we underestimate the amount of data it would need access to, but in terms of a technical solution it's probably still the more feasible one:

auto-labeling, right? Basically, AI labeling its own output as generated. We saw this a lot with GPT checkers; the plagiarism checker is one of the biggest ones out there, and it was at least fairly successful. Of course you get into the situation where someone says, fine, I'm going to run it through a rewriter, and then through another rewriter, putting more and more steps in front. But the thing is, how many, let's say,

bad actors do that on a day-to-day basis? Are you still going to reduce, let's say, 80% of it, and is that a good first step? Or do you guys think that's actually not worth it, and we should look at completely different ways?

David DeVore (12:57)
I would love to have this discussion with some of the folks on the watermarking initiatives, because in my mind, I don't understand how it holds. Okay, you're going to watermark an image. An AI can come along and remove that watermark. In addition, another AI could come along and potentially figure out how to spoof that watermark.

So I'm not sure that I understand the solution as it stands. I also don't understand...

And we've talked about this with blockchain. It feels to me like the better solution is a lookup, right? The better solution is a source of provenance, as opposed to trying to track an image all over the internet with a watermark. But I don't know; there are bigger brains than mine working on it.

Calvin (14:03)
Sorry to interject there, but provenance is a very interesting point. We've talked a lot about blockchain and data integrity, data authenticity through blockchain, content authenticity and integrity. A lot of these topics around provenance on blockchain for content are being discussed right now. The challenge here is a very interesting one. It was a similar discussion back when there was a lot of talk about

supply chain automation, I don't know if you remember that, in the ICO days, when there were a lot of solutions out there to bring supply chain mechanisms on chain, right? There was a lot of talk about how we could track something from beginning to end across the entire supply chain. And where it always broke down is that, in the end, you still have a human putting a tag on a thing. A human can be bought; a human is fallible. That's just the way this works.

Kevin Nuest (14:42)
So many.

Calvin (14:56)
So now I think about provenance on chain, and I would love this concept to work, but the challenge is exactly the human part, right? Yes, it can be my address putting that piece of content on chain, but I can also take someone else's piece of content and put it there. I can slightly adjust it and change it. Actually putting this into practice would still probably go through a legal route. So I'm really curious how you would solve that, and I'm happy to have more conversations and be enlightened about it.

But I have trouble seeing how you can really get the human aspect fully out of it.

David DeVore (15:27)
Yeah, I really wanna...

I want to get Melody Hildebrandt from Fox on here, and we're going to unpack this and figure it out, because there is something to it. And you're 100% correct. At the same time, there are tools, DAMs and otherwise, that brands use for organizing the myriad of media and content that they are

Calvin (15:38)
Yeah, let's unpack this. Yeah.

David DeVore (16:01)
producing.

So yeah, it's a fascinating subject. We're gonna come back to it for sure.

Calvin (16:06)
Yeah, is it a technical solution, or is it another entity that actually needs to do the verifying, which sounds very cost-inefficient, like a validating certification body? There's a lot of interesting talk about it. But yeah, absolutely, let's get the discussion going. Speaking of... go ahead.

Kevin Nuest (16:27)
No, just to round that out: the right solution to the problem depends on the cost and the opportunity cost at the end of the day, right? So a certifying body with a $50 application fee may be really great value if you don't have to go through a $100,000 court case later.

Calvin (16:35)
Absolutely.

Yeah. And if you can get that down to like three bucks on chain, with a gas transaction as the foundation, that's probably even cheaper. So if we can find a way, that would be awesome. Speaking of the whole fakes-and-deception topic and faking content: I think one thing that was really interesting this week was the case where an AI stole, or, well, we should say bad actors using AI stole, $25 million from the CFO of a company, a Hong Kong-based company. And basically,

David DeVore (16:48)
Mm.

Calvin (17:17)
the scammers employed deepfake AI on a Zoom call. It's basically a case study in how dangerous the potential of AI is. How this unfolded, at least from what we know today, was that the CFO got an email from his co-CFO, which in and of itself he found suspicious, so weird. But there was a link to a Zoom call in the email, and so he clicked on it

and joined the Zoom call. The email asked him to do a transaction, or multiple transactions, that rounded out to 200 million Hong Kong dollars, about $25 million US. He needed to do a series of transactions. He obviously doubted it, clicked on the link, and landed on the Zoom call, where he had a lot of his colleagues talking about the transaction

that he was supposed to make, discussing the topic. So he was like, oh well, I was obviously doubtful of that phishing attempt, but now I have my colleagues here in a Zoom call, so I'm going to trust this. And he did all of those transactions. Turns out all of it was fake: the email was spoofed, and the colleagues on the call were deepfake AI versions of his colleagues talking about it. And the fascinating thing is,

I was working for a security startup for a couple of years, and we discussed phishing to the max, right? Because phishing is such a fascinating problem that we still haven't solved. As much as we've improved our defenses against phishing, the phishing attempts just get a lot more complex. And the most interesting thing is, if your target is valuable enough, attackers will go a long way to get all of that data. In the earlier days, there was social engineering:

you would do the classical thing of actually going through the bins, trying to find information about the people you're targeting. Now you have AI, and you can have it all fabricated for you. So now, if you find a snippet or an interview of someone's colleagues somewhere online, you can spoof their voices and regenerate their faces. Like us here on a podcast, right? We could totally be deepfaked and put together in a call, and someone could spend five minutes in a call with "us" and be convinced to send money.

David DeVore (19:36)
We need a password. We need.

Calvin (19:36)
That's wild.

Kevin Nuest (19:40)
We do. Don't say your safe word on this podcast, Dave, but yeah, let's talk afterwards. Get that safe word set up.

Calvin (19:45)
Yeah.

Yeah, everybody listening, this is your moment in time to get a safe word with your family.

David DeVore (19:50)
Yeah.

Yeah, get a safe word. I've actually been getting ready to do that, for sure. Because, I mean, $25 million is sort of the extreme scenario. We've talked about the scenario where a granddaughter calls grandma: hey, I'm stuck, I need a credit card. Can you give me a card so I can get my car out of

Kevin Nuest (19:56)
It is.

David DeVore (20:24)
the shop, or whatever, right? And it's like...

All of a sudden, what we see and what we trust, even on the most basic level, is going to be called into question, right? I would never have imagined that my mom could receive a call from me asking for money and it's not me. I'd never imagined that until the past couple of years. It's like, wow,

Calvin (20:47)
100%.

Kevin Nuest (20:47)
Yeah.

David DeVore (21:00)
that's incredibly scary. And what's interesting is that they deepfaked all of his colleagues.

Kevin Nuest (21:04)
Great.

Yeah, that's the extra mile right there. The scam of asking the CFO to pay the invoice is like the classic scam in business, right? That's the scam that results in billions of dollars lost every year, and it's the most simple thing: the CFO or an admin gets a "hey, pay this invoice," right? $25 million is shooting big. $25 million is worth setting up,

paying for ten Zoom accounts and training ten AI co-workers to chatter on a script that you wrote with GPT about the deal. And then the human in the middle here, the CFO, goes: oh well, I must have missed a message somewhere about this, but I don't want to look dumb and ask the basic questions, and it seems to be a real thing, and I don't want to blow it up, so I'd better get this done, right? I don't want to look like I'm not doing my job. And, you know,

Calvin (21:59)
Mm -hmm.

Kevin Nuest (22:08)
Human rights check.

Calvin (22:12)
Yeah, and that's what I mean about phishing attempts: in the end, it always boils down to the human aspect. You can make the technology as good as you want. The question is, now looking at this new era of AI, how would you have prevented that scenario? Looking through a security lens, or through employee training, which would be,

David DeVore (22:12)
So -

Calvin (22:38)
I guess, the classic answer here, right? But how would you train for that? I mean, it's a trained role, it's a CFO; it's certainly not the first time they've seen a scam, like you said, Kevin. How should companies act?

Kevin Nuest (22:50)
So, thinking through this here on the fly, one flavor of this: when I get an email from a service provider, let's say a bank, a financial institution, I advise my wife in a very similar manner. Great, so you've got a prompt coming at you from a financial institution. It may or may not be real, so approach it with some skepticism. But if it is real, go directly to the website

of said financial institution, log in there, and then move forward with whatever the thing is, the alert, the transaction, whatever it is. Don't trust a link in an email, for example. I'm calling out a basic phishing attack here, right? So what do you do when your visuals and your voice can be faked like that? What's the equivalent of "just go straight to the website"? Is it pulling people into encrypted messaging services that wouldn't

be able to be faked, where it would be as hard as needing possession of a person's device? Now you're back to a wrench attack, physically compromising someone, not a basic phishing attack. And so for anything that's in question or above a certain financial value, you say: yeah, great, hit me up on, insert the encrypted messaging service you like to use here, and once I get that, then I'll totally clear this. And now that's standard operating procedure.

And everything else is considered compromised, right? By default.

Calvin (24:20)
Yeah, 100%. And it's interesting, because if we talk about the technical solution, I think this would also fall on platforms like Zoom. Some of the tools we use for securely exchanging secrets, like Keybase and others, basically give you a standard we've had for decades: a public-private key infrastructure you can work with, right? So if Zoom started employing that, where users had a phone with their private key on it,

David DeVore (24:20)
Mm.

Calvin (24:47)
they could sign when they join the call. At that point, Zoom displays a little lock icon or secure icon on the call saying this is actually the person they claim to be. That would go a long way. We have our devices, we have our phones; they can hold our private keys very well, and so can my laptop or any other machine. As soon as you have a public key infrastructure to do that, it goes a long way.

David DeVore (24:56)
Mmm.

Calvin (25:14)
And that's actually part of the beauty of it. Think about how blockchain works, as an example: public-private key infrastructure is all it is, in essence. We've had signing and all of these things for ages; it just hasn't gotten into the mainstream yet. My assumption would be that we're hopefully going to see a lot more PKI in practice and in consumer tech, because I think it's just necessary for authenticity and integrity.

David DeVore (25:21)
Mm.

Kevin Nuest (25:40)
Yes, yes, yes. And I think it needs to go one step further, to an allow-list version of that. So, you know, Calvin comes onto the call and signs, but it's Calvin with an I instead of an L, and I just look right past it. You are who you are, but you're not who I think you are. So you still need that allow list: OK, I'm expecting one of these hundred people within my organization, and if they sign on, I'm good.

So maybe it's the same thing, one step further: yes, this is who you think it is, and this is certified on their end. Just a little bit further there. And that's where I went the other direction: hey, DM me on the thing that only both of us know about, and have a one-to-one communication there, so I know I'm talking to who I think I'm talking to, right? I could get spoofed and faked the very first time, but once I

Calvin (26:20)
Yeah.

Kevin Nuest (26:38)
have established a trusted relationship with Calvin's public-private key pair, like you're talking about, then I know anything that comes from it is trusted communication, right?

Calvin (26:46)
Yeah, that's how we'd do secure communication regardless, right? That's how we do it in all of our communications today. I have my public and my private key; we each exchange signed messages, so I know a message comes from you, I sign my message back, and it goes to you. Now we have a secure channel: I know you actually sent me that message, and you know I actually sent the message back, so now we can talk to each other. But the thing is, that's the tech under it, right? I think

tools like Zoom need to act on that and display it properly, visually, across their platforms, because that's where users are going to see it. No one is going to look up keys someplace. Getting that authenticity registered and surfaced would be a huge step. I'm excited to see some of these things coming out of the woodwork.
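
(For the technically curious, a minimal sketch of the challenge/response flow Calvin and Kevin are describing, using Ed25519 signatures from Python's cryptography library. The Zoom lock icon and the directory layout are hypothetical product details; only the signing pattern itself is standard practice.)

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Kevin's "allow list": public keys the organization provisioned in advance,
# keyed by the identities you expect on the call.
directory: dict[str, Ed25519PublicKey] = {}

# Calvin's device holds the private key; only the public half is shared.
calvin_device_key = Ed25519PrivateKey.generate()
directory["calvin@example.com"] = calvin_device_key.public_key()  # hypothetical ID

# 1. The meeting platform issues a fresh random challenge for this join.
challenge = os.urandom(32)

# 2. The joining device signs the challenge with its private key.
signature = calvin_device_key.sign(challenge)

# 3. The platform verifies against the allow-listed public key before it
#    would show any "verified participant" badge.
def verify_participant(claimed_id: str, sig: bytes, chal: bytes) -> bool:
    public_key = directory.get(claimed_id)
    if public_key is None:        # not provisioned -> untrusted, full stop
        return False
    try:
        public_key.verify(sig, chal)
        return True
    except InvalidSignature:      # wrong key, or a tampered challenge
        return False

print(verify_participant("calvin@example.com", signature, challenge))   # True
print(verify_participant("ca1vin@example.com", signature, challenge))   # False: lookalike ID isn't provisioned
```

The fresh challenge is what stops a pre-recorded deepfake from replaying an old signature; possession of the device and its key becomes the thing an attacker has to steal, which is Kevin's point about falling back to a wrench attack.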

Kevin Nuest (27:33)
The difference between...

Yeah, think of the difference in some of our experiences, consumer experiences: the challenge of Discord usernames that looked roughly like what you thought they were going to be, and weren't, versus using public crypto wallet addresses, where you can display the first four and last four characters of the wallet address, or the ENS name tied to it, and have much higher confidence that this is what I expected

to see, and someone's not just spoofing a few characters or changing their display name to look like who I think they are.

Calvin (28:18)
Yeah, I mean, I think the only way out there is keys, for sure; you have to tie this to keys. But the organization probably needs to provision them, and there's a lot of nonsense to sort out. The interesting thing is, they also said, and I think it's still not clear, I don't think they've released all the information on it, how many of the participants on the call were faked and recorded in advance, versus actually live. I doubt it was live, right?

I mean, tools like HeyGen have introduced streaming avatars, right? So we're getting to that point; I don't think we're quite there yet, where they could have streamed the entire video. Some reports assume it was pre-recorded, and that there were actually just one or two fake colleagues on the call, plus other people who actively joined and were simply unknown. So the CFO was probably on the call thinking, yeah, I know these two guys; the other three I don't, but that's fine, right?

Kevin Nuest (29:18)
Mm -hmm.

Calvin (29:18)
So they must have used some fairly interesting mix to pull that off, and I'm curious what more comes out of that story. But yeah, it definitely leads us to the general ethical question of AI and how we interact with these machines going forward.

Kevin Nuest (29:40)
Yeah, continuing on, right: ethical boundaries and human forgiveness. There's an article I wanted to share a little bit about. DHH, one of the co-founders of 37signals, the software company behind Basecamp and now ONCE as well, posted an article called "it's easier to forgive a human than a robot." And it uses a very visceral example

of autonomous driving. So, Dave, what if I told you we could save 10,000 lives this year by reducing car accidents? Autonomous cars would only have to be 25% better than human drivers. Back in 2022, reportedly just about 43,000 people tragically died in car accidents. 43,000 people. So if

David DeVore (30:23)
Yeah, I'll take it.

Kevin Nuest (30:40)
everyone had an autonomous car, and it wouldn't even take everybody, and we assume the car only has to be 25% better than human drivers, we save 10,000 lives. That would be very meaningful to a lot of very specific people. If it was 50% better, it'd save 20,000 lives, right? But let's stick with the 25% number. So we're 25% better, we save 10,000 lives, but everybody's in

a self-driving car. Those self-driving cars now killed 33,000 people. Will you forgive the self-driving cars for the 33,000 people they tragically killed, even though we statistically saved 10,000? It sounds like a lot of people. If there were a headline that read "Tesla cars killed 33,000 people last year," ooh, stock price goes down.

David DeVore (31:16)
Right.

No.

Right.

Kevin Nuest (31:39)
Right. That's hard to swallow, but rationally it is phenomenal for society. So, all of that is the visceral example: it's easier to forgive a human than it is to forgive a robot. You expect perfection from the robot. So not 25% better, not 50% better, magnitudes better. It needs to be 10x better, it needs to be 100x better. And

we've even seen this play out already with some of the other autonomous car companies that have had a couple of different accidents in a couple of different ways, and they make headlines every single time. I don't have the exact stats in front of me, but let's call it a dozen headlines or something like that. I don't think there's an unreported

self-driving accident, right? So let's say pretty close to every single one gets reported, and I can think of about a dozen, while we're talking about 43,000 human-driver deaths in a year. So I don't know, that's a big hurdle to climb. But yeah, what does that mean? How do we leverage this technology without having such a binary decision in front of us?
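
(The rough arithmetic behind Kevin's numbers, as a quick sanity check. The 43,000 baseline matches NHTSA's estimate of roughly 42,800 US traffic deaths in 2022; the rest is just proportions.)

```python
deaths = 43_000  # approximate 2022 US traffic fatalities

for improvement in (0.25, 0.50):
    saved = deaths * improvement
    remaining = deaths - saved
    print(f"{improvement:.0%} better than human drivers: "
          f"~{saved:,.0f} lives saved, ~{remaining:,.0f} deaths now charged to the machines")

# 25% better than human drivers: ~10,750 lives saved, ~32,250 deaths now charged to the machines
# 50% better than human drivers: ~21,500 lives saved, ~21,500 deaths now charged to the machines
# i.e. the "save 10,000 / forgive 33,000" and "save 20,000" figures in the conversation.
```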

David DeVore (32:45)
Not gonna make it.

Well, first of all, it probably slows the roll on AI taking over everything, right? Which is ultimately, I think, the point of DHH's article: things are not going to change as quickly as the pundits want to claim, strictly because of humans. It's a human dilemma, not necessarily a technology dilemma.

Kevin Nuest (33:18)
Mm-hmm.

David DeVore (33:28)
And it's interesting. As Calvin knows, we've been deeply playing around with content, right? And

anybody who's played around in ChatGPT for a little while, or even Midjourney, has had this: oh wow, that's amazing. Yeah, but it's not quite there yet, right? And I think that our expectations

are so high at this stage. Especially after millions of dollars in Super Bowl ads just this past week; after the most amazing photography and brand creation and video creation and media creation for 50 years, even before the internet, right? I mean, we've been

creating really great, Ogilvy-level content for so long. Our expectations are really high for what great copywriting is, what great images are, what great video is, what great storytelling is. And...

Calvin (35:00)
especially when it comes to the big, broad public.

David DeVore (35:04)
That's right. Yeah. And so, even though, yeah, something might get you 95% of the way there,

Kevin Nuest (35:06)
Mm-hmm.

David DeVore (35:17)
it still is not good enough for our human brains to go: yeah, that's cool, that is worth consuming. And it's interesting, as we look out into the future, I think the trough of disillusionment is also going to look a lot like a

pendulum swing back to appreciation for human creatives, in all their forms. And that's kind of a beautiful thing, I think, as well.

Calvin (35:53)
100%.

Yeah. Well, the pendulum has to swing. You can't twist the fabric of reality that much and not expect it to bounce back a bit and strike at you.

David DeVore (36:07)
Yeah.

Kevin Nuest (36:17)
I want some artisanal human content for a premium.

Calvin (36:20)
Yeah, I watched Poor Things last night. Have you guys seen Poor Things, the movie? No? I would highly recommend having a look. It's from the same director who did The Lobster, which made big waves in, what is it, 2017? Might be wrong. But have a look, I can highly recommend it. It's with Emma Stone, Willem Dafoe, quite a high-profile cast. And what's super interesting about it is that it's incredibly artsy

David DeVore (36:20)
I mean, and so it's -

Kevin Nuest (36:27)
Not yet, no.

David DeVore (36:27)
Mm-mm.

Calvin (36:49)
and raw and wild, human artistic creation, but at the highest Hollywood level, and I've seen a lot more of that coming out. If you look at some of the recent Wes Anderson movies, like Asteroid City, or the Roald Dahl shorts series he did on Netflix, you see all of these have a lot more of that art-cinema vibe, like

1930s-ish, when movies first came around, this raw human essence of creation. You can really see this side movement that's almost anti-AI. And I saw a behind-the-scenes of how they did it, right? If you watch the movie, there's a ship on the wide ocean, and they didn't create it in post-production; they actually built an entire

curved 3D LED screen around the ship set to do the sky and the waves and whatnot. Wild. You can almost see this resurgence of, like you said, raw, rough creation. We're being fed so much of the beautiful stuff that AI creates, and now we're going back to: give me the raw human vibes. I think that's a very interesting trend, right? A fascinating trend.

David DeVore (38:07)
Mmm.

Yeah, I love that. I love watching design, like if you watch trends in design, right, typefaces and colors, oh, here are gradients, or here are retro mono typefaces and whatnot. It's really interesting to

watch the ebb and flow of design. And I think what you're calling out is, yeah, there's going to be a pushback against this perfect design, toward design that has some level of raw human watermark embedded in it, right? Like, hey, robot, watch this, right?

Calvin (39:00)
Yeah. So what's going to happen next, if you play this out a step? The interesting question here is, OK, it almost seems like it could go two ways. If you look at AI and humans, and I'm just going to paint a very black-and-white picture for a second: it could be this resurgence, like we just said, a resurgence of artistic creation, however you want to see it. Great.

Kevin Nuest (39:01)
Yeah.

Calvin (39:25)
It could also be that humans are retreating to the corner AI hasn't captured yet, and that corner keeps getting smaller and smaller. So you're looking at more raw, more rough content, because AI like Midjourney has been fed with so much highly professional, awesome photography. And now imagine what this does to the scene, right? You've kind of devalued professional photographers, but you've

David DeVore (39:31)
Right.

Calvin (39:51)
massively increased the value of authentic content, of me just on my phone running through the woods. It's just wild, right? We've spent the last decades trying to push quality to the highest possible level, selling our skills, and now AI gets me to: okay, you know what, F this, I'm just going to go and do the worst possible quality I can. So the question to you guys is: is AI going to follow suit? We've seen some examples with Meta,

David DeVore (39:58)
Yeah.

Let's bring it back down.

Kevin Nuest (40:13)
There's.

Calvin (40:19)
where they're trying to train the model on Stories content. So where's this going to go? Are we going to see AI catching up in that direction? Are we going to retreat further? Where's this going?

David DeVore (40:33)
It's interesting. I mean, we started out the conversation with deepfakes that are good enough that, with the right social engineering and Hollywood puppeteering, you can fool a CFO into giving up $25 million. On the flip side, it's not good enough for people

Kevin Nuest (40:33)
I can think of...

David DeVore (41:03)
at large to accept it as art, right? So we're in this strange place where it's both-and, right? I think it's going to take time. We're not there yet; we're still very early. But at some point you just

have to assume that the technology is going to

get better and better at storytelling, right?

Kevin Nuest (41:41)
Storytelling, there are maybe a couple of different aspects, and storytelling is part of it. But the admirable thing that humans like about other humans being successful, think about the documentaries that get made: it's about the people who master their craft. They've put in the time.

They've done it over years and decades, and you can see it in the output. We're talking about art, but think about sports, or any other type of craft, where you go: wow, that is high quality, because the person working on it has put in so much effort. Jiro Dreams of Sushi, right? Great documentary. I like sushi, I've never had his sushi, but I'd go: wow, that guy is a master craftsman of his work. Just sticking with

David DeVore (42:30)
Mm-hmm.

Kevin Nuest (42:34)
the art side of it, I think there's human connection too: I know that someone has put time into this, has mastered their craft. Historically, we've judged that with quality as the barometer, wow, that must have taken a lot of time, both directly and indirectly over a long period, again going back to visually glancing at something and trying to make a judgment call about it. In the other direction, the rawness, I think about

the photographer Drifter that we know, Drifter Shoots on Twitter. He climbs buildings and towers in just ridiculous situations and takes a picture of his Vans hanging off the edge as he dangles his feet off some multi-hundred-story building or tower. And why that's interesting is because I know

there's a human on the other side of that picture, a human on the other side of that video that wasn't generated by an AI going: what's the perspective from the top of the Sears Tower? There's life and limb being risked for that piece of content, right? To get that artwork. So again, it's a little bit of column A, mastery, and a little bit of column B, rawness, that makes it valuable, and different from if we trained a model on all of his work to date.

David DeVore (43:44)
Yeah.

Kevin Nuest (44:02)
We collectively on this team could absolutely give you another image where you'd go: yeah, that looks like one of his images. But the story behind it, the provenance of it, would not be that he climbed to the top of a building to take that picture, right? So it becomes boring at that point. Worthless.

Calvin (44:18)
So where exactly is the difference? My hunch is that the developers of models are probably going to shift more toward understanding what actually drives people's emotions. We've already talked about this, I think, on one of the previous podcasts: there are more and more studies on emotional analysis, going deeper into the fine-tuned details, because we

humans are so good at seeing micro-expressions in our faces, right? They're so micro that they're still really hard for an AI to get right. And I'm not talking about happy or sad; I'm talking about: oh, this one word that I used was off-putting. What we talked about before, that nuance of, it looks great, but there's this one little thing that's off, so I hate it, right? Like when I built these AI audiograms over the last couple of days, I would spend

David DeVore (44:53)
Mm.

Mm-hmm.

Calvin (45:07)
a lot of time really getting to that level. I'm fairly happy with the result we have now, but it took a lot of work to get to the point of producing something that feels like a human. I'm not saying it sounds like a human; I'm saying it feels like a human, which is a vast difference. Because humans aren't perfect, far from it. In spoken word, we make uneven pauses, we stutter, we repeat words,

Kevin Nuest (45:24)
Yep.

David DeVore (45:28)
No.

Calvin (45:35)
and not just ums and ahs; we do all sorts of things that make our speaking very inefficient, unless we're trained professional speakers. So the shift in AI that we could potentially see is the models moving a lot more toward that. Because in the end, let's look past the initial hype: for any AI company out there now that is producing content,

they do need to rely on the broader public and cross the chasm to where people feel the content and want to see it. So isn't the only way forward to have those models move closer to understanding what evokes emotions in us and what is off-putting, and react to that?

David DeVore (46:09)
Mm-hmm.

Kevin Nuest (46:20)
Yeah, that's interesting for sure. And a flavor of that, going back to the forgiveness question in the article itself: we've seen it done in the past with anthropomorphizing, right? Hey, take an object and make it look like a human. So, you know, WALL-E's got eyes and a mouth and moves like a human, as an example, to create emotion, like you're talking about, to

further mimic the parts that help our brains signal and go: okay, well, that's close enough, it's closer to one of us, it invokes emotion. And now I will be more receptive to its message. Now I will be more forgiving of the mistakes it may make, because it's expressing remorse, right? Even if it doesn't feel remorse, if I think it's expressing remorse, I would be more likely

David DeVore (47:12)
Okay.

Kevin Nuest (47:19)
to forgive it. So, Calvin, anything you can create in a model, be it audio or video, that expresses remorse in a way I would accept earns a lot of forgiveness, right?

Calvin (47:33)
Yeah, emotional undertones and emotional tags are among the highest-voted feature requests on ElevenLabs at the moment. Fun fact.

David DeVore (47:35)
Yeah.

I mean, we've been in this state for like 20 years, where we are driving into postmodernism, right? The idea around postmodernism, for those who don't know, is basically: when everything contains some level of truth, what do you trust? Right?

And as a society, we've been moving closer and closer in that direction. It has nothing to do with AI; it has to do with media and the abundance of ads, the abundance of messages, the abundance of politicians spinning this thing or that thing, the

overwhelming consumption of media. And not only that, but for the first time ever we can literally go to any country, anywhere, any culture, and see that they have a different point of view than we've had, right? And so it's interesting, because the result of all of that,

all that variety of messaging, all of that

proliferation of information, and consequently the human psyche going, well, what do I trust, what do I not trust? Ultimately, I think what we've seen with the digital natives, the kids, and we've seen this inside the communities that we've worked in, in props and whatnot, is that they don't trust anything. Literally, the

only thing they trust is "because Psych said so." It's trust based on person-to-person, which is a very different landscape from how humans have operated for the last hundred thousand years or so, right? Throughout our evolution, we'd be able to look around, and I had a

Calvin (49:54)
Yeah.

David DeVore (50:15)
community, and we all believed roughly the same thing, we all came from the same place, we all had a shared understanding of the world. Now humans' shared understanding of the world is shattered. And so the only thing that really matters is: here's a person that I trust, and what do they say? If they say it's cool, then it's cool. If they say it's not cool, then it's not cool. And it really...

Calvin (50:41)
That's kind of smart, going back to the Dunbar number, right? Staying below the 150 contacts around you, and that is your trust circle. That is what you fundamentally rely on, not the scale you're looking at, not the scaled reviews. We've seen that with testimonials and reviews and all of that, right? Now that everything can be fabricated, the person I will trust most is the one sitting next to me that I've known for 20 years, like in the past. In a weird way, the internet is becoming less trusted.

David DeVore (50:48)
Yeah.

Kevin Nuest (50:50)
Yep.

David DeVore (51:11)
It is.

Kevin Nuest (51:11)
And Dave, to your point, human history is, call it, 10,000 years old. I like Kurt's definition of that: collective human history is like 10,000 years. And in that 10,000 years, in just the last 20, we've moved from hyper-local tribes to the entire world being able to be part of your tribe. You can still only hold

150 people in your tribe, but they can be literally anyone. All throughout human history, up until very, very recently in the grand scheme, the people right by you, the ones you can look in the eyes, were your tribe, and you trusted them by default, to the greatest degree. Sure, a neighbor would break your trust now and then, but you trusted them by default. And why?

Because you couldn't afford not to; the penalty was too high, both for not trusting and for breaking trust, because you were in such close proximity. Even 20 years ago, it wasn't as bad as being completely banished from the tribe and sent off into the wilderness, but being banished from the neighborhood block party was the equivalent, the same thing. And now that's just not a thing anymore. Gen Z doesn't have the block party;

the tribe isn't the only or even the primary center of gravity of their trust circle. Or of our trust circle, either.

David DeVore (52:41)
Yeah.

And a lot of times, you don't even need to know their name. More and more, trust doesn't necessarily require knowing someone's name, or what they look like, or where they're from.

You know, this is sort of blockchain, right? I trust them because they have this history on chain, or this history on Twitter with X number of followers, or this history inside of Discord. That's why they're trusted. It doesn't matter where they're from; they could be from anywhere in the world.

We don't care as long as they got vibes.

Kevin Nuest (53:42)
Yeah. And the goal of blockchain and most protocols is literally to build something that's trustless as well. So even if you come for the vibes, you don't even have to trust the person, if you trust the protocol and the technology in between.

Calvin (53:43)
the vibes matter.

David DeVore (53:51)
Alright.

Calvin (53:59)
Yeah, if you trust the history of transactions. That's a very fair point. I think we touched on a lot of very exciting topics today. Maybe we can leave the audience with one question out of all the ethics we've discussed. I'd be super curious: where is AI ethics taking us? How are we going to bring it to a level where

we can make use of the power of AI without scaring people away? I think that's one of the big things I'm seeing here, because of all the danger alongside the potential. Any last thoughts from you guys to add?

David DeVore (54:38)
Yeah, it's really interesting.

Over and over again, what we're seeing is AI slowly weaving its tentacles into many, many products, right? And we've said this before: the best AI doesn't look like AI, right? You don't really...

Calvin (55:07)
Yeah, isn't called AI. Yeah.

David DeVore (55:10)
It's not called AI. All it does is make for an interesting, useful experience. And so it's going to be really interesting, as the progression keeps moving forward, where the lines of human trust fall

in whatever the experience is, and where people end up going: oh yeah, here, take my $25 million. Whether that's humans making the call or technology making the call, I think it's increasingly going to be harder and harder for humans to make the differentiation.

And the danger in all of that is humans ultimately trusting nothing and becoming more and more tribalistic, in kind of dangerous ways. We've already seen a bit of that in politics. So, yeah, I wish I had a rosy picture to paint, but I'm not sure that I do.

Calvin (56:30)
I'm rosey up.

Kevin Nuest (56:34)
I would echo some of that as well. Just like everything else, take your security into your own hands to help mitigate some risks. So, for real: everybody go to your family members and set up a password, a verbal password you've never said or written down anywhere else. Go do that this weekend; that's your homework. But I'd ask that everyone keep an open mind about how and where

the technology could be applied. And just like a lot of other areas of societal change, try to do first-principles thinking and research on it. I think the autonomous driving example from DHH is a really great one. Rational, statistical, first-principles thinking says this is a no-brainer; we've got to make this happen, like, yesterday. And

every year that we don't, every two years that we don't, we're another hundred thousand people down on this planet, with those experiences stolen. So ask the first-principles questions, then ask how you and others can help implement that societal change. Be the change you want to see in the world, and echo it out there, be one of the voices helping that spread.

Calvin (57:58)
Perfect. I think that's a great note to end on. Thanks, everyone, for tuning in. That was another episode of the What If We Could show. As always, if you want the news on a day-to-day basis, head over to meshmesh.io and subscribe to the newsletter. You'll get a daily AI-powered newsletter with all the things that excite us, and hopefully you too. See you next time.

David DeVore (58:24)
Thank you, thank you. Happy Valentine's Day. Bye.

Kevin Nuest (58:26)
Thanks.

Calvin (58:27)
Yeah, happy Valentine's.